RHAIENG-2460: Update x86_64 to amd64 #2778
base: main
Conversation
Walkthrough
Changes
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Possibly related issues
Pre-merge checks and finishing touches: ❌ Failed checks (1 warning), ✅ Passed checks (2 passed)
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
.tekton/odh-workbench-jupyter-trustyai-cpu-py312-ubi9-pull-request.yaml (2)
2-2: Update the generator script instead of manually editing generated files. This file is autogenerated by scripts/generate_pull_request_pipelineruns.py. Manual edits will be lost when the script runs next. Update the generator script to replace build_platforms = ["linux/x86_64"] with build_platforms = ["linux/amd64"] (around line 93), then regenerate all affected pipeline files.
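A minimal sketch of that update-and-regenerate flow, assuming the quoted assignment appears verbatim in the script and that running it rewrites the .tekton/ files in place (both are assumptions):

#!/bin/bash
# Hypothetical flow: patch the generator default, regenerate, then confirm
# no stale identifiers remain anywhere in the generated files
sed -i 's|build_platforms = \["linux/x86_64"\]|build_platforms = ["linux/amd64"]|' \
  scripts/generate_pull_request_pipelineruns.py
python scripts/generate_pull_request_pipelineruns.py
rg -n 'linux/x86_64' .tekton/ && echo "stale identifiers remain" || echo "clean"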
16-16: Remove non-existent paths from the CEL trigger condition. Line 16 includes pathChanged() checks for jupyter/datascience/ubi9-python-3.12/mongodb-org-6.0.repo-x86_64/** and jupyter/datascience/ubi9-python-3.12/mssql-2022.repo-x86_64/**, but these paths do not exist in the repository. These appear to be RPM configuration files referenced in the documentation tutorial, not actual repository files. Either remove these non-existent path checks from the trigger condition, or, if they are genuinely needed, ensure these directories and files are committed to the repository.
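Before editing the trigger, a quick existence check for the two paths quoted above is enough to confirm the comment's claim:

#!/bin/bash
# Check whether the paths referenced by the CEL trigger actually exist
for p in \
  "jupyter/datascience/ubi9-python-3.12/mongodb-org-6.0.repo-x86_64" \
  "jupyter/datascience/ubi9-python-3.12/mssql-2022.repo-x86_64"; do
  [ -e "$p" ] && echo "exists: $p" || echo "missing: $p"
done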
.tekton/odh-base-image-cuda-py311-c9s-push.yaml (1)
3-3: Remove debug comment. The #test1 comment appears to be a debug artifact left from development. Apply this diff to remove it:
-#test1
.tekton/odh-base-image-cuda-12-8-py312-c9s-push.yaml (1)
3-3: Remove debug comment. The #test2 comment appears to be a debug artifact left from development. Apply this diff to remove it:
-#test2
♻️ Duplicate comments (7)
.tekton/odh-base-image-cuda-py312-ubi9-push.yaml (1)
29-32: Same platform identifier update. This change is consistent with the repository-wide standardization to linux/amd64.
.tekton/odh-base-image-rocm-6-4-py312-ubi9-pull-request.yaml (1)
36-38: Platform identifier updated. Consistent with the repository-wide change to use the canonical linux/amd64 identifier.
.tekton/multiarch-push-pipeline.yaml (1)
114-118: Default platform identifier updated in pipeline definition. This changes the default value for the build-platforms parameter that will be used by all PipelineRuns referencing this pipeline. The change standardizes on linux/amd64, which is the canonical identifier.
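To see which PipelineRuns actually inherit this default (anything referencing the pipeline without overriding build-platforms), a small sketch:

#!/bin/bash
# Split PipelineRuns that reference the multiarch push pipeline into those
# overriding build-platforms and those inheriting the pipeline default
for f in $(rg -l 'multiarch-push-pipeline' .tekton/ --glob '*.yaml'); do
  if rg -q 'build-platforms' "$f"; then
    echo "overrides default: $f"
  else
    echo "inherits default:  $f"
  fi
done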
.tekton/odh-pipeline-runtime-pytorch-llmcompressor-cuda-py312-ubi9-pull-request.yaml (1)
36-38: Platform identifier standardized.
.tekton/odh-base-image-rocm-py312-ubi9-pull-request.yaml (1)
36-38: Consistent platform identifier update.
.tekton/multiarch-pull-request-pipeline.yaml (1)
92-96: Default platform updated in pull-request pipeline. Consistent with the push pipeline, this standardizes the default platform identifier to linux/amd64.
.tekton/odh-workbench-rstudio-minimal-cpu-py312-c9s-pull-request.yaml (1)
36-38: Final platform identifier update.
🧹 Nitpick comments (1)
.tekton/odh-workbench-jupyter-minimal-rocm-py312-ubi9-pull-request.yaml (1)
36-38: The linux/amd64 platform string is valid for Tekton/Konflux multiarch builds. The linux/amd64 format is a standard platform string used in Tekton Task definitions for multiarch builds, and Red Hat documentation treats x86_64 and amd64 as equivalent designations for the same platform. The change is architecturally sound. However, since the PR notes this change aims to "take advantage of larger available resources," verify that:
- The build infrastructure routes linux/amd64 builds to the intended resource pool
- At least one pipeline run completes successfully with this platform string to confirm the resource routing change works as expected (see the sketch after this list)
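A minimal verification sketch for the second point, assuming cluster access and the tkn CLI; the namespace name is a placeholder:

#!/bin/bash
# Hypothetical check: confirm a recent PipelineRun succeeded after the change.
# NAMESPACE is a placeholder; adjust to your Konflux tenant namespace.
NAMESPACE=my-tenant
tkn pipelinerun list -n "$NAMESPACE" --limit 5
# Show the build-platforms parameter each recent run actually used
kubectl get pipelinerun -n "$NAMESPACE" \
  -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.params[?(@.name=="build-platforms")].value}{"\n"}{end}'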
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (43)
- .tekton/multiarch-pull-request-pipeline.yaml (1 hunks)
- .tekton/multiarch-push-pipeline.yaml (1 hunks)
- .tekton/odh-base-image-cpu-py312-c9s-pull-request.yaml (1 hunks)
- .tekton/odh-base-image-cpu-py312-c9s-push.yaml (1 hunks)
- .tekton/odh-base-image-cpu-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-base-image-cpu-py312-ubi9-push.yaml (1 hunks)
- .tekton/odh-base-image-cuda-12-8-py312-c9s-pull-request.yaml (1 hunks)
- .tekton/odh-base-image-cuda-12-8-py312-c9s-push.yaml (1 hunks)
- .tekton/odh-base-image-cuda-12-8-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-base-image-cuda-12-8-py312-ubi9-push.yaml (1 hunks)
- .tekton/odh-base-image-cuda-py311-c9s-pull-request.yaml (1 hunks)
- .tekton/odh-base-image-cuda-py311-c9s-push.yaml (1 hunks)
- .tekton/odh-base-image-cuda-py312-c9s-pull-request.yaml (1 hunks)
- .tekton/odh-base-image-cuda-py312-c9s-push.yaml (1 hunks)
- .tekton/odh-base-image-cuda-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-base-image-cuda-py312-ubi9-push.yaml (1 hunks)
- .tekton/odh-base-image-rocm-6-3-py312-c9s-pull-request.yaml (1 hunks)
- .tekton/odh-base-image-rocm-6-3-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-base-image-rocm-6-4-py312-c9s-pull-request.yaml (1 hunks)
- .tekton/odh-base-image-rocm-6-4-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-base-image-rocm-py312-c9s-pull-request.yaml (1 hunks)
- .tekton/odh-base-image-rocm-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-pipeline-runtime-datascience-cpu-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-pipeline-runtime-minimal-cpu-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-pipeline-runtime-pytorch-cuda-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-pipeline-runtime-pytorch-llmcompressor-cuda-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-pipeline-runtime-pytorch-rocm-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-pipeline-runtime-tensorflow-cuda-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-pipeline-runtime-tensorflow-rocm-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-workbench-jupyter-datascience-cpu-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-workbench-jupyter-minimal-cpu-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-workbench-jupyter-minimal-cuda-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-workbench-jupyter-minimal-rocm-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-workbench-jupyter-pytorch-cuda-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-workbench-jupyter-pytorch-llmcompressor-cuda-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-workbench-jupyter-pytorch-rocm-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-workbench-jupyter-tensorflow-rocm-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-workbench-jupyter-trustyai-cpu-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-workbench-rstudio-minimal-cpu-py312-c9s-pull-request.yaml (1 hunks)
- .tekton/odh-workbench-rstudio-minimal-cpu-py312-rhel9-pull-request.yaml (1 hunks)
- .tekton/odh-workbench-rstudio-minimal-cuda-py312-c9s-pull-request.yaml (1 hunks)
- .tekton/odh-workbench-rstudio-minimal-cuda-py312-rhel9-pull-request.yaml (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (28)
- GitHub Check: Red Hat Konflux / odh-base-image-rocm-py312-c9s-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-trustyai-cpu-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-tensorflow-rocm-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-cuda-py312-c9s-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-rstudio-minimal-cpu-py312-rhel9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-cuda-12-8-py312-c9s-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-rocm-6-3-py312-c9s-on-pull-request
- GitHub Check: Red Hat Konflux / odh-pipeline-runtime-pytorch-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-cuda-py311-c9s-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-pytorch-llmcompressor-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-rstudio-minimal-cuda-py312-rhel9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-minimal-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-rstudio-minimal-cpu-py312-c9s-on-pull-request
- GitHub Check: Red Hat Konflux / odh-pipeline-runtime-pytorch-rocm-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-pipeline-runtime-tensorflow-rocm-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-rocm-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-pipeline-runtime-tensorflow-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-pytorch-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-pipeline-runtime-pytorch-llmcompressor-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-rocm-6-4-py312-c9s-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-datascience-cpu-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-rocm-6-3-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-rocm-6-4-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-minimal-rocm-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-rstudio-minimal-cuda-py312-c9s-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-cpu-py312-ubi9-on-pull-request
🔇 Additional comments (30)
.tekton/odh-pipeline-runtime-pytorch-rocm-py312-ubi9-pull-request.yaml (1)
36-38: Good change: standardizes to correct Docker/OCI platform identifier. The update from linux/x86_64 to linux/amd64 is correct. linux/amd64 is the standard Docker/OCI platform string for 64-bit x86 architecture, while linux/x86_64 is non-standard in container contexts. This change should be functionally equivalent but uses the proper identifier. However, since this affects CI/CD infrastructure across 43 pipeline files, please ensure:
- The multiarch-pull-request-pipeline (line 46) correctly recognizes and supports the linux/amd64 platform
- Build infrastructure resources are properly allocated for this platform identifier
- Test builds complete successfully with this change (as noted in the PR checklist: "Developer must manually test and verify the changes")
Run the following script to verify the multiarch-pull-request-pipeline definition supports the linux/amd64 platform:
#!/bin/bash
# Description: Verify multiarch-pull-request-pipeline supports linux/amd64 platform
# Search for the pipeline definition and any platform-related configuration
rg -n -A 10 'kind: Pipeline' | rg -A 10 'multiarch-pull-request-pipeline'
# Check for any platform references in Tekton configuration
rg -n 'linux/amd64|linux/x86_64|build-platforms' .tekton/
.tekton/odh-pipeline-runtime-pytorch-cuda-py312-ubi9-pull-request.yaml (1)
36-38: Clarify how the platform name change provides larger resources and verify the generator script change. The change from linux/x86_64 to linux/amd64 aligns with Docker's canonical platform naming convention; both normalize to the same TARGETPLATFORM identifier. However:
- Resource allocation benefit: The PR description claims to "take advantage of larger available resources," but since both platform names refer to the same x86-64 architecture, the mechanism by which this change provides access to larger resources remains unclear. Please explain how this platform naming change affects resource allocation in your build infrastructure.
- Generator script verification: Line 2 indicates this file is autogenerated by scripts/generate_pull_request_pipelineruns.py. Please confirm the change was made in the generator script to ensure consistency across all autogenerated files.
- Testing: The PR testing checklist is unchecked. Please verify this change has been tested on the multiarch-pull-request-pipeline.
.tekton/odh-workbench-jupyter-tensorflow-rocm-py312-ubi9-pull-request.yaml (1)
36-38: Platform identifier standardization; verify resource allocation rationale. The change from linux/x86_64 to linux/amd64 aligns with OCI/Docker conventions, where linux/amd64 is the standard platform identifier for x86-64 systems. Both identifiers refer to the same 64-bit x86 architecture, and the technical change is valid. However, the PR description claims this enables "larger available resources," which is misleading. In Konflux, resource allocation is controlled via taskRunSpecs in PipelineRun files, not platform identifiers. Platform identifiers specify which architecture to build for, not resource allocation (a quick scan for such overrides follows the list below). Verify:
- Whether the infrastructure requires this specific identifier format (amd64 vs x86_64)
- The actual mechanism providing the claimed resource increase—if not via platform identifiers, update the PR description to reflect the true reason
- Consistency of this change across all 43 files mentioned in the PR
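Regarding the taskRunSpecs point above, a quick scan shows whether any PipelineRun in this repository sets such overrides at all (a sketch):

#!/bin/bash
# If no PipelineRun sets taskRunSpecs or computeResources, the claimed
# resource increase cannot be coming from those fields
rg -n 'taskRunSpecs|computeResources' .tekton/*.yaml || echo "no overrides found"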
.tekton/odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-pull-request.yaml (1)
36-39: Verify platform naming consistency across build-platforms. The change from linux/x86_64 to linux/amd64 is correct and aligns with Docker's standard naming convention. However, there's a naming inconsistency that should be clarified: line 38 uses the standard format linux/amd64, while line 39 uses linux-d160-m2xlarge/arm64 with a machine type qualifier. Since this custom format doesn't appear elsewhere in the repository, confirm:
- Whether linux-d160-m2xlarge/arm64 is the correct platform identifier for your Konflux configuration
- Whether the amd64 platform should also specify a machine type to align with the PR goal of accessing "larger available resources"
Ensure both platforms follow your Konflux instance's expected format for multi-platform builds.
.tekton/odh-pipeline-runtime-tensorflow-rocm-py312-ubi9-pull-request.yaml (1)
36-38: LGTM: Platform naming standardized. The update to linux/amd64 is correct and consistent with the Docker/OCI standard platform nomenclature.
.tekton/odh-base-image-rocm-6-3-py312-ubi9-pull-request.yaml (1)
36-38: LGTM: Platform identifier correctly updated. The change to linux/amd64 is appropriate and maintains consistency with Docker platform standards.
.tekton/odh-base-image-cuda-py312-ubi9-pull-request.yaml (1)
36-39: LGTM: Multi-platform configuration correctly updated. The update to linux/amd64 is correct while preserving the linux/arm64 platform for multi-architecture builds.
.tekton/odh-workbench-rstudio-minimal-cpu-py312-rhel9-pull-request.yaml (1)
38-40: LGTM: Platform naming updated correctly. The change to linux/amd64 follows the standard Docker platform naming convention.
.tekton/odh-base-image-cpu-py312-ubi9-push.yaml (1)
31-34: LGTM: Push pipeline platform configuration updated correctly. The change to linux/amd64 is correct and maintains the multi-architecture build configuration with linux/arm64.
.tekton/odh-base-image-cuda-py311-c9s-pull-request.yaml (1)
36-39: LGTM: Multi-architecture build configuration updated. The platform identifier change to linux/amd64 is correct while preserving the multi-arch build with linux/arm64.
.tekton/odh-pipeline-runtime-minimal-cpu-py312-ubi9-pull-request.yaml (1)
36-41: LGTM: Comprehensive multi-architecture build configuration updated. The platform identifier change to linux/amd64 is correct and properly maintains all other platforms (linux/arm64, linux/ppc64le, linux/s390x) for this comprehensive multi-architecture build.
.tekton/odh-workbench-jupyter-pytorch-llmcompressor-cuda-py312-ubi9-pull-request.yaml (1)
36-38: LGTM: Platform identifier correctly updated. The build-platforms change to linux/amd64 is correct and follows Docker standards. Note: Line 16 contains path patterns referencing "x86_64" in directory names (mongodb-org-6.0.repo-x86_64/**, mssql-2022.repo-x86_64/**), but these are repository path names, not platform identifiers, so they appropriately remain unchanged.
.tekton/odh-base-image-cpu-py312-ubi9-pull-request.yaml (1)
37-40: LGTM: Multi-arch build platforms correctly updated. The change to linux/amd64 is correct, and the linux/arm64 platform is appropriately preserved for multi-architecture CPU builds.
.tekton/odh-workbench-jupyter-datascience-cpu-py312-ubi9-pull-request.yaml (1)
36-40: LGTM: Multi-arch configuration correctly updated. The platform identifier change to linux/amd64 is correct, with other architectures (linux/ppc64le, linux/s390x) appropriately preserved for comprehensive CPU build coverage.
.tekton/odh-base-image-cuda-py312-c9s-push.yaml (1)
29-32: LGTM: Platform identifier updated correctly. The change to linux/amd64 is correct for this push pipeline, with linux/arm64 appropriately maintained for multi-arch CUDA support.
.tekton/odh-workbench-jupyter-pytorch-cuda-py312-ubi9-pull-request.yaml (1)
36-38: LGTM: Platform identifier correctly standardized. The update to linux/amd64 is correct for this CUDA workload pipeline.
.tekton/odh-pipeline-runtime-datascience-cpu-py312-ubi9-pull-request.yaml (1)
36-40: LGTM: Multi-platform build configuration updated correctly. The platform change to linux/amd64 is correct, maintaining support for additional architectures (linux/ppc64le, linux/s390x) for CPU-based runtime builds.
.tekton/odh-base-image-rocm-py312-c9s-pull-request.yaml (1)
36-38: LGTM: Platform identifier appropriately updated. The change to linux/amd64 is correct for ROCm (AMD GPU) workloads, which are specific to the amd64 architecture.
.tekton/odh-base-image-cuda-12-8-py312-ubi9-push.yaml (1)
29-32: LGTM: Platform identifier standardization is correct and consistent. The change updates linux/x86_64 to linux/amd64, the canonical platform identifier in Docker and OCI specifications. Both refer to the same x86-64 architecture, but linux/amd64 is the standardized form. This change has been consistently applied across all 20+ relevant .tekton files in the repository.
.tekton/odh-base-image-cuda-py311-c9s-push.yaml (1)
29-32: Platform change looks good. The update from linux/x86_64 to linux/amd64 correctly aligns with Docker/OCI naming standards.
.tekton/odh-base-image-rocm-6-3-py312-c9s-pull-request.yaml (1)
36-38: Platform change is correct. The update to linux/amd64 follows Docker/OCI standards. Since this file is autogenerated (line 2), ensure the generator script was also updated.
.tekton/odh-workbench-rstudio-minimal-cuda-py312-c9s-pull-request.yaml (1)
36-38: LGTM! The platform standardization to linux/amd64 is appropriate.
.tekton/odh-base-image-cuda-12-8-py312-c9s-push.yaml (1)
29-32: Platform change looks good. The update to linux/amd64 is correct and consistent with the PR objectives.
.tekton/odh-base-image-rocm-6-4-py312-c9s-pull-request.yaml (1)
36-38: LGTM! The platform update is consistent with the PR-wide standardization effort.
.tekton/odh-base-image-cuda-12-8-py312-ubi9-pull-request.yaml (1)
36-39: LGTM! The platform update is appropriate and maintains multi-arch build support.
.tekton/odh-base-image-cuda-py312-c9s-pull-request.yaml (1)
36-39: Change aligns with standard platform nomenclature. The update to linux/amd64 is correct and consistent with the broader PR changes.
.tekton/odh-base-image-cpu-py312-c9s-push.yaml (1)
35-40: Platform identifier standardization is correct. The change to linux/amd64 follows standard Docker/OCI conventions and is consistent with the other pipeline updates in this PR.
.tekton/odh-base-image-cpu-py312-c9s-pull-request.yaml (2)
37-42: Platform update is correct and consistent. The standardization to linux/amd64 is appropriate and aligns with Docker/OCI platform naming standards.
1-51: Complete PR testing checklist before merging. According to the PR objectives, the testing checklist items are currently unchecked:
- Run make test before asking for review
- Developer must manually test and verify the changes
Given that this change affects 43 pipeline files and impacts build infrastructure routing, please ensure these items are completed and the pipelines are tested in a non-production environment first.
.tekton/odh-workbench-jupyter-minimal-cpu-py312-ubi9-pull-request.yaml (1)
36-39: Update the generator script to use linux/amd64 instead of linux/x86_64. The standard Docker/OCI platform identifier for x86-64 systems is linux/amd64, and the Tekton YAML files correctly reflect this change. However, the autogeneration script still hardcodes linux/x86_64 at line 143 of scripts/generate_pull_request_pipelineruns.py. This will cause the Tekton files to revert to the old identifier if the generator is run again. Update:
- Line 143: build_platforms = ["linux/x86_64"] → build_platforms = ["linux/amd64"]
- Line 51 type annotation: remove "linux/x86_64" from the allowed literals
⛔ Skipped due to learnings
Learnt from: jiridanek
Repo: opendatahub-io/notebooks PR: 2180
File: .tekton/odh-base-image-cuda-py312-ubi9-pull-request.yaml:36-39
Timestamp: 2025-08-28T12:42:23.404Z
Learning: Konflux build system in opendatahub-io/notebooks repository requires "linux/x86_64" platform identifier format for Tekton PipelineRun build-platforms parameter, not the OCI-standard "linux/amd64". This is consistently used across all .tekton/ pipeline files and enforced by scripts/generate_pull_request_pipelineruns.py type definitions. Docker/buildah contexts use "linux/amd64" while Konflux/Tekton contexts use "linux/x86_64".
  - name: build-platforms
    value:
-     - linux/x86_64
+     - linux/amd64
      - linux-d160-m2xlarge/arm64
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Examine the autogeneration script to understand platform handling
cat -n scripts/generate_pull_request_pipelineruns.py | head -100
Repository: opendatahub-io/notebooks
Length of output: 4339
🏁 Script executed:
#!/bin/bash
# Search for platform configuration in Tekton/Konflux configs
find . -type f \( -name "*.yaml" -o -name "*.yml" -o -name "*.py" \) | xargs rg -l "platform|amd64|x86_64" | grep -E "(konflux|tekton|config)" | head -20
Repository: opendatahub-io/notebooks
Length of output: 1574
🏁 Script executed:
#!/bin/bash
# Get complete view of the generator script, focusing on how platforms are used
wc -l scripts/generate_pull_request_pipelineruns.py
Repository: opendatahub-io/notebooks
Length of output: 117
🏁 Script executed:
#!/bin/bash
# Look for where build_platforms are used in the generator script
cat -n scripts/generate_pull_request_pipelineruns.py | tail -200
Repository: opendatahub-io/notebooks
Length of output: 8731
🏁 Script executed:
#!/bin/bash
# Search for where the YAML parameters section builds the platform list
rg -n "build-platforms|build_platforms" scripts/generate_pull_request_pipelineruns.py -A 5 -B 2Repository: opendatahub-io/notebooks
Length of output: 2829
🏁 Script executed:
#!/bin/bash
# Check if the specific component is mentioned in the generator script
rg -n "odh-base-image-cuda-12-8-py312-c9s" scripts/generate_pull_request_pipelineruns.pyRepository: opendatahub-io/notebooks
Length of output: 50
🏁 Script executed:
#!/bin/bash
# Search for any base-image-cuda components in the generator script
rg -n "odh-base-image-cuda" scripts/generate_pull_request_pipelineruns.pyRepository: opendatahub-io/notebooks
Length of output: 203
🏁 Script executed:
#!/bin/bash
# Check if there's a corresponding -push.yaml file for this component
find .tekton -name "*odh-base-image-cuda-12-8-py312-c9s*"
Repository: opendatahub-io/notebooks
Length of output: 181
🏁 Script executed:
#!/bin/bash
# Check the corresponding push file to understand the baseline
cat -n .tekton/odh-base-image-cuda-12-8-py312-c9s-push.yaml | head -50
Repository: opendatahub-io/notebooks
Length of output: 2068
🏁 Script executed:
#!/bin/bash
# Check if there's a build-platforms parameter in the push file
rg -n "build-platforms|build_platforms" .tekton/odh-base-image-cuda-12-8-py312-c9s-push.yaml -A 5Repository: opendatahub-io/notebooks
Length of output: 208
Update the autogeneration script to emit correct platform nomenclature.
The generator script (scripts/generate_pull_request_pipelineruns.py) has not been updated to match the platform changes in this PR. Line 143 still defaults to ["linux/x86_64"] instead of ["linux/amd64"], and there is no handling for the linux-d160-m2xlarge/arm64 variant used here. If the generator is rerun, it will overwrite the correct platforms with outdated values. Update the script to emit linux/amd64 and add the component "odh-base-image-cuda-py312-c9s" to the appropriate platform configuration blocks.
Additionally, the platform strings (linux-extra-fast/amd64, linux-m2xlarge/arm64, linux-d160-m2xlarge/arm64) in the script appear to map to different runner pools with varying resource allocations. Ensure any platform changes are consistently applied across both push and pull-request pipeline generators.
🤖 Prompt for AI Agents
.tekton/odh-base-image-cuda-12-8-py312-c9s-pull-request.yaml lines 36-39: the
project generator scripts still emit old platform names and miss the new arm64
variant and component mapping; update
scripts/generate_pull_request_pipelineruns.py to change the default platform
list at/around line 143 from ["linux/x86_64"] to ["linux/amd64"], add handling
for the linux-d160-m2xlarge/arm64 variant so it emits that exact platform
string, and ensure the component "odh-base-image-cuda-py312-c9s" is added into
the correct platform configuration blocks; also propagate corresponding platform
string changes (linux-extra-fast/amd64, linux-m2xlarge/arm64,
linux-d160-m2xlarge/arm64) into both push and pull-request pipeline generator
code paths so push and PR outputs remain consistent.
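One way to catch the push/PR generator drift described above is to compare the platform strings each side contains; a sketch, assuming a single generator script under scripts/:

#!/bin/bash
# Compare platform strings hardcoded in the generator with those present in
# the committed .tekton files; any one-sided entry indicates drift
rg --no-filename -o 'linux[-a-z0-9]*/(amd64|arm64|x86_64|ppc64le|s390x)' \
  scripts/generate_pull_request_pipelineruns.py | sort -u > /tmp/gen-platforms.txt
rg --no-filename -o 'linux[-a-z0-9]*/(amd64|arm64|x86_64|ppc64le|s390x)' \
  .tekton/*.yaml | sort -u > /tmp/yaml-platforms.txt
diff -u /tmp/gen-platforms.txt /tmp/yaml-platforms.txt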
  - name: build-platforms
    value:
-     - linux/x86_64
+     - linux/amd64
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Verify the review comment claims
# 1. Check if the YAML file exists and show its autogenerated comment and build-platforms section
echo "=== Checking the YAML file ==="
if [ -f ".tekton/odh-pipeline-runtime-tensorflow-cuda-py312-ubi9-pull-request.yaml" ]; then
echo "File found. Showing lines 1-40:"
head -40 ".tekton/odh-pipeline-runtime-tensorflow-cuda-py312-ubi9-pull-request.yaml"
else
echo "File not found"
fi
# 2. Check if generator script exists
echo -e "\n=== Checking generator script ==="
if [ -f "scripts/generate_pull_request_pipelineruns.py" ]; then
echo "Generator script found"
echo "Checking for x86_64 references:"
grep -n "x86_64" scripts/generate_pull_request_pipelineruns.py || echo "No x86_64 found"
echo -e "\nChecking for amd64 references:"
grep -n "amd64" scripts/generate_pull_request_pipelineruns.py || echo "No amd64 found"
echo -e "\nChecking for build-platforms references:"
grep -n "build-platforms" scripts/generate_pull_request_pipelineruns.py || echo "No build-platforms found"
else
echo "Generator script not found"
fi
# 3. List .tekton directory to see related files
echo -e "\n=== Contents of .tekton directory ==="
ls -la .tekton/ 2>/dev/null | head -20 || echo ".tekton directory not found"
Repository: opendatahub-io/notebooks
Length of output: 4303
🏁 Script executed:
#!/bin/bash
# Examine the generator script more carefully to understand the logic
echo "=== Checking generator script context around lines 140-190 ==="
sed -n '135,195p' scripts/generate_pull_request_pipelineruns.py
echo -e "\n=== Checking if there are any configuration files that might determine platform ==="
find . -name "*.env" -o -name "*.yaml" -o -name "*.yml" | grep -E "(config|param)" | head -10Repository: opendatahub-io/notebooks
Length of output: 2888
🏁 Script executed:
#!/bin/bash
# Double-check what the generator script would output for this specific component
echo "=== Verify component name in generator script ==="
grep -n "odh-pipeline-runtime-tensorflow-cuda-py312-ubi9" scripts/generate_pull_request_pipelineruns.py
echo -e "\n=== Verify the default and extension lists for this component ==="
sed -n '140,170p' scripts/generate_pull_request_pipelineruns.py
Repository: opendatahub-io/notebooks
Length of output: 1274
🏁 Script executed:
#!/bin/bash
# Final verification: confirm this component is NOT in any special case list
echo "=== All special case components in generator script ==="
echo "Case 1 - linux-extra-fast/amd64:"
sed -n '142,144p' scripts/generate_pull_request_pipelineruns.py | grep -A 1 "if component"
echo -e "\nCase 2 - extends linux/arm64:"
sed -n '148,152p' scripts/generate_pull_request_pipelineruns.py | grep -A 4 "if component"
echo -e "\nSearching entire script for 'tensorflow-cuda-py312-ubi9':"
grep -c "tensorflow-cuda-py312-ubi9" scripts/generate_pull_request_pipelineruns.pyRepository: opendatahub-io/notebooks
Length of output: 248
🏁 Script executed:
#!/bin/bash
# Final check: what is the single match for tensorflow-cuda-py312-ubi9?
echo "=== Finding the single match for tensorflow-cuda-py312-ubi9 ==="
grep -n "tensorflow-cuda-py312-ubi9" scripts/generate_pull_request_pipelineruns.py
echo -e "\n=== Confirming default output for any component not in special lists ==="
sed -n '140,144p' scripts/generate_pull_request_pipelineruns.py
Repository: opendatahub-io/notebooks
Length of output: 383
Generator script still outputs incorrect platform naming and will overwrite the YAML change.
The change to linux/amd64 in the YAML file is correct: the OCI-standard name is linux/amd64, not linux/x86_64.
However, the generator script at line 143 still defaults to build_platforms = ["linux/x86_64"]. Since odh-pipeline-runtime-tensorflow-cuda-py312-ubi9 is not in any of the special-case component lists (lines 148-176), it will use this default and regenerate the YAML with the old, non-standard naming. The manual YAML edit must be accompanied by updating the generator script to use linux/amd64 instead of linux/x86_64 for consistency.
🤖 Prompt for AI Agents
In .tekton/odh-pipeline-runtime-tensorflow-cuda-py312-ubi9-pull-request.yaml
lines 36-38 and in the generator script around lines 143-176, the pipeline
platforms are inconsistent: update the generator script default build_platforms
from "linux/x86_64" to the OCI-standard "linux/amd64" so generated YAMLs match
your manual edit; search the generator for any other occurrences of
"linux/x86_64" and replace them, keep the special-case component lists intact,
and run the generator to verify the YAML now contains "linux/amd64".
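A regeneration round-trip makes this easy to confirm locally; a sketch, assuming the generator rewrites the .tekton/ files in place:

#!/bin/bash
# If the generator still emits the old identifier, the working tree will show
# the YAML flipping back to linux/x86_64 after regeneration
python scripts/generate_pull_request_pipelineruns.py
git diff --stat .tekton/
git diff .tekton/ | rg 'linux/(x86_64|amd64)' | head -20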
  - name: build-platforms
    value:
-     - linux/x86_64
+     - linux/amd64
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Find the generation script and check for platform identifier usage
# Look for the generation script mentioned in the review
echo "=== Searching for generate_pull_request_pipelineruns.py ==="
fd "generate_pull_request_pipelineruns.py" 2>/dev/null || echo "File not found"
# Check for related scripts
echo ""
echo "=== Searching for Python scripts in scripts/ directory ==="
fd -t f "\.py$" "scripts/" 2>/dev/null | head -20
# Check the .tekton file mentioned in the review to verify it's autogenerated
echo ""
echo "=== Checking if .tekton/odh-workbench-jupyter-minimal-cuda-py312-ubi9-pull-request.yaml exists and check first few lines ==="
if [ -f ".tekton/odh-workbench-jupyter-minimal-cuda-py312-ubi9-pull-request.yaml" ]; then
echo "File found. First 5 lines:"
head -5 ".tekton/odh-workbench-jupyter-minimal-cuda-py312-ubi9-pull-request.yaml"
else
echo "File not found in current repository"
# Search for similar files
echo ""
echo "=== Searching for similar .tekton files ==="
fd "pull-request.yaml" ".tekton/" 2>/dev/null | head -10
fi
# Search for x86_64 and amd64 references across Python files
echo ""
echo "=== Searching for x86_64 references in Python files ==="
rg -t py "x86_64" --color never 2>/dev/null | head -20
echo ""
echo "=== Searching for amd64 references in Python files ==="
rg -t py "amd64" --color never 2>/dev/null | head -20
# Check for build-platforms in YAML files
echo ""
echo "=== Searching for build-platforms references ==="
rg "build-platforms" --color never 2>/dev/null | head -20Repository: opendatahub-io/notebooks
Length of output: 3435
Update the generation script to use linux/amd64 instead of linux/x86_64.
The YAML file change is correct: Docker selects the linux/amd64 variant on x86-64 systems. However, scripts/generate_pull_request_pipelineruns.py still defaults to "linux/x86_64" in both the type annotation and the default value assignment. Any regeneration of this file will revert the change. Update the generation script to use "linux/amd64" consistently with the Docker platform identifier standard.
🤖 Prompt for AI Agents
In scripts/generate_pull_request_pipelineruns.py (search for the type annotation
and default value that use "linux/x86_64"), update both the annotation and the
default assignment to use "linux/amd64" instead of "linux/x86_64"; specifically
change any typed variable or function parameter hint that declares the platform
type and the default string literal assigned to that parameter/variable to
"linux/amd64" so regenerated YAML will keep the Docker-standard platform
identifier.
  - name: build-platforms
-   value:
-     # Increase build resources to use m2xlarge machines, avoid OOM
-     - linux/x86_64
+   value:
+     - linux/amd64
Remove trailing whitespace.
Line 37 has trailing whitespace after value: which may trigger linting issues. The platform change itself is correct.
Apply this diff to fix the whitespace:
- - name: build-platforms
- value:
+ - name: build-platforms
+ value:
      - linux/amd64
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
  - name: build-platforms
-   value:
-     # Increase build resources to use m2xlarge machines, avoid OOM
-     - linux/x86_64
+   value:
+     - linux/amd64
Suggested:
  - name: build-platforms
    value:
      - linux/amd64
🤖 Prompt for AI Agents
.tekton/odh-workbench-jupyter-pytorch-rocm-py312-ubi9-pull-request.yaml around
lines 36 to 38: remove the trailing whitespace after the literal `value:` on
line 37 so the line ends immediately after the colon; ensure no other trailing
spaces exist on adjacent lines and save the file to prevent lint failures.
  - name: build-platforms
    value:
-     - linux/x86_64
+     - linux/amd64
      - linux-d160-m2xlarge/arm64
      - linux/ppc64le
      - linux/s390x
Revert platform identifier to linux/x86_64.
The Konflux build system in this repository requires the linux/x86_64 platform identifier format for Tekton PipelineRun build-platforms parameters, not the OCI-standard linux/amd64. Change line 38 back to linux/x86_64. This is consistently enforced across all .tekton/ pipeline files and the platform generation scripts.
🤖 Prompt for AI Agents
In .tekton/odh-workbench-jupyter-trustyai-cpu-py312-ubi9-pull-request.yaml
around lines 36 to 41, the platform identifier on line 38 uses the OCI-standard
"linux/amd64" but the Konflux build system expects "linux/x86_64"; revert that
entry back to "linux/x86_64" (leave the other platform lines unchanged) so the
build-platforms parameter matches the repository-wide convention and the
platform generation scripts.
  - name: build-platforms
    value:
-     - linux/x86_64
+     - linux/amd64
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check for remaining x86_64 references in .tekton files
echo "=== Checking for remaining x86_64 references in .tekton ==="
rg 'x86_64' .tekton/ 2>/dev/null | head -20
# Check the generation script
echo -e "\n=== Checking if generation script exists ==="
fd -t f 'generate_pull_request_pipelineruns.py' 2>/dev/null
# Show generation script content if it exists
if [ -f scripts/generate_pull_request_pipelineruns.py ]; then
echo -e "\n=== Generation script exists, checking platform identifiers ==="
rg -n 'x86_64|amd64' scripts/generate_pull_request_pipelineruns.py
fi
Repository: opendatahub-io/notebooks
Length of output: 9902
Update the generation script to maintain consistency with the YAML file changes.
Tekton officially supports the linux/amd64 platform identifier, confirming the YAML file change is compatible. However, scripts/generate_pull_request_pipelineruns.py still generates linux/x86_64 (line 143) and its type hints include the old identifier (line 51). Since this file is autogenerated, update the generation script to produce linux/amd64 consistently across all Tekton configuration files.
Additionally, clarify the PR description: x86_64, x86-64, and amd64 all refer to the same 64-bit CPU architecture, so the change is purely a naming convention alignment with OCI/Docker standards, not a resource capacity change.
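The equivalence is easy to demonstrate on any x86-64 host (a sketch, assuming Docker is installed):

#!/bin/bash
# The kernel reports the architecture as x86_64, while container tooling
# normalizes the very same machine to amd64
uname -m                                    # prints: x86_64
docker version --format '{{.Server.Arch}}'  # prints: amd64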
🤖 Prompt for AI Agents
In .tekton/odh-workbench-rstudio-minimal-cuda-py312-rhel9-pull-request.yaml
(lines 38-40) the platform key uses linux/amd64 but the generator scripts still
emit linux/x86_64; update scripts/generate_pull_request_pipelineruns.py (update
the literal at around line 143 and the type hint/annotation at around line 51)
so it emits "linux/amd64" everywhere instead of "linux/x86_64", adjust any tests
or templates the generator uses to match the new string, and update the PR
description text generated by the script to explain this is purely a naming
convention alignment (x86_64/x86-64/amd64 are the same architecture) rather than
a resource change.
Force-pushed from 2189ad9 to b24261e
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
.tekton/odh-workbench-rstudio-minimal-cpu-py312-c9s-pull-request.yaml (1)
2-2: Update the generator script to prevent overwriting these changes. This file is autogenerated by scripts/generate_pull_request_pipelineruns.py. If the generator script hasn't been updated to emit linux-mxlarge/amd64 instead of linux/x86_64, re-running the script will revert these manual changes.
#!/bin/bash
# Check if the generator script has been updated with the new platform identifiers
rg -n "linux/x86_64|linux-mxlarge|linux/amd64" scripts/generate_pull_request_pipelineruns.py -C 5
♻️ Duplicate comments (9)
.tekton/odh-workbench-rstudio-minimal-cpu-py312-c9s-pull-request.yaml (1)
36-38: Platform identifier change may break Konflux builds. The change from linux/x86_64 to linux-mxlarge/amd64 contradicts previous learnings that Konflux requires the linux/x86_64 format. Ensure this new format is compatible with the current Konflux build infrastructure before merging. Based on learnings, Konflux previously enforced linux/x86_64 over OCI-standard linux/amd64.
.tekton/odh-workbench-jupyter-pytorch-cuda-py312-ubi9-pull-request.yaml (2)
2-2: Ensure generator script is updated. This autogenerated file requires the generator script to be updated with the new platform identifiers to prevent future overwrites.
36-38: Verify Konflux compatibility with new platform format. Confirm that Konflux accepts linux-mxlarge/amd64 before merging, as previous requirements mandated linux/x86_64. Based on learnings, the Konflux build system previously required the linux/x86_64 format.
.tekton/odh-base-image-cuda-py312-c9s-push.yaml (1)
29-32: Verify Konflux support for the new platform format. Confirm that the Konflux build system supports the linux-mxlarge/amd64 format and that the resource pool exists, as this contradicts previous requirements for linux/x86_64. Based on learnings, Konflux previously required the linux/x86_64 platform identifier format.
.tekton/odh-base-image-cpu-py312-ubi9-pull-request.yaml (1)
37-40: Verify platform format compatibility. Confirm that Konflux supports the linux-mxlarge/amd64 format, as previous system requirements mandated linux/x86_64. Based on learnings, the Konflux build system previously required the linux/x86_64 format.
.tekton/odh-base-image-cuda-12-8-py312-c9s-pull-request.yaml (2)
2-2: Update generator script to prevent changes from being overwritten. This file is autogenerated. The generator script must be updated to emit the new platform identifiers, otherwise these changes will be lost when the script is re-run.
36-39: Verify platform format and update generator script. As noted in past reviews, the generator script needs updating to emit the correct platform nomenclature. Additionally, verify that Konflux supports both linux-mxlarge/amd64 and linux-d160-m2xlarge/arm64 formats. Based on learnings, Konflux previously required the linux/x86_64 format.
.tekton/odh-workbench-jupyter-trustyai-cpu-py312-ubi9-pull-request.yaml (1)
2-2: Update generator script to maintain consistency. This autogenerated file requires the generator script to be updated with the new platform identifiers.
.tekton/odh-workbench-jupyter-pytorch-rocm-py312-ubi9-pull-request.yaml (1)
36-38: Remove trailing whitespace and verify platform format. Line 37 has trailing whitespace after value: which should be removed. Additionally, verify that linux-mxlarge/amd64 is a valid platform identifier in your Tekton environment.
  - name: build-platforms
-   value: 
+   value:
      - linux-mxlarge/amd64
🧹 Nitpick comments (2)
.tekton/odh-base-image-cuda-py312-ubi9-push.yaml (1)
3-3: Consider removing stray test comment. The #test2 comment appears to be a leftover from testing. Consider removing it for cleaner configuration.
  apiVersion: tekton.dev/v1
  kind: PipelineRun
- #test2
  metadata:
.tekton/odh-base-image-cuda-py311-c9s-push.yaml (1)
3-3: Remove debug comment from production configuration. The #test1 comment on line 3 appears to be leftover debug/test code and should be removed from the production pipeline configuration.
  apiVersion: tekton.dev/v1
  kind: PipelineRun
- #test1
  metadata:
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (43)
- .tekton/multiarch-pull-request-pipeline.yaml (1 hunks)
- .tekton/multiarch-push-pipeline.yaml (1 hunks)
- .tekton/odh-base-image-cpu-py312-c9s-pull-request.yaml (1 hunks)
- .tekton/odh-base-image-cpu-py312-c9s-push.yaml (1 hunks)
- .tekton/odh-base-image-cpu-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-base-image-cpu-py312-ubi9-push.yaml (1 hunks)
- .tekton/odh-base-image-cuda-12-8-py312-c9s-pull-request.yaml (1 hunks)
- .tekton/odh-base-image-cuda-12-8-py312-c9s-push.yaml (1 hunks)
- .tekton/odh-base-image-cuda-12-8-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-base-image-cuda-12-8-py312-ubi9-push.yaml (1 hunks)
- .tekton/odh-base-image-cuda-py311-c9s-pull-request.yaml (1 hunks)
- .tekton/odh-base-image-cuda-py311-c9s-push.yaml (1 hunks)
- .tekton/odh-base-image-cuda-py312-c9s-pull-request.yaml (1 hunks)
- .tekton/odh-base-image-cuda-py312-c9s-push.yaml (1 hunks)
- .tekton/odh-base-image-cuda-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-base-image-cuda-py312-ubi9-push.yaml (1 hunks)
- .tekton/odh-base-image-rocm-6-3-py312-c9s-pull-request.yaml (1 hunks)
- .tekton/odh-base-image-rocm-6-3-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-base-image-rocm-6-4-py312-c9s-pull-request.yaml (1 hunks)
- .tekton/odh-base-image-rocm-6-4-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-base-image-rocm-py312-c9s-pull-request.yaml (1 hunks)
- .tekton/odh-base-image-rocm-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-pipeline-runtime-datascience-cpu-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-pipeline-runtime-minimal-cpu-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-pipeline-runtime-pytorch-cuda-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-pipeline-runtime-pytorch-llmcompressor-cuda-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-pipeline-runtime-pytorch-rocm-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-pipeline-runtime-tensorflow-cuda-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-pipeline-runtime-tensorflow-rocm-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-workbench-jupyter-datascience-cpu-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-workbench-jupyter-minimal-cpu-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-workbench-jupyter-minimal-cuda-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-workbench-jupyter-minimal-rocm-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-workbench-jupyter-pytorch-cuda-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-workbench-jupyter-pytorch-llmcompressor-cuda-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-workbench-jupyter-pytorch-rocm-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-workbench-jupyter-tensorflow-rocm-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-workbench-jupyter-trustyai-cpu-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-workbench-rstudio-minimal-cpu-py312-c9s-pull-request.yaml (1 hunks)
- .tekton/odh-workbench-rstudio-minimal-cpu-py312-rhel9-pull-request.yaml (1 hunks)
- .tekton/odh-workbench-rstudio-minimal-cuda-py312-c9s-pull-request.yaml (1 hunks)
- .tekton/odh-workbench-rstudio-minimal-cuda-py312-rhel9-pull-request.yaml (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (23)
- .tekton/odh-workbench-rstudio-minimal-cuda-py312-c9s-pull-request.yaml
- .tekton/odh-base-image-cuda-12-8-py312-ubi9-pull-request.yaml
- .tekton/odh-base-image-rocm-py312-ubi9-pull-request.yaml
- .tekton/odh-pipeline-runtime-pytorch-llmcompressor-cuda-py312-ubi9-pull-request.yaml
- .tekton/odh-base-image-rocm-6-4-py312-ubi9-pull-request.yaml
- .tekton/odh-pipeline-runtime-tensorflow-cuda-py312-ubi9-pull-request.yaml
- .tekton/odh-base-image-rocm-6-3-py312-c9s-pull-request.yaml
- .tekton/odh-workbench-jupyter-datascience-cpu-py312-ubi9-pull-request.yaml
- .tekton/odh-workbench-rstudio-minimal-cuda-py312-rhel9-pull-request.yaml
- .tekton/odh-base-image-cuda-py311-c9s-pull-request.yaml
- .tekton/odh-base-image-cpu-py312-c9s-push.yaml
- .tekton/odh-workbench-jupyter-minimal-cuda-py312-ubi9-pull-request.yaml
- .tekton/odh-workbench-jupyter-minimal-rocm-py312-ubi9-pull-request.yaml
- .tekton/odh-pipeline-runtime-datascience-cpu-py312-ubi9-pull-request.yaml
- .tekton/odh-base-image-rocm-6-3-py312-ubi9-pull-request.yaml
- .tekton/odh-workbench-jupyter-pytorch-llmcompressor-cuda-py312-ubi9-pull-request.yaml
- .tekton/odh-workbench-jupyter-tensorflow-rocm-py312-ubi9-pull-request.yaml
- .tekton/odh-base-image-rocm-py312-c9s-pull-request.yaml
- .tekton/odh-base-image-cuda-12-8-py312-c9s-push.yaml
- .tekton/odh-workbench-jupyter-minimal-cpu-py312-ubi9-pull-request.yaml
- .tekton/odh-base-image-cpu-py312-ubi9-push.yaml
- .tekton/odh-base-image-rocm-6-4-py312-c9s-pull-request.yaml
- .tekton/odh-pipeline-runtime-pytorch-cuda-py312-ubi9-pull-request.yaml
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-08-28T12:42:23.404Z
Learnt from: jiridanek
Repo: opendatahub-io/notebooks PR: 2180
File: .tekton/odh-base-image-cuda-py312-ubi9-pull-request.yaml:36-39
Timestamp: 2025-08-28T12:42:23.404Z
Learning: Konflux build system in opendatahub-io/notebooks repository requires "linux/x86_64" platform identifier format for Tekton PipelineRun build-platforms parameter, not the OCI-standard "linux/amd64". This is consistently used across all .tekton/ pipeline files and enforced by scripts/generate_pull_request_pipelineruns.py type definitions. Docker/buildah contexts use "linux/amd64" while Konflux/Tekton contexts use "linux/x86_64".
Applied to files:
.tekton/odh-workbench-jupyter-trustyai-cpu-py312-ubi9-pull-request.yaml
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (19)
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-pytorch-rocm-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-cuda-py311-c9s-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-rocm-py312-c9s-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-tensorflow-rocm-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-minimal-cpu-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-cuda-12-8-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-pipeline-runtime-tensorflow-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-trustyai-cpu-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-cuda-py312-c9s-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-rstudio-minimal-cpu-py312-rhel9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-cpu-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-pipeline-runtime-pytorch-rocm-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-rstudio-minimal-cuda-py312-c9s-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-rocm-6-4-py312-c9s-on-pull-request
- GitHub Check: Red Hat Konflux / odh-pipeline-runtime-pytorch-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-datascience-cpu-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-rstudio-minimal-cuda-py312-rhel9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-minimal-rocm-py312-ubi9-on-pull-request
🔇 Additional comments (13)
.tekton/odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-pull-request.yaml (2)
1-54: Note: AI summary inconsistency detected. The AI-generated summary mentions changes to src/calculator.py (method additions, signature changes, and variable renaming), but this file is not included in the review. This suggests either the summary references the wrong PR or files are missing from the review scope. Based on the PR objectives, this change should only affect Tekton YAML files for platform specifications. If there are unrelated changes to Python files, they may warrant a separate PR.
38-39: The platform specification format appears non-standard for Tekton/Konflux. Standard Tekton/Konflux platform specifications use the format "linux/{architecture}" (e.g., "linux/x86_64", "linux/amd64", "linux/arm64"), but the change uses linux-mxlarge/amd64 instead. The mxlarge resource tier specification should not be part of the platform identifier string. Resource sizing is configured separately through taskRunSpecs in PipelineRun files, not embedded in platform labels. The same concern applies to linux-d160-m2xlarge/arm64 on line 39. Please verify:
- These platform labels (linux-mxlarge/amd64, linux-d160-m2xlarge/arm64) are valid for your Konflux configuration
- If resource sizing is intended, confirm it's configured via the appropriate Konflux mechanism rather than in the platform string
.tekton/multiarch-push-pipeline.yaml (1)
114-118: LGTM - Default platform updated to use larger build resources. The change from linux/x86_64 to linux-mxlarge/amd64 correctly updates the default build platform to leverage larger VM instances. The format aligns with the Konflux multi-platform-controller's {os}-{size}/{arch} convention as noted in the description. Ensure the mxlarge instance type is properly configured and available in your Konflux environment before merging.
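As an illustration of that convention, the following sketch flags any build-platforms entry that matches neither plain linux/&lt;arch&gt; nor the sized linux-&lt;size&gt;/&lt;arch&gt; form (the architecture list is limited to the ones seen in this PR):

#!/bin/bash
# Validate every platform-looking list entry against the naming convention
rg --no-filename -o '^\s*- (linux\S+)' -r '$1' .tekton/*.yaml | sort -u |
while read -r p; do
  if [[ "$p" =~ ^linux(-[a-z0-9-]+)?/(amd64|arm64|x86_64|ppc64le|s390x)$ ]]; then
    echo "ok:      $p"
  else
    echo "suspect: $p"
  fi
done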
.tekton/odh-pipeline-runtime-tensorflow-rocm-py312-ubi9-pull-request.yaml (1)
36-38: LGTM - Platform correctly updated for ROCm build. The single-platform configuration is appropriate for ROCm (AMD GPU) builds, which are x86_64/amd64 only. The update to linux-mxlarge/amd64 aligns with the PR's objective.
.tekton/odh-base-image-cuda-py312-ubi9-push.yaml (1)
29-32: LGTM - Platform updated while preserving arm64 support. The amd64 platform is correctly updated to linux-mxlarge/amd64, while arm64 remains unchanged for the CUDA base image build.
.tekton/odh-workbench-rstudio-minimal-cpu-py312-rhel9-pull-request.yaml (1)
38-40: LGTM - Platform correctly updated for RStudio workbench build. The update to linux-mxlarge/amd64 is consistent with the PR's objective. The single-platform configuration aligns with other similar workbench pipelines in this PR.
.tekton/multiarch-pull-request-pipeline.yaml (1)
92-96: Critical: Verify default platform identifier with Konflux team. This pipeline template defines the default build-platforms value used by all pull-request pipelines. Changing the default from linux/x86_64 to linux-mxlarge/amd64 will affect all builds. Before merging:
- Confirm with the Konflux team that the new format is supported
- Verify that the linux-mxlarge resource pool is available and stable
- Test at least one complete build cycle with the new platform identifier
Based on learnings, Konflux previously required the linux/x86_64 format, not OCI-standard linux/amd64.
.tekton/odh-workbench-jupyter-trustyai-cpu-py312-ubi9-pull-request.yaml (1)
36-41: Verify resource pool identifiers and platform format consistency with Konflux deployment. The formats linux-mxlarge/amd64 and linux-d160-m2xlarge/arm64 differ from the standard Konflux format documented as linux/x86_64 and linux/arm64. The available platform values are determined by the configuration of the multi-platform-controller, which suggests deployment-specific formats may be valid. However:
- The resource-class prefix format (linux-mxlarge, linux-d160-m2xlarge) is not documented in public Konflux documentation
- The list shows mixed formats: the first two entries use resource-class prefixes, while linux/ppc64le and linux/s390x do not
- Verify that both linux-mxlarge and linux-d160-m2xlarge resource pools exist in your Konflux deployment (see the lookup sketch after this list)
- Clarify whether this PR should standardize all entries to the same format or if mixed formats are intentional
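Where cluster access is available, the controller's own configuration is the authoritative list of valid platform names; a hedged sketch (the ConfigMap and namespace names are assumptions based on the upstream multi-platform-controller layout, adjust to your deployment):

#!/bin/bash
# Hypothetical lookup of the platform/host pools the controller knows about
kubectl get configmap host-config -n multi-platform-controller -o yaml | rg -i 'platform|mxlarge'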
.tekton/odh-base-image-cuda-12-8-py312-ubi9-push.yaml (1)
29-32: Verify the linux-mxlarge/amd64 platform configuration before merge. The platform identifier format depends on your Konflux instance's multi-platform controller configuration. Confirm:
- The platform identifier linux-mxlarge is configured in your Konflux instance's multi-platform controller
- This configuration has been tested in your target Konflux environment
- The resource pool provides the resources mentioned in the PR description
.tekton/odh-base-image-cuda-py311-c9s-push.yaml (1)
29-32: Verify the platform identifier format. The change from linux/x86_64 to linux-mxlarge/amd64 introduces a resource-sized platform identifier. Please confirm this format is supported by your Konflux/Tekton environment, as standard Kubernetes platform identifiers follow the os/arch pattern without size specifications.
.tekton/odh-base-image-cuda-py312-c9s-pull-request.yaml (1)
36-39: The platform identifier format appears non-standard for Konflux. The identifiers linux-mxlarge/amd64 and linux-m2xlarge/arm64 do not match documented Konflux build platform specifications, which use the format linux/arch (e.g., linux/x86_64, linux/arm64). The inclusion of resource sizing prefixes (mxlarge, m2xlarge) before the architecture is not found in standard Konflux documentation. Verify this custom format is supported by your specific Konflux instance, or update to the standard linux/amd64 and linux/arm64 format.
.tekton/odh-base-image-cuda-py312-ubi9-pull-request.yaml (1)
.tekton/odh-base-image-cuda-py312-ubi9-pull-request.yaml (1)
36-39: No changes required. The platform identifier `linux-mxlarge/amd64` is a valid, documented format specific to your Konflux multi-platform-controller configuration. It is properly defined as the default value in the multiarch-pull-request-pipeline.yaml with the description: "List of platforms to build the container images on. The available set of values is determined by the configuration of the multi-platform-controller." The format (combining machine size and architecture) is intentionally supported by Konflux and is consistently used across the build pipeline definitions.
Likely an incorrect or invalid review comment.
.tekton/odh-pipeline-runtime-pytorch-rocm-py312-ubi9-pull-request.yaml (1)
1-53: AI summary mentions unrelated file changes. The AI-generated summary mentions "also added a method, changed a method signature, and renamed a global variable in src/calculator.py", but no such file is present in this review. This appears to be an error in the summary generation.
```diff
     - name: build-platforms
       value:
-        - linux/x86_64
+        - linux-mxlarge/amd64
         - linux/arm64
         - linux/ppc64le
         - linux/s390x
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check other CPU build pipeline files for platform format patterns
rg -n "build-platforms" .tekton/*cpu*.yaml -A 10 | head -100Repository: opendatahub-io/notebooks
Length of output: 9589
🏁 Script executed:
#!/bin/bash
# Check if there are any comments or references explaining the size prefixes
rg -n "linux-(mxlarge|extra-fast|m2xlarge|d160)" .tekton/ -B 2 | grep -E "(http|#|linux-)" | head -50Repository: opendatahub-io/notebooks
Length of output: 4738
Align arm64 build platform with sized format used in similar files.
The file uses linux-mxlarge/amd64 for amd64 but linux/arm64 for arm64. Similar base image files like odh-base-image-cuda-py312-c9s-pull-request.yaml use linux-m2xlarge/arm64 for arm64. Check whether arm64 should also use a size prefix (e.g., linux-m2xlarge/arm64) to match the pattern across base image builds.
🤖 Prompt for AI Agents
In .tekton/odh-base-image-cpu-py312-c9s-pull-request.yaml around lines 37 to 42
the arm64 entry uses "linux/arm64" while similar files use a sized platform like
"linux-m2xlarge/arm64"; update the arm64 line to match the sizing pattern (e.g.,
change to "linux-m2xlarge/arm64") so platform entries are consistent with other
base-image YAMLs, and verify the chosen size matches the intended build
infrastructure.
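For reference, one possible aligned list, assuming `linux-m2xlarge/arm64` is the intended arm64 tier (taken from the cuda-py312-c9s pull-request file cited above, not confirmed for this component):

```yaml
- name: build-platforms
  value:
    - linux-mxlarge/amd64
    - linux-m2xlarge/arm64
    - linux/ppc64le
    - linux/s390x
```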
```diff
     - name: build-platforms
       value:
-        - linux/x86_64
+        - linux-mxlarge/amd64
```
🧩 Analysis chain
🏁 Script executed:
# Read the YAML file to see the context around line 38
head -n 50 .tekton/odh-pipeline-runtime-pytorch-rocm-py312-ubi9-pull-request.yaml
Repository: opendatahub-io/notebooks
Length of output: 2352
🏁 Script executed:
# Check if the generation script exists and examine it
find . -name "generate_pull_request_pipelineruns.py" -type f 2>/dev/null && head -n 100 "$(find . -name 'generate_pull_request_pipelineruns.py' -type f 2>/dev/null | head -1)"Repository: opendatahub-io/notebooks
Length of output: 3687
🏁 Script executed:
# Continue reading the generation script to see where build_platforms is sourced
sed -n '1,200p' ./scripts/generate_pull_request_pipelineruns.py | tail -n 100
Repository: opendatahub-io/notebooks
Length of output: 3806
🏁 Script executed:
# Search for the pytorch-rocm component in the generation script
grep -n "pytorch-rocm\|mxlarge" ./scripts/generate_pull_request_pipelineruns.pyRepository: opendatahub-io/notebooks
Length of output: 50
🏁 Script executed:
# Search the entire repository for linux-mxlarge
rg "linux-mxlarge|mxlarge" --type yamlRepository: opendatahub-io/notebooks
Length of output: 50
🏁 Script executed:
# Search for pytorch-rocm in the generation script
rg "odh-pipeline-runtime-pytorch-rocm-py312-ubi9" ./scripts/generate_pull_request_pipelineruns.pyRepository: opendatahub-io/notebooks
Length of output: 50
🏁 Script executed:
# Search for all instances of "linux-" in the script to find custom platform definitions
rg "linux-" ./scripts/generate_pull_request_pipelineruns.pyRepository: opendatahub-io/notebooks
Length of output: 177
🏁 Script executed:
# Extract the entire platform assignment logic from the script
sed -n '80,180p' ./scripts/generate_pull_request_pipelineruns.py
Repository: opendatahub-io/notebooks
Length of output: 3651
🏁 Script executed:
# Look for the push pipeline file
cat .tekton/odh-pipeline-runtime-pytorch-rocm-py312-ubi9-push.yaml | head -n 50
Repository: opendatahub-io/notebooks
Length of output: 2060
🏁 Script executed:
# Check the complete build_platforms assignment logic - look for all "if component in" blocks
rg -A 2 "if component in" ./scripts/generate_pull_request_pipelineruns.py | head -n 100Repository: opendatahub-io/notebooks
Length of output: 702
🏁 Script executed:
# Check git history to see if this file was manually modified
git log --oneline -n 5 .tekton/odh-pipeline-runtime-pytorch-rocm-py312-ubi9-pull-request.yaml 2>/dev/null || echo "Git history not available"
Repository: opendatahub-io/notebooks
Length of output: 113
🏁 Script executed:
# Show the git diff for the last commit that modified this file
git show b24261e9:.tekton/odh-pipeline-runtime-pytorch-rocm-py312-ubi9-pull-request.yaml 2>/dev/null | grep -A 5 "build-platforms" || echo "Previous version not available"
Repository: opendatahub-io/notebooks
Length of output: 240
🏁 Script executed:
# Check if odh-pipeline-runtime-pytorch-rocm-py312-ubi9 is in any conditional block
grep -B 1 "odh-pipeline-runtime-pytorch-rocm-py312-ubi9" ./scripts/generate_pull_request_pipelineruns.pyRepository: opendatahub-io/notebooks
Length of output: 50
Fix the build-platforms format: linux-mxlarge/amd64 is invalid and should be linux/x86_64.
This file is autogenerated by scripts/generate_pull_request_pipelineruns.py. The component odh-pipeline-runtime-pytorch-rocm-py312-ubi9 is not listed in any conditional block in the generation script, so it should use the default platform: linux/x86_64. The format linux-mxlarge/amd64 does not exist in the script and appears to be a manual edit or a remnant from an outdated version. Either regenerate the file using the current script or manually correct the value to match the standard format expected by the multiarch-pull-request-pipeline.
🤖 Prompt for AI Agents
.tekton/odh-pipeline-runtime-pytorch-rocm-py312-ubi9-pull-request.yaml around
line 38: the build-platforms entry uses the invalid format
"linux-mxlarge/amd64"; replace it with the standard default platform
"linux/x86_64" (or regenerate the file with
scripts/generate_pull_request_pipelineruns.py so the component falls into the
default block) so the multiarch-pull-request-pipeline accepts the platform
value.
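Since the file is generated, the most direct way to confirm the fix is to re-run the generator and inspect the drift (a minimal sketch; it assumes the script runs without arguments, which is not verified here):

```bash
# Regenerate all pull-request PipelineRuns, then see which tracked files
# change; any drift back to linux/x86_64 means the generator still needs work.
python scripts/generate_pull_request_pipelineruns.py
git diff --stat .tekton/
```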
b24261e to 4903d11 (Compare)
Actionable comments posted: 2
♻️ Duplicate comments (4)
.tekton/odh-pipeline-runtime-minimal-cpu-py312-ubi9-pull-request.yaml (1)
36-41: Duplicate: Generator script update still required. This concern was already raised in a previous review. The generator script `scripts/generate_pull_request_pipelineruns.py` needs to be updated to produce `linux-mxlarge/amd64` instead of `linux/x86_64` for this component to prevent reversion on regeneration.
.tekton/odh-workbench-jupyter-minimal-cuda-py312-ubi9-pull-request.yaml (1)
36-38: Duplicate: Generator script update required. A previous review already flagged that `scripts/generate_pull_request_pipelineruns.py` needs to be updated to use `linux-mxlarge/amd64` instead of `linux/x86_64`. Since this file is autogenerated (Line 2), manual changes will be reverted on the next regeneration.
.tekton/odh-base-image-rocm-py312-c9s-pull-request.yaml (1)
38-38: Verify that the custom platform identifier format is valid and supported. The platform identifier `linux-d160-m2xlarge/amd64` uses a non-standard format. This is the same concern as in other files in this PR; please ensure the `multiarch-pull-request-pipeline` accepts this custom format and that the generation script was updated accordingly.
.tekton/odh-base-image-rocm-6-3-py312-c9s-pull-request.yaml (1)
38-38: Verify that the custom platform identifier format is valid and supported. The platform identifier `linux-d160-m2xlarge/amd64` uses a non-standard format. This is the same concern as in other files in this PR; please ensure the `multiarch-pull-request-pipeline` accepts this custom format and that the generation script was updated accordingly.
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (43)
.tekton/multiarch-pull-request-pipeline.yaml (1 hunks), .tekton/multiarch-push-pipeline.yaml (1 hunks), .tekton/odh-base-image-cpu-py312-c9s-pull-request.yaml (1 hunks), .tekton/odh-base-image-cpu-py312-c9s-push.yaml (1 hunks), .tekton/odh-base-image-cpu-py312-ubi9-pull-request.yaml (1 hunks), .tekton/odh-base-image-cpu-py312-ubi9-push.yaml (1 hunks), .tekton/odh-base-image-cuda-12-8-py312-c9s-pull-request.yaml (1 hunks), .tekton/odh-base-image-cuda-12-8-py312-c9s-push.yaml (1 hunks), .tekton/odh-base-image-cuda-12-8-py312-ubi9-pull-request.yaml (1 hunks), .tekton/odh-base-image-cuda-12-8-py312-ubi9-push.yaml (1 hunks), .tekton/odh-base-image-cuda-py311-c9s-pull-request.yaml (1 hunks), .tekton/odh-base-image-cuda-py311-c9s-push.yaml (1 hunks), .tekton/odh-base-image-cuda-py312-c9s-pull-request.yaml (1 hunks), .tekton/odh-base-image-cuda-py312-c9s-push.yaml (1 hunks), .tekton/odh-base-image-cuda-py312-ubi9-pull-request.yaml (1 hunks), .tekton/odh-base-image-cuda-py312-ubi9-push.yaml (1 hunks), .tekton/odh-base-image-rocm-6-3-py312-c9s-pull-request.yaml (1 hunks), .tekton/odh-base-image-rocm-6-3-py312-ubi9-pull-request.yaml (1 hunks), .tekton/odh-base-image-rocm-6-4-py312-c9s-pull-request.yaml (1 hunks), .tekton/odh-base-image-rocm-6-4-py312-ubi9-pull-request.yaml (1 hunks), .tekton/odh-base-image-rocm-py312-c9s-pull-request.yaml (1 hunks), .tekton/odh-base-image-rocm-py312-ubi9-pull-request.yaml (1 hunks), .tekton/odh-pipeline-runtime-datascience-cpu-py312-ubi9-pull-request.yaml (1 hunks), .tekton/odh-pipeline-runtime-minimal-cpu-py312-ubi9-pull-request.yaml (1 hunks), .tekton/odh-pipeline-runtime-pytorch-cuda-py312-ubi9-pull-request.yaml (1 hunks), .tekton/odh-pipeline-runtime-pytorch-llmcompressor-cuda-py312-ubi9-pull-request.yaml (1 hunks), .tekton/odh-pipeline-runtime-pytorch-rocm-py312-ubi9-pull-request.yaml (1 hunks), .tekton/odh-pipeline-runtime-tensorflow-cuda-py312-ubi9-pull-request.yaml (1 hunks), .tekton/odh-pipeline-runtime-tensorflow-rocm-py312-ubi9-pull-request.yaml (1 hunks), .tekton/odh-workbench-jupyter-datascience-cpu-py312-ubi9-pull-request.yaml (1 hunks), .tekton/odh-workbench-jupyter-minimal-cpu-py312-ubi9-pull-request.yaml (1 hunks), .tekton/odh-workbench-jupyter-minimal-cuda-py312-ubi9-pull-request.yaml (1 hunks), .tekton/odh-workbench-jupyter-minimal-rocm-py312-ubi9-pull-request.yaml (1 hunks), .tekton/odh-workbench-jupyter-pytorch-cuda-py312-ubi9-pull-request.yaml (1 hunks), .tekton/odh-workbench-jupyter-pytorch-llmcompressor-cuda-py312-ubi9-pull-request.yaml (1 hunks), .tekton/odh-workbench-jupyter-pytorch-rocm-py312-ubi9-pull-request.yaml (1 hunks), .tekton/odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-pull-request.yaml (1 hunks), .tekton/odh-workbench-jupyter-tensorflow-rocm-py312-ubi9-pull-request.yaml (1 hunks), .tekton/odh-workbench-jupyter-trustyai-cpu-py312-ubi9-pull-request.yaml (1 hunks), .tekton/odh-workbench-rstudio-minimal-cpu-py312-c9s-pull-request.yaml (1 hunks), .tekton/odh-workbench-rstudio-minimal-cpu-py312-rhel9-pull-request.yaml (1 hunks), .tekton/odh-workbench-rstudio-minimal-cuda-py312-c9s-pull-request.yaml (1 hunks), .tekton/odh-workbench-rstudio-minimal-cuda-py312-rhel9-pull-request.yaml (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (30)
- .tekton/odh-base-image-rocm-6-4-py312-ubi9-pull-request.yaml
- .tekton/odh-base-image-cuda-py312-ubi9-push.yaml
- .tekton/odh-base-image-rocm-py312-ubi9-pull-request.yaml
- .tekton/odh-workbench-jupyter-pytorch-cuda-py312-ubi9-pull-request.yaml
- .tekton/odh-workbench-rstudio-minimal-cuda-py312-rhel9-pull-request.yaml
- .tekton/odh-pipeline-runtime-tensorflow-rocm-py312-ubi9-pull-request.yaml
- .tekton/odh-base-image-rocm-6-4-py312-c9s-pull-request.yaml
- .tekton/odh-workbench-jupyter-tensorflow-rocm-py312-ubi9-pull-request.yaml
- .tekton/odh-workbench-rstudio-minimal-cuda-py312-c9s-pull-request.yaml
- .tekton/odh-base-image-cuda-py312-c9s-push.yaml
- .tekton/odh-base-image-rocm-6-3-py312-ubi9-pull-request.yaml
- .tekton/odh-base-image-cuda-py312-c9s-pull-request.yaml
- .tekton/odh-pipeline-runtime-pytorch-llmcompressor-cuda-py312-ubi9-pull-request.yaml
- .tekton/odh-pipeline-runtime-tensorflow-cuda-py312-ubi9-pull-request.yaml
- .tekton/odh-base-image-cpu-py312-c9s-push.yaml
- .tekton/odh-workbench-jupyter-trustyai-cpu-py312-ubi9-pull-request.yaml
- .tekton/odh-base-image-cuda-py312-ubi9-pull-request.yaml
- .tekton/odh-base-image-cuda-12-8-py312-c9s-push.yaml
- .tekton/odh-base-image-cpu-py312-c9s-pull-request.yaml
- .tekton/odh-workbench-jupyter-datascience-cpu-py312-ubi9-pull-request.yaml
- .tekton/odh-base-image-cuda-12-8-py312-c9s-pull-request.yaml
- .tekton/odh-pipeline-runtime-pytorch-cuda-py312-ubi9-pull-request.yaml
- .tekton/odh-base-image-cpu-py312-ubi9-push.yaml
- .tekton/odh-workbench-jupyter-minimal-cpu-py312-ubi9-pull-request.yaml
- .tekton/odh-base-image-cuda-py311-c9s-push.yaml
- .tekton/odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-pull-request.yaml
- .tekton/odh-workbench-rstudio-minimal-cpu-py312-c9s-pull-request.yaml
- .tekton/odh-workbench-jupyter-minimal-rocm-py312-ubi9-pull-request.yaml
- .tekton/multiarch-push-pipeline.yaml
- .tekton/odh-base-image-cuda-py311-c9s-pull-request.yaml
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (30)
- GitHub Check: Red Hat Konflux / odh-workbench-rstudio-minimal-cuda-py312-c9s-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-rocm-6-4-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-minimal-rocm-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-rocm-py312-c9s-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-pytorch-llmcompressor-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-rocm-6-3-py312-c9s-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-cpu-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-rocm-6-3-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-rocm-6-4-py312-c9s-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-cuda-12-8-py312-c9s-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-pytorch-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-rocm-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-trustyai-cpu-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-rstudio-minimal-cpu-py312-rhel9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-pytorch-rocm-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-tensorflow-rocm-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-pipeline-runtime-tensorflow-rocm-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-cuda-12-8-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-minimal-cpu-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-pipeline-runtime-tensorflow-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-minimal-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-rstudio-minimal-cpu-py312-c9s-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-cuda-py311-c9s-on-pull-request
- GitHub Check: Red Hat Konflux / odh-pipeline-runtime-pytorch-llmcompressor-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-pipeline-runtime-pytorch-rocm-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-rstudio-minimal-cuda-py312-rhel9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-cuda-py312-c9s-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-pipeline-runtime-pytorch-cuda-py312-ubi9-on-pull-request
🔇 Additional comments (7)
.tekton/odh-workbench-jupyter-pytorch-rocm-py312-ubi9-pull-request.yaml (1)
36-38: Review previously completed. The trailing whitespace issue on line 37 and the platform change validation have already been addressed in previous review comments. No new concerns identified.
.tekton/odh-base-image-cpu-py312-ubi9-pull-request.yaml (1)
37-40: Verify testing has been completed before merging. The build platform identifiers like `linux-mxlarge/amd64` are used consistently throughout the .tekton files in this repository and are valid for this Konflux configuration. The change from `linux/x86_64` to `linux-mxlarge/amd64` is correct and equivalent.
.tekton/odh-workbench-rstudio-minimal-cpu-py312-rhel9-pull-request.yaml (1)
40-40: Remove unfounded claims about platform identifier change. The review comment claims a change from `linux/x86_64` to `linux-mxlarge/amd64`, but `linux/x86_64` does not exist in the codebase. The `linux-mxlarge/amd64` format is already the standard configuration used consistently across many .tekton files in this repository and is configured through Red Hat's multi-platform-controller. No verification is needed; the platform identifier is already established and in use throughout the build pipeline configuration.
Likely an incorrect or invalid review comment.
.tekton/odh-base-image-cuda-12-8-py312-ubi9-push.yaml (1)
31-31: The platform identifier format is valid and documented in the pipeline. The `multiarch-push-pipeline.yaml` explicitly defines `linux-mxlarge/amd64` as a valid and documented default value for the `build-platforms` parameter (lines 114–118). The parameter description states: "The available set of values is determined by the configuration of the multi-platform-controller." No verification is required; the format is already validated by the pipeline design itself.
Likely an incorrect or invalid review comment.
.tekton/multiarch-pull-request-pipeline.yaml (1)
92-96: LGTM: Default platform updated in pipeline template. This Pipeline template change updates the default build-platform from `linux/x86_64` to `linux-mxlarge/amd64`, which will affect all PipelineRuns that use this template. The change aligns with the PR objective to leverage larger available resources. Ensure that all autogenerated PipelineRuns referencing this template are regenerated using the updated generator script to maintain consistency across the codebase.
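A one-line check along those lines (a sketch using standard ripgrep flags; the expectation of zero matches is this review's, not verified here):

```bash
# Fails loudly if any PipelineRun still requests the old identifier.
rg -l 'linux/x86_64' .tekton/ && echo "stale platform identifiers remain" || echo "all PipelineRuns regenerated"
```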
.tekton/odh-pipeline-runtime-pytorch-rocm-py312-ubi9-pull-request.yaml (1)
38-38: Verify infrastructure support and previous resolution for custom platform format. The platform identifiers `linux-d160-m2xlarge/amd64` and `linux-mxlarge/amd64` use custom resource-class-prefixed formats. Platform values are determined by the configuration of the multi-platform-controller, and such custom formats are valid if configured.
Please confirm:
- Is `linux-d160-m2xlarge/amd64` configured in your multi-platform-controller deployment?
- How was the previous concern about `linux-mxlarge/amd64` resolved: was it added to the controller configuration or replaced?
.tekton/odh-workbench-jupyter-pytorch-llmcompressor-cuda-py312-ubi9-pull-request.yaml (1)
38-38: The custom platform identifier `linux-d160-m2xlarge/amd64` requires verification against the multiarch-pull-request-pipeline configuration. The platform identifier deviates from standard OCI format (`linux/amd64`), though custom formats like `linux-extra-fast/amd64` and `linux-m2xlarge/arm64` do exist elsewhere in the build configuration. However, the component `odh-workbench-jupyter-pytorch-llmcompressor-cuda-py312-ubi9` is not present in `scripts/generate_pull_request_pipelineruns.py`, so it's unclear whether this platform specification was generated by that script or configured separately.
Verify:
- Whether the `multiarch-pull-request-pipeline` accepts the `linux-d160-m2xlarge/amd64` format
- Whether the generation script needs to be updated to include this component's platform configuration
- Konflux/Tekton infrastructure documentation for custom resource class identifiers
```diff
     - name: build-platforms
       value:
-        - linux/x86_64
+        - linux-mxlarge/amd64
         - linux/arm64
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check if the generator script exists and what platform identifiers it uses
echo "=== Checking generator script for platform identifiers ==="
if [ -f "scripts/generate_pull_request_pipelineruns.py" ]; then
echo "Generator script found. Searching for x86_64 and amd64 references:"
echo ""
echo "--- x86_64 references ---"
rg -n "x86_64" scripts/generate_pull_request_pipelineruns.py -C 2
echo ""
echo "--- amd64 references ---"
rg -n "amd64" scripts/generate_pull_request_pipelineruns.py -C 2
echo ""
echo "--- mxlarge references ---"
rg -n "mxlarge" scripts/generate_pull_request_pipelineruns.py -C 2
else
echo "Generator script not found at scripts/generate_pull_request_pipelineruns.py"
fd "generate.*pipeline" scripts/
fi
Repository: opendatahub-io/notebooks
Length of output: 841
Update scripts/generate_pull_request_pipelineruns.py to include linux-mxlarge/amd64.
The generator script still defaults to linux/x86_64 (line 143) and has no references to linux-mxlarge/amd64. The manual change from linux/x86_64 to linux-mxlarge/amd64 in the autogenerated YAML will revert on the next regeneration unless the generator is updated.
🤖 Prompt for AI Agents
In .tekton/odh-base-image-cuda-12-8-py312-ubi9-pull-request.yaml lines 36-39:
the autogenerated build-platforms value was manually changed to include
linux-mxlarge/amd64 but the generator still defaults to linux/x86_64 (see
scripts/generate_pull_request_pipelineruns.py around line 143); update that
script to replace the hardcoded default "linux/x86_64" with the desired platform
identifier (or add linux-mxlarge/amd64 to the platforms list/mapping used by the
generator) so it emits "linux-mxlarge/amd64" (and preserve linux/arm64) in the
generated YAML; ensure any validation/tests or constants that enumerate
supported platforms are updated accordingly so future regenerations keep the
correct platform.
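To make that concrete, the change could take the shape of a small component-to-platforms mapping in the generator. This is a hypothetical sketch only: the constant and function names below are invented for illustration, and the real default lives around line 143 of the script.

```python
# Hypothetical refactor of the platform selection in
# scripts/generate_pull_request_pipelineruns.py; names are illustrative.

DEFAULT_AMD64 = "linux-mxlarge/amd64"  # replaces the old "linux/x86_64" default

# Components that should build on the larger d160-m2xlarge hosts
# (assumption: this set would be derived from the PipelineRuns in this PR).
LARGE_AMD64_COMPONENTS = {
    "odh-workbench-jupyter-pytorch-cuda-py312-ubi9",
    "odh-pipeline-runtime-pytorch-rocm-py312-ubi9",
}


def build_platforms_for(component: str) -> list[str]:
    """Return the build-platforms list the generator should emit for a component."""
    if component in LARGE_AMD64_COMPONENTS:
        return ["linux-d160-m2xlarge/amd64"]
    return [DEFAULT_AMD64]
```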
```diff
     - name: build-platforms
       value:
-        - linux/x86_64
+        - linux-mxlarge/amd64
         - linux/ppc64le
         - linux/s390x
```
🧩 Analysis chain
🏁 Script executed:
# Find the YAML file mentioned in the review
find . -name "*odh-pipeline-runtime-datascience-cpu-py312-ubi9-pull-request.yaml" 2>/dev/null | head -5
# Also look for the generator script
find . -name "generate_pull_request_pipelineruns.py" 2>/dev/null | head -5
# List files in .tekton directory
ls -la .tekton/ 2>/dev/null | head -20
Repository: opendatahub-io/notebooks
Length of output: 1931
🏁 Script executed:
# Check the first few lines of the YAML file to verify the autogeneration marker
head -20 ./.tekton/odh-pipeline-runtime-datascience-cpu-py312-ubi9-pull-request.yaml
# Also check a section around lines 36-40 for the build-platforms
sed -n '30,45p' ./.tekton/odh-pipeline-runtime-datascience-cpu-py312-ubi9-pull-request.yaml
Repository: opendatahub-io/notebooks
Length of output: 1889
🏁 Script executed:
# Examine the generator script to see what platform identifiers it generates
cat ./scripts/generate_pull_request_pipelineruns.py | head -100
# Also search for any references to build-platforms or platform identifiers
grep -n "linux\|build-platforms\|platform" ./scripts/generate_pull_request_pipelineruns.py | head -30Repository: opendatahub-io/notebooks
Length of output: 4341
🏁 Script executed:
# Let's see more of the generator script, especially the logic around build_platforms
sed -n '130,200p' ./scripts/generate_pull_request_pipelineruns.py
# Also check if there's any reference to mxlarge
grep -n "mxlarge" ./scripts/generate_pull_request_pipelineruns.pyRepository: opendatahub-io/notebooks
Length of output: 2927
🏁 Script executed:
# Check the full logic for this component in the generator
grep -A 2 "odh-pipeline-runtime-datascience-cpu-py312-ubi9" ./scripts/generate_pull_request_pipelineruns.py
# Verify if mxlarge exists anywhere in the codebase
grep -r "mxlarge" . --include="*.py" 2>/dev/nullRepository: opendatahub-io/notebooks
Length of output: 307
🏁 Script executed:
# Let's verify the exact logic flow - check if there are any conditions that set mxlarge for datascience
grep -B 5 -A 5 "linux-mxlarge" ./scripts/generate_pull_request_pipelineruns.py
# Verify what the generator would produce for this component
grep -B 20 "odh-pipeline-runtime-datascience-cpu-py312-ubi9" ./scripts/generate_pull_request_pipelineruns.py | grep -E "(build_platforms|linux|arm64|amd64|x86)" | tail -20Repository: opendatahub-io/notebooks
Length of output: 222
Update scripts/generate_pull_request_pipelineruns.py to support the linux-mxlarge/amd64 platform identifier.
This file is autogenerated, but the build-platforms change from linux/x86_64 to linux-mxlarge/amd64 is manual. Add the necessary logic to the generator script to produce linux-mxlarge/amd64 for the odh-pipeline-runtime-datascience-cpu-py312-ubi9 component, otherwise the next regeneration will revert this change.
🤖 Prompt for AI Agents
In .tekton/odh-pipeline-runtime-datascience-cpu-py312-ubi9-pull-request.yaml
lines 36-40, the build-platforms entry uses the custom identifier
"linux-mxlarge/amd64" which is not produced by the autogenerated generator;
update scripts/generate_pull_request_pipelineruns.py to emit
"linux-mxlarge/amd64" for the odh-pipeline-runtime-datascience-cpu-py312-ubi9
component (or add a mapping rule) instead of the generic "linux/x86_64" so
regenerations keep this exact platform string; implement a conditional/mapping
in the generator that returns "linux-mxlarge/amd64" for that component name,
leave other components unchanged, and add a unit test or comment to prevent
future regressions.
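The "unit test ... to prevent future regressions" suggested above could be as simple as parsing the generated PipelineRun and asserting on the parameter. This is a sketch under the assumption that PyYAML and pytest are available; the test name is invented.

```python
# Regression guard for the regenerated PipelineRun; run with pytest.
from pathlib import Path

import yaml  # PyYAML

PIPELINERUN = Path(".tekton/odh-pipeline-runtime-datascience-cpu-py312-ubi9-pull-request.yaml")


def test_datascience_runtime_requests_mxlarge_amd64():
    doc = yaml.safe_load(PIPELINERUN.read_text())
    # PipelineRun params are a list of {name, value} mappings.
    params = {p["name"]: p["value"] for p in doc["spec"]["params"]}
    platforms = params["build-platforms"]
    assert "linux-mxlarge/amd64" in platforms
    assert "linux/x86_64" not in platforms
```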
4903d11 to 422ef0b (Compare)
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
.tekton/odh-workbench-jupyter-pytorch-llmcompressor-cuda-py312-ubi9-pull-request.yaml (1)
2-2: Do not manually modify autogenerated files; update the generator script instead. Line 2 indicates this file is autogenerated by `scripts/generate_pull_request_pipelineruns.py`. However, line 38 contains `linux-d160-m2xlarge/amd64`, which does not match the platform identifiers the generator would produce for this component. Manual changes to autogenerated files will be overwritten the next time the script runs.
If this platform identifier is needed, add the component to the appropriate conditional in the generator script and regenerate the files.
♻️ Duplicate comments (3)
.tekton/odh-pipeline-runtime-datascience-cpu-py312-ubi9-pull-request.yaml (1)
36-40: Verify the generator script has been updated to produce `linux-mxlarge/amd64`. As flagged in a previous review, this file is autogenerated but contains a manual change to the platform identifier. The generator script must be updated to prevent this change from being reverted on the next regeneration.
Run the following script to verify whether the generator script produces the correct platform identifier:
```bash
#!/bin/bash
# Description: Check if generator script produces linux-mxlarge/amd64 for this component

# Check the generator script for platform logic
echo "=== Generator script platform logic ==="
rg -n -A 5 -B 5 "build.platforms|linux-mxlarge|mxlarge" scripts/generate_pull_request_pipelineruns.py

# Check if there's component-specific platform mapping
echo -e "\n=== Component-specific platform mappings ==="
rg -n "odh-pipeline-runtime-datascience-cpu-py312-ubi9" scripts/generate_pull_request_pipelineruns.py

# Verify what the script would generate (if safe to run)
echo -e "\n=== Checking if script can be dry-run ==="
python scripts/generate_pull_request_pipelineruns.py --help 2>/dev/null || echo "Script may not support dry-run mode"
```
.tekton/odh-workbench-rstudio-minimal-cuda-py312-rhel9-pull-request.yaml (1)
40-40: The generation script issue has already been flagged. The past review comment already addresses the need to update `scripts/generate_pull_request_pipelineruns.py` to emit the new platform identifiers consistently.
.tekton/odh-workbench-rstudio-minimal-cpu-py312-c9s-pull-request.yaml (1)
38-38: The generation script issue has already been flagged. This autogenerated file was manually edited. The past review comment already addresses the need to update `scripts/generate_pull_request_pipelineruns.py` to maintain consistency across all autogenerated files.
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (43)
.tekton/multiarch-pull-request-pipeline.yaml (1 hunks), .tekton/multiarch-push-pipeline.yaml (1 hunks), .tekton/odh-base-image-cpu-py312-c9s-pull-request.yaml (1 hunks), .tekton/odh-base-image-cpu-py312-c9s-push.yaml (1 hunks), .tekton/odh-base-image-cpu-py312-ubi9-pull-request.yaml (1 hunks), .tekton/odh-base-image-cpu-py312-ubi9-push.yaml (1 hunks), .tekton/odh-base-image-cuda-12-8-py312-c9s-pull-request.yaml (1 hunks), .tekton/odh-base-image-cuda-12-8-py312-c9s-push.yaml (1 hunks), .tekton/odh-base-image-cuda-12-8-py312-ubi9-pull-request.yaml (1 hunks), .tekton/odh-base-image-cuda-12-8-py312-ubi9-push.yaml (1 hunks), .tekton/odh-base-image-cuda-py311-c9s-pull-request.yaml (1 hunks), .tekton/odh-base-image-cuda-py311-c9s-push.yaml (1 hunks), .tekton/odh-base-image-cuda-py312-c9s-pull-request.yaml (1 hunks), .tekton/odh-base-image-cuda-py312-c9s-push.yaml (1 hunks), .tekton/odh-base-image-cuda-py312-ubi9-pull-request.yaml (1 hunks), .tekton/odh-base-image-cuda-py312-ubi9-push.yaml (1 hunks), .tekton/odh-base-image-rocm-6-3-py312-c9s-pull-request.yaml (1 hunks), .tekton/odh-base-image-rocm-6-3-py312-ubi9-pull-request.yaml (1 hunks), .tekton/odh-base-image-rocm-6-4-py312-c9s-pull-request.yaml (1 hunks), .tekton/odh-base-image-rocm-6-4-py312-ubi9-pull-request.yaml (1 hunks), .tekton/odh-base-image-rocm-py312-c9s-pull-request.yaml (1 hunks), .tekton/odh-base-image-rocm-py312-ubi9-pull-request.yaml (1 hunks), .tekton/odh-pipeline-runtime-datascience-cpu-py312-ubi9-pull-request.yaml (1 hunks), .tekton/odh-pipeline-runtime-minimal-cpu-py312-ubi9-pull-request.yaml (1 hunks), .tekton/odh-pipeline-runtime-pytorch-cuda-py312-ubi9-pull-request.yaml (1 hunks), .tekton/odh-pipeline-runtime-pytorch-llmcompressor-cuda-py312-ubi9-pull-request.yaml (1 hunks), .tekton/odh-pipeline-runtime-pytorch-rocm-py312-ubi9-pull-request.yaml (1 hunks), .tekton/odh-pipeline-runtime-tensorflow-cuda-py312-ubi9-pull-request.yaml (1 hunks), .tekton/odh-pipeline-runtime-tensorflow-rocm-py312-ubi9-pull-request.yaml (1 hunks), .tekton/odh-workbench-jupyter-datascience-cpu-py312-ubi9-pull-request.yaml (1 hunks), .tekton/odh-workbench-jupyter-minimal-cpu-py312-ubi9-pull-request.yaml (1 hunks), .tekton/odh-workbench-jupyter-minimal-cuda-py312-ubi9-pull-request.yaml (1 hunks), .tekton/odh-workbench-jupyter-minimal-rocm-py312-ubi9-pull-request.yaml (1 hunks), .tekton/odh-workbench-jupyter-pytorch-cuda-py312-ubi9-pull-request.yaml (1 hunks), .tekton/odh-workbench-jupyter-pytorch-llmcompressor-cuda-py312-ubi9-pull-request.yaml (1 hunks), .tekton/odh-workbench-jupyter-pytorch-rocm-py312-ubi9-pull-request.yaml (1 hunks), .tekton/odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-pull-request.yaml (1 hunks), .tekton/odh-workbench-jupyter-tensorflow-rocm-py312-ubi9-pull-request.yaml (1 hunks), .tekton/odh-workbench-jupyter-trustyai-cpu-py312-ubi9-pull-request.yaml (1 hunks), .tekton/odh-workbench-rstudio-minimal-cpu-py312-c9s-pull-request.yaml (1 hunks), .tekton/odh-workbench-rstudio-minimal-cpu-py312-rhel9-pull-request.yaml (1 hunks), .tekton/odh-workbench-rstudio-minimal-cuda-py312-c9s-pull-request.yaml (1 hunks), .tekton/odh-workbench-rstudio-minimal-cuda-py312-rhel9-pull-request.yaml (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (27)
- .tekton/odh-base-image-rocm-6-4-py312-ubi9-pull-request.yaml
- .tekton/odh-workbench-jupyter-minimal-cuda-py312-ubi9-pull-request.yaml
- .tekton/odh-pipeline-runtime-tensorflow-cuda-py312-ubi9-pull-request.yaml
- .tekton/odh-pipeline-runtime-minimal-cpu-py312-ubi9-pull-request.yaml
- .tekton/odh-base-image-cpu-py312-ubi9-push.yaml
- .tekton/multiarch-push-pipeline.yaml
- .tekton/odh-base-image-cuda-py312-c9s-pull-request.yaml
- .tekton/odh-base-image-cuda-12-8-py312-ubi9-pull-request.yaml
- .tekton/odh-base-image-rocm-6-3-py312-c9s-pull-request.yaml
- .tekton/odh-base-image-rocm-py312-c9s-pull-request.yaml
- .tekton/odh-workbench-rstudio-minimal-cpu-py312-rhel9-pull-request.yaml
- .tekton/odh-base-image-cpu-py312-c9s-push.yaml
- .tekton/odh-base-image-cuda-12-8-py312-c9s-push.yaml
- .tekton/odh-pipeline-runtime-pytorch-llmcompressor-cuda-py312-ubi9-pull-request.yaml
- .tekton/odh-base-image-cuda-py311-c9s-pull-request.yaml
- .tekton/odh-base-image-cuda-12-8-py312-c9s-pull-request.yaml
- .tekton/odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-pull-request.yaml
- .tekton/odh-pipeline-runtime-tensorflow-rocm-py312-ubi9-pull-request.yaml
- .tekton/odh-base-image-rocm-6-4-py312-c9s-pull-request.yaml
- .tekton/odh-base-image-cuda-py311-c9s-push.yaml
- .tekton/odh-workbench-jupyter-pytorch-rocm-py312-ubi9-pull-request.yaml
- .tekton/odh-base-image-rocm-py312-ubi9-pull-request.yaml
- .tekton/odh-base-image-cpu-py312-c9s-pull-request.yaml
- .tekton/odh-base-image-cuda-py312-ubi9-pull-request.yaml
- .tekton/odh-workbench-rstudio-minimal-cuda-py312-c9s-pull-request.yaml
- .tekton/odh-base-image-rocm-6-3-py312-ubi9-pull-request.yaml
- .tekton/odh-base-image-cuda-py312-ubi9-push.yaml
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-08-28T12:42:23.404Z
Learnt from: jiridanek
Repo: opendatahub-io/notebooks PR: 2180
File: .tekton/odh-base-image-cuda-py312-ubi9-pull-request.yaml:36-39
Timestamp: 2025-08-28T12:42:23.404Z
Learning: Konflux build system in opendatahub-io/notebooks repository requires "linux/x86_64" platform identifier format for Tekton PipelineRun build-platforms parameter, not the OCI-standard "linux/amd64". This is consistently used across all .tekton/ pipeline files and enforced by scripts/generate_pull_request_pipelineruns.py type definitions. Docker/buildah contexts use "linux/amd64" while Konflux/Tekton contexts use "linux/x86_64".
Applied to files:
.tekton/odh-workbench-jupyter-trustyai-cpu-py312-ubi9-pull-request.yaml
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (32)
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-trustyai-cpu-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-rocm-6-3-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-rocm-py312-c9s-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-pytorch-llmcompressor-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-pipeline-runtime-tensorflow-rocm-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-rstudio-minimal-cuda-py312-rhel9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-minimal-cpu-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-pipeline-runtime-pytorch-llmcompressor-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-cuda-12-8-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-cuda-py311-c9s-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-cpu-py312-c9s-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-pytorch-rocm-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-minimal-rocm-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-rocm-6-4-py312-c9s-on-pull-request
- GitHub Check: Red Hat Konflux / odh-pipeline-runtime-pytorch-rocm-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-pipeline-runtime-pytorch-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-cuda-py312-c9s-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-rstudio-minimal-cpu-py312-rhel9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-rocm-6-4-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-rocm-6-3-py312-c9s-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-pytorch-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-pipeline-runtime-datascience-cpu-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-rstudio-minimal-cuda-py312-c9s-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-rocm-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-pipeline-runtime-tensorflow-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-tensorflow-rocm-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-cuda-12-8-py312-c9s-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-cpu-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-minimal-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-rstudio-minimal-cpu-py312-c9s-on-pull-request
🔇 Additional comments (12)
.tekton/odh-workbench-jupyter-pytorch-cuda-py312-ubi9-pull-request.yaml (1)
38-38: Verify this platform identifier is configured in your Konflux multi-platform-controller. The format `linux-d160-m2xlarge/amd64` deviates from the standard `linux/<arch>` format used in Tekton. The multi-platform-controller supports arbitrary platform strings configured on a platform basis, so this format is valid only if `d160-m2xlarge` is explicitly configured in your controller's host-config or dynamic allocation settings. Confirm the `multiarch-pull-request-pipeline` references a controller instance with this resource tier defined, and that hosts are available to support it.
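For orientation, a host-config entry for such a platform might look roughly like the following. This is purely illustrative: the key names and values are assumptions modeled on the multi-platform-controller's configuration style and must be checked against the actual Konflux deployment.

```yaml
# Hypothetical multi-platform-controller host-config excerpt (not from this repo).
apiVersion: v1
kind: ConfigMap
metadata:
  name: host-config
  namespace: multi-platform-controller
data:
  dynamic-platforms: linux-d160-m2xlarge/amd64,linux/arm64
  # Per-platform keys use the platform name with "/" replaced by "-" (assumption).
  dynamic.linux-d160-m2xlarge-amd64.type: aws
  dynamic.linux-d160-m2xlarge-amd64.instance-type: m6a.2xlarge
  dynamic.linux-d160-m2xlarge-amd64.disk: "160"
```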
.tekton/odh-workbench-jupyter-datascience-cpu-py312-ubi9-pull-request.yaml (2)
37-40: Mixed platform format is likely intentional. The `build-platforms` array now contains mixed formats: `linux-mxlarge/amd64` with a resource class prefix, while `linux/ppc64le` and `linux/s390x` remain in standard format. This is consistent with the PR objective of "taking advantage of larger available resources" specifically for amd64 builds, and likely reflects that resource class selection is only available or necessary for the amd64 platform in Konflux.
38-38: Verify the Konflux multi-platform-controller configuration supports the `linux-mxlarge/amd64` platform identifier. The platform format has been changed to include a resource class prefix (`mxlarge`), which differs from standard Docker/OCI format and from the other platforms in the list. The available platform values are determined by the configuration of the multi-platform-controller. Confirm that your Konflux instance's controller is configured to support this format and that the `multiarch-pull-request-pipeline` pipeline can parse it correctly.
.tekton/odh-workbench-jupyter-minimal-rocm-py312-ubi9-pull-request.yaml (1)
36-38: Verify the platform identifier is configured in the Konflux instance. The platform identifier `linux-d160-m2xlarge/amd64` differs from standard Docker platform formats and must be configured in your Konflux instance's multi-platform-controller. Confirm this platform exists in your deployment's configuration before merging.
.tekton/odh-base-image-cpu-py312-ubi9-pull-request.yaml (1)
37-40: Platform identifier update looks correct. The update to `linux-mxlarge/amd64` aligns with the multiarch-pull-request-pipeline template's default, and the `linux/arm64` entry is correctly preserved. This maintains multi-architecture support while updating the x86_64 platform identifier.
.tekton/multiarch-pull-request-pipeline.yaml (1)
92-96: The platform identifier `linux-mxlarge/amd64` is already widely used throughout the codebase in 18 Tekton pipeline files with consistent configuration. No orphaned references to the old `linux/x86_64` format exist in platform parameters, and the multi-platform-controller clearly supports the custom resource-size prefixed identifiers (including `linux-d160-m2xlarge/`, `linux-extra-fast/`, etc.). No verification action is needed.
.tekton/odh-base-image-cuda-12-8-py312-ubi9-push.yaml (1)
29-32: Verify the intentional use of `linux-mxlarge/amd64`. The platform identifier has been updated to `linux-mxlarge/amd64` while preserving `linux/arm64` for multi-platform builds. Note that this uses a different resource variant (`mxlarge`) compared to the `d160-m2xlarge` used in the pull-request pipeline. Confirm this resource allocation is appropriate for CUDA 12.8 base image builds.
.tekton/odh-base-image-cuda-py312-c9s-push.yaml (1)
29-32: LGTM! The platform identifier has been updated to `linux-mxlarge/amd64` while preserving `linux/arm64`. This change is consistent with similar push pipeline updates in the PR and maintains multi-platform build support.
.tekton/odh-pipeline-runtime-pytorch-cuda-py312-ubi9-pull-request.yaml (1)
36-38: The platform identifier `linux-d160-m2xlarge/amd64` is consistent with the project's build infrastructure configuration and is widely used across all GPU/accelerated build pipelines in the codebase. No issues identified.
.tekton/odh-workbench-jupyter-trustyai-cpu-py312-ubi9-pull-request.yaml (1)
36-41: File appears manually edited and will be overwritten by autogeneration script. This file is autogenerated by `scripts/generate_pull_request_pipelineruns.py`, but the `build-platforms` values on lines 36-41 don't match what the script produces for this component. The script generates `["linux/x86_64"]` by default, then extends with `["linux-m2xlarge/arm64"]`, `["linux/ppc64le"]`, and `["linux/s390x"]` for `odh-workbench-jupyter-trustyai-cpu-py312-ubi9`, not `linux-d160-m2xlarge/amd64`. Either regenerate this file by running the generation script or update the script to reflect the intended platform configuration.
Likely an incorrect or invalid review comment.
.tekton/odh-pipeline-runtime-pytorch-rocm-py312-ubi9-pull-request.yaml (2)
38-38: Platform format `linux-d160-m2xlarge/amd64` is valid and documented. The multiarch-pull-request-pipeline definition explicitly documents `build-platforms` as a parameter where "The available set of values is determined by the configuration of the multi-platform-controller." The default value in the pipeline is `linux-mxlarge/amd64`, matching the same format pattern. This custom format for specifying platform and resource tier is widely used across the codebase (rocm-pytorch, cuda, tensorflow, trustyai, and other images), indicating it is a documented Konflux multi-platform-controller feature, not a non-standard or invalid format.
Likely an incorrect or invalid review comment.
1-2: Do not manually edit autogenerated files. Update the generation script instead. This file is autogenerated by `scripts/generate_pull_request_pipelineruns.py`. The platform format `linux-d160-m2xlarge/amd64` at line 38 is a manual edit that will be overwritten when the script is next executed. If this custom platform is required, add the `odh-pipeline-runtime-pytorch-rocm-py312-ubi9` component to the appropriate conditional block in the generation script to ensure the change persists across regenerations.
```diff
     - name: build-platforms
       value:
-        - linux/x86_64
+        - linux-mxlarge/amd64
         - linux/ppc64le
     - name: dockerfile
```
🧩 Analysis chain
🏁 Script executed:
# Search for the multiarch-pull-request-pipeline definition
git ls-files "*.yaml" "*.yml" | head -20
# Find files referencing the pipeline
rg "multiarch-pull-request-pipeline" --type yaml
# Find other occurrences of build-platforms to see format patterns
rg "build-platforms:" --type yaml -A 5 | head -100
# Search for other platform identifier patterns with machine types
rg "linux-[a-z0-9-]+/[a-z0-9]+" --type yamlRepository: opendatahub-io/notebooks
Length of output: 863
🏁 Script executed:
#!/bin/bash
# Search for the multiarch-pull-request-pipeline definition
echo "=== Files referencing multiarch-pull-request-pipeline ==="
rg "multiarch-pull-request-pipeline" --type yaml -l
echo ""
echo "=== The file we're reviewing ==="
cat ".tekton/odh-workbench-jupyter-minimal-cpu-py312-ubi9-pull-request.yaml" | head -50
echo ""
echo "=== All build-platforms occurrences ==="
rg "build-platforms:" --type yaml -A 4
echo ""
echo "=== Search for linux- platform format ==="
rg "linux-[a-z0-9]+" --type yamlRepository: opendatahub-io/notebooks
Length of output: 2532
🏁 Script executed:
#!/bin/bash
# Check the generation script mentioned in the file header
echo "=== Generation script ==="
cat scripts/generate_pull_request_pipelineruns.py 2>/dev/null | head -100
echo ""
echo "=== Search for mxlarge in the repo ==="
rg "mxlarge" --type all
echo ""
echo "=== Search for platform configuration or machine types ==="
rg "machine.type|machine-type|resource" --type yaml
echo ""
echo "=== Check for push pipeline to compare ==="
cat ".tekton/odh-workbench-jupyter-minimal-cpu-py312-ubi9-push.yaml" 2>/dev/null | grep -A 5 "build-platforms:" || echo "Push file not found or no build-platforms"Repository: opendatahub-io/notebooks
Length of output: 8855
🏁 Script executed:
#!/bin/bash
# Find the corresponding push pipeline
echo "=== Looking for push pipelines ==="
find .tekton -name "*push.yaml" | head -10
echo ""
echo "=== Check odh-workbench-jupyter-minimal-cpu-py312-ubi9-push.yaml ==="
cat ".tekton/odh-workbench-jupyter-minimal-cpu-py312-ubi9-push.yaml" 2>/dev/null | grep -A 5 "build-platforms:" || echo "Not found"
echo ""
echo "=== Check other similar pull-request files ==="
find .tekton -name "*pull-request.yaml" -type f | head -5 | while read file; do
echo "=== $file ==="
grep -A 3 "build-platforms:" "$file" 2>/dev/null | head -10
done
echo ""
echo "=== Search the generation script for where build_platforms values come from ==="
grep -n "build.platforms\|linux" scripts/generate_pull_request_pipelineruns.py | head -30Repository: opendatahub-io/notebooks
Length of output: 1775
🏁 Script executed:
#!/bin/bash
# Get the full context around the build_platforms assignment
echo "=== Full conditional logic for build_platforms ==="
sed -n '130,180p' scripts/generate_pull_request_pipelineruns.py
echo ""
echo "=== Check the transform_build_pipeline_to_pr_pipeline function signature ==="
sed -n '110,140p' scripts/generate_pull_request_pipelineruns.py
Repository: opendatahub-io/notebooks
Length of output: 3382
Replace linux-mxlarge/amd64 with linux/x86_64.
The platform identifier linux-mxlarge/amd64 is invalid for this component. According to the generation script in scripts/generate_pull_request_pipelineruns.py, the component odh-workbench-jupyter-minimal-cpu-py312-ubi9 should use linux/x86_64 as the primary platform (line 143), extended with linux/ppc64le for multiarch builds. The mxlarge machine-type specifier does not exist in the pipeline configuration; only extra-fast and m2xlarge are valid. Change line 38 to - linux/x86_64.
🤖 Prompt for AI Agents
.tekton/odh-workbench-jupyter-minimal-cpu-py312-ubi9-pull-request.yaml lines
36-40: the build-platforms list contains an invalid platform identifier
`linux-mxlarge/amd64`; replace that entry with the correct primary platform
`linux/x86_64` so the list becomes `- linux/x86_64` and retain the existing `-
linux/ppc64le` entry for multiarch builds, ensuring the pipeline matches the
generation script expectations.
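For reference, the corrected parameter the generation script expects for this component (per the analysis above) would read:

```yaml
- name: build-platforms
  value:
    - linux/x86_64
    - linux/ppc64le
```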
```diff
     - name: build-platforms
       value:
-        - linux/x86_64
+        - linux-d160-m2xlarge/amd64
```
Clarify the platform identifier change against the PR description.
The change to linux-d160-m2xlarge/amd64 is a valid Konflux platform identifier with resource tier configuration. However, the PR description states the goal is to replace linux/x86_64 with linux/amd64, but the actual value includes the resource tier prefix d160-m2xlarge. Confirm whether this resource tier specification aligns with the intended change or if the PR description needs updating.
🤖 Prompt for AI Agents
.tekton/odh-workbench-jupyter-pytorch-llmcompressor-cuda-py312-ubi9-pull-request.yaml
lines 38-38: the value was changed to the Konflux identifier
"linux-d160-m2xlarge/amd64" which includes a resource-tier prefix
(d160-m2xlarge) but the PR description says replacing "linux/x86_64" with
"linux/amd64"; confirm intent and make one of two fixes: if the resource tier is
intended, update the PR description to state the full identifier
("linux-d160-m2xlarge/amd64") and check other pipeline files for consistent
usage, or if only the architecture change was intended, remove the resource-tier
prefix and set the line to "linux/amd64" (and adjust any related docs/tests to
match).
```diff
     - name: build-platforms
       value:
-        - linux/x86_64
+        - linux-d160-m2xlarge/amd64
```
Revert to standard Konflux platform identifier format.
The format linux-d160-m2xlarge/amd64 is not a valid Konflux platform identifier. Konflux Tekton pipelines use standard formats like "linux/x86_64" and "linux/arm64". The resource prefix d160-m2xlarge is not recognized by the Konflux build system. Change this back to linux/x86_64 to match the standard Konflux configuration used elsewhere in the repository.
🤖 Prompt for AI Agents
.tekton/odh-workbench-jupyter-tensorflow-rocm-py312-ubi9-pull-request.yaml
around line 38: the platform identifier "linux-d160-m2xlarge/amd64" is invalid
for Konflux; replace it with the standard identifier "linux/x86_64" so the
Tekton pipeline uses the recognized platform format; update the line to the
canonical platform string and ensure there are no other resource-specific
prefixes in that field.
422ef0b to 91a9326 (Compare)
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
.tekton/odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-pull-request.yaml (1)
24-54: Add taskRunSpecs to increase compute resources for pipeline tasks. The file is missing the `taskRunSpecs` configuration that similar PipelineRun files include. For consistency and to align with the PR's goal of using larger resources, add task-level compute resource specifications for the `clair-scan` and `ecosystem-cert-preflight-checks` tasks, similar to the pattern in `odh-workbench-jupyter-pytorch-cuda-py312-ubi9-pull-request.yaml`:
```yaml
taskRunSpecs:
  - pipelineTaskName: clair-scan
    computeResources:
      requests:
        cpu: '8'
        memory: 32Gi
      limits:
        cpu: '8'
        memory: 32Gi
  - pipelineTaskName: ecosystem-cert-preflight-checks
    computeResources:
      requests:
        cpu: '8'
        memory: 32Gi
      limits:
        cpu: '8'
        memory: 32Gi
```
.tekton/odh-base-image-rocm-6-3-py312-ubi9-pull-request.yaml (1)
1-2: Update the generator script to include the platform and taskRunSpecs configuration for this component. The YAML file contains manual changes, specifically the `linux-d160-m2xlarge/amd64` platform and the `taskRunSpecs` block for `clair-scan`, that are not present in `scripts/generate_pull_request_pipelineruns.py`. Since this file is autogenerated, these changes will be lost when the generator runs again. Add the platform assignment and task resource specifications to the generator script to persist these modifications.
.tekton/odh-workbench-rstudio-minimal-cuda-py312-rhel9-pull-request.yaml (1)
2-2: Update the generator script; direct edits to this autogenerated file will be overwritten. As noted in the past review comment, since this file is autogenerated by `scripts/generate_pull_request_pipelineruns.py`, all changes (both the platform identifier update and the `taskRunSpecs` addition) must be implemented in the generator script itself. Direct edits to this YAML file will be lost the next time the script runs.
Additionally, please note that the past review's characterization of this change as "purely a naming convention alignment" is incomplete. The change from `linux/x86_64` to `linux-d160-m4xlarge/amd64` includes a host instance type specification (`d160-m4xlarge`), which is a substantive change for resource selection, not merely a naming convention update. The generator script and PR description should reflect this distinction.
♻️ Duplicate comments (4)
.tekton/odh-workbench-jupyter-tensorflow-rocm-py312-ubi9-pull-request.yaml (1)
38-38: Platform identifier format remains flagged as invalid. This concern was already raised in a previous review. The format `linux-d160-m2xlarge/amd64` is not a standard Konflux platform identifier. Additionally, the PR description states the goal is to "replace linux/x86_64 with linux/amd64," but the actual value includes a resource-class prefix (`d160-m2xlarge`), which is inconsistent with the stated objective.
linux/amd64,linux/arm64, orlinux/x86_64. The resource allocation should be handled throughtaskRunSpecs(which you've correctly added in lines 45-62), not by embedding resource hints in the platform identifier.Verify whether this format is actually supported in your Konflux setup by running:
#!/bin/bash # Search for documentation or examples of platform identifier formats in the repository rg -n -C3 'build-platforms' --type=yaml | head -100.tekton/odh-pipeline-runtime-minimal-cpu-py312-ubi9-pull-request.yaml (1)
36-41: Generator script must be updated first to prevent data loss.This file is autogenerated (line 2), but the generator script
scripts/generate_pull_request_pipelineruns.pydoes not yet support the custom platform formatlinux-mxlarge/amd64. The previous review correctly identified that running the generator will overwrite these manual changes back tolinux/x86_64.Before merging, the generator script must be updated to:
- Support custom platform identifiers (linux-mxlarge/amd64, linux-d160-m2xlarge/amd64, etc.)
- Include logic to emit the correct platform format for each component
- Preserve the exact platform strings currently in the .tekton/ files
Verify that the generator script has been updated:
#!/bin/bash # Check if the generator script supports custom platform formats echo "=== Checking generator script for custom platform support ===" if grep -q "linux-mxlarge\|linux-d160" scripts/generate_pull_request_pipelineruns.py; then echo "✓ Generator script contains custom platform references" grep -n "linux-mxlarge\|linux-d160" scripts/generate_pull_request_pipelineruns.py else echo "✗ Generator script does NOT contain custom platform references" echo "Current platform logic:" grep -n "build_platforms\|linux/x86_64\|linux/amd64" scripts/generate_pull_request_pipelineruns.py | head -20 fi echo -e "\n=== Checking type hints for platform parameter ===" grep -A5 "def pull_request_pipelinerun_template" scripts/generate_pull_request_pipelineruns.py | grep -A3 "build_platforms".tekton/odh-base-image-rocm-6-3-py312-c9s-pull-request.yaml (1)
1-2: Same autogeneration concern as the previous file.This file also declares itself as autogenerated by
scripts/generate_pull_request_pipelineruns.py. Ensure the generator script is updated to produce these changes consistently across all affected pipeline files..tekton/odh-workbench-jupyter-pytorch-rocm-py312-ubi9-pull-request.yaml (1)
36-38: Trailing whitespace on line 37 already flagged.This formatting issue has been identified in previous reviews with fix suggestions provided.
🧹 Nitpick comments (2)
.tekton/odh-base-image-rocm-6-4-py312-ubi9-pull-request.yaml (1)
43-62: Minor: Extra blank lines.

Lines 43 and 62 introduce extra blank lines not present in similar files (e.g., odh-base-image-rocm-py312-ubi9-pull-request.yaml). Consider removing them for consistency, though since these files are autogenerated, this may be acceptable.

      - name: path-context
        value: .
-
  # Added taskRunSpecs to increase compute resources
  taskRunSpecs:
    - pipelineTaskName: clair-scan
      computeResources:
        requests:
          cpu: '8'
          memory: 32Gi
        limits:
          cpu: '8'
          memory: 32Gi
    - pipelineTaskName: ecosystem-cert-preflight-checks
      computeResources:
        requests:
          cpu: '8'
          memory: 32Gi
        limits:
          cpu: '8'
          memory: 32Gi
-
  pipelineRef:

.tekton/odh-base-image-rocm-6-3-py312-ubi9-pull-request.yaml (1)

43-43: Minor wording improvement.

The comment states "to increase compute resources," but this is actually adding taskRunSpecs for the first time, not increasing existing allocations.

Consider this wording:

- # Added taskRunSpecs to increase compute resources
+ # Added taskRunSpecs to allocate compute resources for resource-intensive tasks
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (43)
- .tekton/multiarch-pull-request-pipeline.yaml (1 hunks)
- .tekton/multiarch-push-pipeline.yaml (1 hunks)
- .tekton/odh-base-image-cpu-py312-c9s-pull-request.yaml (1 hunks)
- .tekton/odh-base-image-cpu-py312-c9s-push.yaml (1 hunks)
- .tekton/odh-base-image-cpu-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-base-image-cpu-py312-ubi9-push.yaml (1 hunks)
- .tekton/odh-base-image-cuda-12-8-py312-c9s-pull-request.yaml (1 hunks)
- .tekton/odh-base-image-cuda-12-8-py312-c9s-push.yaml (1 hunks)
- .tekton/odh-base-image-cuda-12-8-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-base-image-cuda-12-8-py312-ubi9-push.yaml (1 hunks)
- .tekton/odh-base-image-cuda-py311-c9s-pull-request.yaml (1 hunks)
- .tekton/odh-base-image-cuda-py311-c9s-push.yaml (1 hunks)
- .tekton/odh-base-image-cuda-py312-c9s-pull-request.yaml (1 hunks)
- .tekton/odh-base-image-cuda-py312-c9s-push.yaml (1 hunks)
- .tekton/odh-base-image-cuda-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-base-image-cuda-py312-ubi9-push.yaml (1 hunks)
- .tekton/odh-base-image-rocm-6-3-py312-c9s-pull-request.yaml (1 hunks)
- .tekton/odh-base-image-rocm-6-3-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-base-image-rocm-6-4-py312-c9s-pull-request.yaml (1 hunks)
- .tekton/odh-base-image-rocm-6-4-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-base-image-rocm-py312-c9s-pull-request.yaml (1 hunks)
- .tekton/odh-base-image-rocm-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-pipeline-runtime-datascience-cpu-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-pipeline-runtime-minimal-cpu-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-pipeline-runtime-pytorch-cuda-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-pipeline-runtime-pytorch-llmcompressor-cuda-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-pipeline-runtime-pytorch-rocm-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-pipeline-runtime-tensorflow-cuda-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-pipeline-runtime-tensorflow-rocm-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-workbench-jupyter-datascience-cpu-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-workbench-jupyter-minimal-cpu-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-workbench-jupyter-minimal-cuda-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-workbench-jupyter-minimal-rocm-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-workbench-jupyter-pytorch-cuda-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-workbench-jupyter-pytorch-llmcompressor-cuda-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-workbench-jupyter-pytorch-rocm-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-workbench-jupyter-tensorflow-rocm-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-workbench-jupyter-trustyai-cpu-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-workbench-rstudio-minimal-cpu-py312-c9s-pull-request.yaml (1 hunks)
- .tekton/odh-workbench-rstudio-minimal-cpu-py312-rhel9-pull-request.yaml (1 hunks)
- .tekton/odh-workbench-rstudio-minimal-cuda-py312-c9s-pull-request.yaml (1 hunks)
- .tekton/odh-workbench-rstudio-minimal-cuda-py312-rhel9-pull-request.yaml (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (18)
- .tekton/odh-base-image-cuda-12-8-py312-ubi9-pull-request.yaml
- .tekton/odh-workbench-rstudio-minimal-cuda-py312-c9s-pull-request.yaml
- .tekton/odh-base-image-cuda-12-8-py312-c9s-push.yaml
- .tekton/multiarch-pull-request-pipeline.yaml
- .tekton/odh-workbench-jupyter-pytorch-llmcompressor-cuda-py312-ubi9-pull-request.yaml
- .tekton/odh-pipeline-runtime-pytorch-cuda-py312-ubi9-pull-request.yaml
- .tekton/odh-workbench-jupyter-datascience-cpu-py312-ubi9-pull-request.yaml
- .tekton/odh-base-image-cpu-py312-c9s-push.yaml
- .tekton/odh-base-image-cpu-py312-c9s-pull-request.yaml
- .tekton/odh-pipeline-runtime-pytorch-rocm-py312-ubi9-pull-request.yaml
- .tekton/odh-base-image-cuda-12-8-py312-ubi9-push.yaml
- .tekton/odh-workbench-jupyter-minimal-cuda-py312-ubi9-pull-request.yaml
- .tekton/odh-workbench-jupyter-minimal-rocm-py312-ubi9-pull-request.yaml
- .tekton/odh-workbench-jupyter-trustyai-cpu-py312-ubi9-pull-request.yaml
- .tekton/odh-pipeline-runtime-datascience-cpu-py312-ubi9-pull-request.yaml
- .tekton/odh-pipeline-runtime-tensorflow-cuda-py312-ubi9-pull-request.yaml
- .tekton/odh-base-image-cuda-12-8-py312-c9s-pull-request.yaml
- .tekton/odh-workbench-jupyter-minimal-cpu-py312-ubi9-pull-request.yaml
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (21)
- GitHub Check: Red Hat Konflux / odh-base-image-rocm-py312-c9s-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-cuda-py312-c9s-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-rocm-6-4-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-rstudio-minimal-cpu-py312-c9s-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-tensorflow-rocm-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-minimal-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-pytorch-llmcompressor-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-rocm-6-3-py312-c9s-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-datascience-cpu-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-rocm-6-4-py312-c9s-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-minimal-cpu-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-cuda-12-8-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-rocm-6-3-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-cpu-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-pipeline-runtime-tensorflow-rocm-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-rstudio-minimal-cuda-py312-c9s-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-rstudio-minimal-cpu-py312-rhel9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-pipeline-runtime-pytorch-rocm-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-pipeline-runtime-tensorflow-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-rstudio-minimal-cuda-py312-rhel9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-pipeline-runtime-pytorch-llmcompressor-cuda-py312-ubi9-on-pull-request
🔇 Additional comments (28)
.tekton/odh-workbench-jupyter-tensorflow-rocm-py312-ubi9-pull-request.yaml (1)
45-62: LGTM! Correct approach for resource allocation.

The taskRunSpecs configuration is the proper Tekton mechanism for increasing compute resources for specific pipeline tasks. The resource specifications are well-structured with matching requests and limits (guaranteed QoS class), and the 8 CPU / 32Gi memory allocation is appropriate for resource-intensive tasks like clair-scan and ecosystem-cert-preflight-checks.

.tekton/odh-pipeline-runtime-pytorch-llmcompressor-cuda-py312-ubi9-pull-request.yaml (2)
45-62: Confirm resource specifications are appropriate.

The taskRunSpecs section is well-structured. Note that setting requests equal to limits (8 CPUs, 32Gi for both tasks) eliminates resource burstability but ensures guaranteed resources. This is likely intentional for predictability in CI/CD pipelines.

Please confirm:

- The 8 CPU / 32Gi specifications are appropriate for both the clair-scan and ecosystem-cert-preflight-checks tasks
- Both tasks genuinely require identical resource allocations
38-38: Verify the platform identifier format against your Konflux configuration.

The platform identifier linux-d160-m2xlarge/amd64 uses a non-standard format combining a host type (d160-m2xlarge) with the architecture. The available set of values for this parameter is determined by the configuration of the multi-platform-controller. Confirm this format is valid in your Konflux deployment and that d160-m2xlarge is the correct host type for this pipeline.

.tekton/multiarch-push-pipeline.yaml (1)
114-118: Verify the custom platform identifier is valid in your Konflux infrastructure.

The multi-platform-controller platform is an arbitrary string, allowing different configurations of the same underlying platform. The identifier linux-mxlarge/amd64 must be configured in your Konflux instance's multi-platform-controller before this pipeline can execute. Confirm this platform name exists in your infrastructure's host-config ConfigMap or pool configuration.
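As a reference for where this default lives, a minimal sketch of the parameter shape in the pipeline definition (the apiVersion, metadata name, and surrounding fields are assumptions inferred from the filename, not copied from the file):

apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: multiarch-push-pipeline
spec:
  params:
    - name: build-platforms
      type: array
      # default used by every PipelineRun that does not override the param
      default:
        - linux-mxlarge/amd64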
.tekton/odh-base-image-cuda-py311-c9s-push.yaml (1)

29-32: Update the PR description to accurately reflect the implementation change.

The platform identifier format linux-mxlarge/amd64 is valid and used throughout the codebase (e.g., linux-extra-fast/amd64, linux-m2xlarge/arm64). However, the PR description states the goal is to "replace linux/x86_64 with linux/amd64" while the actual change uses linux-mxlarge/amd64. Update the PR description to say "replace linux/x86_64 with linux-mxlarge/amd64" to accurately reflect the resource sizing change intended by this modification.

Likely an incorrect or invalid review comment.
.tekton/odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-pull-request.yaml (1)
36-39: and
39-40: Platform configuration aligns with established patterns in the codebase.The
linux-mxlarge/amd64identifier for x86_64 builds is consistent with patterns used across multiple.tekton/configuration files (odh-base-image-cuda, odh-base-image-cuda-12-8, odh-workbench-* variants, and pipeline configuration files). Thelinux/arm64notation for ARM64 builds is standard. These platform identifiers are managed by the Konflux multi-platform controller configuration and follow the resource tier naming convention already established in your pipelines..tekton/odh-pipeline-runtime-tensorflow-rocm-py312-ubi9-pull-request.yaml (2)
45-62: LGTM: Resource specifications for scan tasks.

The taskRunSpecs addition appropriately allocates resources for the scan tasks. Setting requests equal to limits creates a Guaranteed QoS class, ensuring these tasks won't be throttled—particularly important for scanning large ROCm/TensorFlow container images.
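For context, Kubernetes assigns the Guaranteed QoS class only when every container's requests exactly match its limits; a lower request with a higher limit would instead yield the Burstable class. A minimal sketch of the two patterns (the sizes in the first block are the ones used in this PR; the Burstable variant is hypothetical):

# Guaranteed: requests == limits, as in these taskRunSpecs
computeResources:
  requests:
    cpu: '8'
    memory: 32Gi
  limits:
    cpu: '8'
    memory: 32Gi

# Burstable alternative (not used here): lower requests, higher limits
computeResources:
  requests:
    cpu: '4'
    memory: 16Gi
  limits:
    cpu: '8'
    memory: 32Gi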
38-38: This platform identifier format is consistent throughout the codebase and is the standard configuration for this repository's Konflux infrastructure. No verification needed—the value linux-d160-m2xlarge/amd64 follows the established pattern used across 40+ similar pipeline files.

Likely an incorrect or invalid review comment.
.tekton/odh-workbench-jupyter-pytorch-cuda-py312-ubi9-pull-request.yaml (2)
45-62: Resource specifications look appropriate for scanning tasks.

The addition of taskRunSpecs with substantial compute resources (8 CPU, 32Gi memory) for clair-scan and ecosystem-cert-preflight-checks is well-structured. Setting identical requests and limits prevents bursting but guarantees resources, which is appropriate for predictable production workloads.

A few observations:
- Both tasks receive the same resource allocation; ensure this is appropriate for each task's actual workload characteristics
- The clair-scan and preflight-checks tasks are typically resource-intensive, so the substantial allocation appears justified
- The pattern is consistent with other PipelineRuns in this PR, indicating a coordinated approach
You may want to monitor resource utilization after deployment to confirm these allocations are optimal and adjust if needed.
38-38: The platform identifier linux-d160-m4xlarge/amd64 is consistent with the established pattern across 40+ Tekton YAML files in this repository. Similar identifiers like linux-d160-m2xlarge/amd64, linux-mxlarge/amd64, and linux-extra-fast/amd64 are used throughout, indicating this is a custom resource allocation scheme for your build infrastructure rather than standard Docker platform syntax. No changes needed.

.tekton/odh-base-image-rocm-py312-c9s-pull-request.yaml (1)
36-38: Platform update looks correct; verify if taskRunSpecs should also be added.

The linux-d160-m2xlarge/amd64 platform correctly replaces linux/x86_64. However, similar ROCm files in this PR (e.g., odh-base-image-rocm-py312-ubi9-pull-request.yaml) also add taskRunSpecs for clair-scan and ecosystem-cert-preflight-checks. Please confirm whether this file intentionally omits taskRunSpecs or if it should be added for consistency.

.tekton/odh-base-image-rocm-6-4-py312-c9s-pull-request.yaml (1)
36-38: Platform update is correct.

Same pattern as odh-base-image-rocm-py312-c9s-pull-request.yaml — the platform is updated but taskRunSpecs is not added. If the omission in both c9s variants is intentional, this is fine; otherwise, consider adding the resource specs for consistency with the ubi9 variants.

.tekton/odh-base-image-cuda-py312-ubi9-pull-request.yaml (2)
36-39: LGTM!

Platform updated to linux-mxlarge/amd64 while preserving multi-arch support with linux/arm64.
44-61: Resource specifications are well-configured.

Setting requests == limits ensures a Guaranteed QoS class for these tasks, which provides predictable resource allocation. The 8 CPU / 32Gi memory allocation aligns with other files in this PR.

.tekton/odh-base-image-rocm-py312-ubi9-pull-request.yaml (2)
36-38: LGTM!

Platform correctly updated to linux-d160-m2xlarge/amd64.
43-60: LGTM!

The taskRunSpecs configuration for clair-scan and ecosystem-cert-preflight-checks is consistent with other ubi9 pipeline files in this PR.

.tekton/odh-base-image-rocm-6-4-py312-ubi9-pull-request.yaml (1)
36-38: LGTM!

Platform updated to linux-d160-m4xlarge/amd64. The larger instance type (m4xlarge vs m2xlarge) compared to the ROCm 6.2 variant suggests intentionally increased resources for ROCm 6.4 builds.

.tekton/odh-base-image-rocm-6-3-py312-ubi9-pull-request.yaml (2)
44-60: Good resource allocation strategy.

The taskRunSpecs configuration appropriately allocates dedicated compute resources (8 CPUs, 32Gi memory) for clair-scan and ecosystem-cert-preflight-checks. Setting requests equal to limits ensures QoS Guaranteed class, which prevents resource contention for these critical security and certification tasks.
38-38: Platform identifier format is valid for Konflux but machine class availability must be confirmed.

The format linux-d160-m2xlarge/amd64 is a valid Konflux/multi-platform-controller platform specification. The platform is an arbitrary string allowing for different configurations of the same underlying platform, which supports custom identifiers that include machine class information. However, the available set of values is determined by the configuration of the multi-platform-controller. Ensure the d160-m2xlarge machine class is configured in your infrastructure's host-config ConfigMap and that builds can be successfully allocated to it.

.tekton/odh-base-image-rocm-6-3-py312-c9s-pull-request.yaml (1)
38-60: Changes are consistent with the first file.

The platform identifier update and taskRunSpecs additions match the pattern from odh-base-image-rocm-6-3-py312-ubi9-pull-request.yaml, demonstrating consistency across the PR. The same verification and minor comment wording suggestion from the previous file apply here.

.tekton/odh-workbench-jupyter-pytorch-rocm-py312-ubi9-pull-request.yaml (1)
45-62: The taskRunSpecs configuration is valid and appropriate.

The task names clair-scan and ecosystem-cert-preflight-checks are legitimate tasks in the multiarch-pull-request-pipeline. The resource allocations (8 CPUs, 32Gi memory) are intentional overrides to address resource exhaustion issues and are consistently applied across multiple similar pipeline configurations in the codebase.

# Added taskRunSpecs to increase compute resources
taskRunSpecs:
  - pipelineTaskName: clair-scan
    computeResources:
      requests:
        cpu: '8'
        memory: 32Gi
      limits:
        cpu: '8'
        memory: 32Gi
  - pipelineTaskName: ecosystem-cert-preflight-checks
    computeResources:
      requests:
        cpu: '8'
        memory: 32Gi
      limits:
        cpu: '8'
        memory: 32Gi

.tekton/odh-workbench-rstudio-minimal-cuda-py312-rhel9-pull-request.yaml (2)
47-64: Approved. The taskRunSpecs addition correctly allocates guaranteed resources (8 CPU, 32Gi memory) to the clair-scan and ecosystem-cert-preflight-checks tasks defined in multiarch-pull-request-pipeline.yaml, appropriately supporting resource-intensive scanning and preflight operations.
38-40: Verify the host pool platform identifier is recognized by your Konflux deployment.

The available set of platform values is determined by the configuration of the multi-platform-controller, making them environment-specific. However, standard Konflux documentation shows architecture examples as "linux/amd64", not the format linux-d160-m4xlarge/amd64 shown in lines 38-40. The d160-m4xlarge prefix appears to be a host pool or instance identifier specific to your infrastructure. Confirm this format is valid in your Konflux deployment and that d160-m4xlarge corresponds to an available host pool.

Additionally, the PR description states "replace linux/x86_64 with linux/amd64" but the actual change is to linux-d160-m4xlarge/amd64 – consider updating the description to clarify that this also specifies a particular host pool for builds.
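To illustrate where such a pool would be declared, here is a hypothetical host-config fragment for the multi-platform-controller. The key names follow the controller's dynamic-pool convention, but every value below is a placeholder assumption, not taken from this infrastructure:

apiVersion: v1
kind: ConfigMap
metadata:
  name: host-config
  namespace: multi-platform-controller
data:
  dynamic-platforms: linux-d160-m4xlarge/amd64,linux/arm64
  # hypothetical pool definition; instance type and disk size are illustrative
  dynamic.linux-d160-m4xlarge-amd64.type: aws
  dynamic.linux-d160-m4xlarge-amd64.instance-type: m6a.4xlarge
  dynamic.linux-d160-m4xlarge-amd64.disk: "160"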
.tekton/odh-base-image-cuda-py312-c9s-pull-request.yaml (1)

36-39: Build platforms updated with host-specific identifiers.

Both amd64 and arm64 platforms have been updated with host-specific prefixes (linux-d160-m2xlarge/amd64 and linux-m2xlarge/arm64), which aligns with the PR objective to leverage larger available resources.

However, note the inconsistency with the corresponding push pipeline already flagged in my previous comment on odh-base-image-cuda-py312-c9s-push.yaml.

.tekton/odh-workbench-rstudio-minimal-cpu-py312-rhel9-pull-request.yaml (1)
40-40: This file is autogenerated and the platform identifier format is consistent with existing patterns in this repository (e.g., linux-d160-m4xlarge/amd64, linux-extra-fast/amd64, linux-m2xlarge/arm64). The absence of taskRunSpecs appears intentional for this minimal CPU variant, whereas the comparable CUDA variant includes them for higher compute requirements. No changes needed.

.tekton/odh-workbench-rstudio-minimal-cpu-py312-c9s-pull-request.yaml (2)
45-61: Resource sizing is consistent across the PR.

The taskRunSpecs addition is part of a systematic change affecting 25 pull-request pipeline files. All files with taskRunSpecs use identical resource allocation (8 CPUs, 32Gi memory) for both the clair-scan and ecosystem-cert-preflight-checks tasks, while 48 other files remain unchanged. The consistent sizing across diverse workload types (jupyter, rstudio, pipeline-runtime, base-image variants) suggests this is a deliberate standardization effort rather than an arbitrary change. Matching requests and limits appropriately ensures Guaranteed QoS class for these resource-intensive scanning tasks.
38-38: Verify build platform alignment and document the resource sizing decision.

The platform identifier linux-mxlarge/amd64 is valid in the Konflux/AppStudio infrastructure (as referenced in the script comments pointing to the AppStudio host-config configuration). However, this component (odh-workbench-rstudio-minimal-cpu-py312-c9s) is not explicitly configured in the generation script's special cases for platform overrides. The generation script would default components to linux/x86_64 unless specifically listed for alternative platforms.

Since this file was manually modified to use linux-mxlarge/amd64 alongside the new taskRunSpecs allocation (8 CPU / 32Gi memory), please clarify:
- Is this deliberate for this specific workload to use larger build resources despite the "minimal-cpu" naming, or should it follow the script-generated default?
- If intentional, consider updating the generation script to document this choice so the platform identifier is preserved across future regenerations (currently odh-workbench-rstudio-minimal-cpu-py312-c9s is not in the special case lists).
  - name: build-platforms
    value:
-     - linux/x86_64
+     - linux-mxlarge/amd64
      - linux/arm64
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Find and examine the CUDA YAML files mentioned in the review
# Locate the files
echo "=== Finding CUDA Tekton YAML files ==="
find .tekton -name "*cuda*" -type f | head -20
echo ""
echo "=== Content of push pipeline (lines 29-32) ==="
if [ -f ".tekton/odh-base-image-cuda-py312-c9s-push.yaml" ]; then
sed -n '29,32p' .tekton/odh-base-image-cuda-py312-c9s-push.yaml
else
echo "File not found"
fi
echo ""
echo "=== Content of PR pipeline (lines 38-39) ==="
if [ -f ".tekton/odh-base-image-cuda-py312-c9s-pull-request.yaml" ]; then
sed -n '38,39p' .tekton/odh-base-image-cuda-py312-c9s-pull-request.yaml
else
echo "File not found"
fi
echo ""
echo "=== Full build-platforms sections in push pipeline ==="
grep -A 4 "build-platforms:" .tekton/odh-base-image-cuda-py312-c9s-push.yaml 2>/dev/null || echo "Not found"
echo ""
echo "=== Full build-platforms sections in PR pipeline ==="
grep -A 4 "build-platforms:" .tekton/odh-base-image-cuda-py312-c9s-pull-request.yaml 2>/dev/null || echo "Not found"Repository: opendatahub-io/notebooks
Length of output: 1692
🏁 Script executed:
#!/bin/bash
# Compare build-platforms across multiple CUDA push/PR file pairs
echo "=== Comparing build-platforms across CUDA push/PR pairs ==="
echo ""
# Get list of CUDA base image files
CUDA_BASE_FILES=$(find .tekton -name "odh-base-image-cuda*" -type f | sort)
for push_file in $(echo "$CUDA_BASE_FILES" | grep "push.yaml"); do
pr_file="${push_file//-push.yaml/-pull-request.yaml}"
if [ -f "$pr_file" ]; then
echo "=== File pair: $(basename "$push_file") and $(basename "$pr_file") ==="
echo "Push:"
grep -A 4 "name: build-platforms" "$push_file" | head -5
echo "PR:"
grep -A 4 "name: build-platforms" "$pr_file" | head -5
echo ""
fi
done

Repository: opendatahub-io/notebooks
Length of output: 1861
Review platform identifier consistency across all CUDA push and pull-request pipeline pairs.
Inconsistent build-platforms specifications were found across multiple CUDA Tekton YAML files:

- File under review (odh-base-image-cuda-py312-c9s-push.yaml): uses linux-mxlarge/amd64 and linux/arm64, while the corresponding pull-request pipeline uses linux-d160-m2xlarge/amd64 and linux-m2xlarge/arm64.
- Related file (odh-base-image-cuda-py311-c9s-push.yaml): uses linux-mxlarge/amd64 and linux/arm64, while the PR pipeline uses linux-d160-m4xlarge/amd64 and linux-d160-m4xlarge/arm64.
- Defect found (odh-base-image-cuda-12-8-py312-c9s-push.yaml): the amd64 platform entry is malformed (linux-d160-m2xlarge without an architecture suffix); the PR pipeline correctly specifies linux-d160-m2xlarge/amd64.

Some file pairs (cuda-12-8-py312-ubi9 and cuda-py312-ubi9) maintain consistent platforms across push and PR pipelines, suggesting these differences may be intentional. Please clarify whether the inconsistencies across files are deliberate resource allocation choices for push vs. PR workflows, or whether they should be standardized.
Force-pushed from 6d5210d to b7de368.
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
.tekton/odh-workbench-jupyter-minimal-cuda-py312-ubi9-pull-request.yaml (1)
2-2: Update the generation script to use the new platform identifier for CUDA builds.

Since this file is autogenerated by scripts/generate_pull_request_pipelineruns.py, the generation script must be updated to produce linux-d160-m4xlarge/amd64 instead of the default linux/x86_64. Currently, the generation script does not contain this platform identifier, so these manual edits will be lost on the next regeneration. Update the script to include platform logic for CUDA components and add linux-d160-m4xlarge/amd64 to the type annotation on line 51.
♻️ Duplicate comments (7)
.tekton/odh-pipeline-runtime-datascience-cpu-py312-ubi9-pull-request.yaml (1)
36-40: Generator script still needs updating.

The previous review comment regarding the need to update scripts/generate_pull_request_pipelineruns.py to support linux-mxlarge/amd64 for this component still applies. Until the generator is updated, this manual change will be reverted on the next regeneration.

.tekton/odh-base-image-cuda-12-8-py312-ubi9-pull-request.yaml (1)
36-39: Generator script still needs updating.

The platform change from linux/x86_64 to linux-mxlarge/amd64 will revert on the next regeneration unless the generator script is updated (as noted in previous reviews).

.tekton/odh-base-image-cuda-12-8-py312-c9s-pull-request.yaml (1)
36-39: Generator script requires updates.

As flagged in previous reviews, the generator script has not been updated to emit these platform identifiers. Manual changes will be overwritten on the next regeneration.
.tekton/odh-workbench-jupyter-pytorch-llmcompressor-cuda-py312-ubi9-pull-request.yaml (2)
38-38: Generator script update required.

As noted in previous reviews, the generator script must be updated to prevent these platform changes from being reverted.
45-62: taskRunSpecs configuration approved.

The compute resource specifications are appropriate. However, as noted in past reviews, the generator script must be updated to include taskRunSpecs logic.

.tekton/odh-workbench-jupyter-minimal-cpu-py312-ubi9-pull-request.yaml (1)
38-38: Critical: Inconsistent and potentially invalid platform identifier.

This line shows linux-mxlarge/amd64, but the AI-generated summary for this file states the platform should be updated to linux-d160-m4xlarge/amd64. Additionally, a previous review flagged linux-mxlarge/amd64 as an invalid platform identifier, noting that the generation script expects linux/x86_64 for this component.

There are multiple conflicting pieces of information:
- PR description: "Update from linux/x86_64 to linux/amd64"
- AI summary: change should be to linux-d160-m4xlarge/amd64
- Actual code: shows linux-mxlarge/amd64
- Past review: claims only linux/x86_64, extra-fast, and m2xlarge are valid

Please verify:
- What is the correct platform identifier format for this PipelineRun?
- Should this match the AI summary's linux-d160-m4xlarge/amd64?
- Is the generation script (scripts/generate_pull_request_pipelineruns.py) up-to-date with the new platform identifiers?

Run the following script to check the generation script and other similar files for the correct pattern:
#!/bin/bash
# Check the generation script for platform identifier logic
echo "=== Generation script platform logic ==="
cat scripts/generate_pull_request_pipelineruns.py | grep -A 10 -B 5 "build.platforms\|linux"
echo ""
echo "=== Check all pull-request YAML files for platform patterns ==="
rg "^\s+- linux-[a-z0-9-]+/amd64" .tekton/ -n -C 2
echo ""
echo "=== Check for mxlarge vs d160-m4xlarge usage ==="
rg "mxlarge|d160" .tekton/ --type yaml -n

.tekton/odh-pipeline-runtime-minimal-cpu-py312-ubi9-pull-request.yaml (1)
36-41: Verify the custom platform format is supported and clarify the PR description discrepancy.

The platform value linux-mxlarge/amd64 differs from standard Docker platform notation (linux/amd64) and appears to be a Konflux-specific resource selector. Additionally, the PR description states updating to "linux/amd64" but the actual change uses "linux-mxlarge/amd64".

Please verify:

- That linux-mxlarge/amd64 is a valid and supported platform selector in your Konflux/Tekton environment
- Whether the PR description should be updated to reflect the custom format being used
- Why other architectures (arm64, ppc64le, s390x) remain in standard format while only x86_64→amd64 uses the custom selector (see the sketch after this list)
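For reference, the mixed parameter shape being described would look roughly like this (a sketch reassembled from the observations above, not copied verbatim from the file):

- name: build-platforms
  value:
    - linux-mxlarge/amd64   # custom Konflux resource selector for the amd64 build
    - linux/arm64           # remaining architectures keep the standard notation
    - linux/ppc64le
    - linux/s390x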
What is the linux-mxlarge platform selector in Konflux Tekton pipelines?

Note: The generator script incompatibility mentioned in past review comments remains unresolved.
🧹 Nitpick comments (4)
.tekton/odh-pipeline-runtime-pytorch-llmcompressor-cuda-py312-ubi9-pull-request.yaml (1)
45-45: Consider enhancing the comment specificity.

The inline comment states "Added taskRunSpecs to increase compute resources" but could be more informative for future maintainers.
Consider updating to something like:
# TaskRunSpecs configure guaranteed resources (8 CPUs, 32Gi) for scanning tasks
# to leverage larger d160-m2xlarge hosts and reduce pipeline execution time

.tekton/odh-base-image-cuda-12-8-py312-ubi9-pull-request.yaml (1)
40-43: Consider adding taskRunSpecs for consistency.

Several other CUDA/ROCm pipelines in this PR (e.g., odh-base-image-cuda-12-8-py312-c9s-pull-request.yaml, odh-base-image-rocm-6-3-py312-ubi9-pull-request.yaml) add taskRunSpecs to increase compute resources for the clair-scan and ecosystem-cert-preflight-checks tasks. Consider whether this pipeline would benefit from the same resource allocation.

.tekton/odh-base-image-cuda-py312-c9s-pull-request.yaml (1)
40-43: Consider adding taskRunSpecs for resource-intensive tasks.

Other similar pipelines in this PR include taskRunSpecs to allocate 8 CPUs and 32Gi memory for clair-scan and ecosystem-cert-preflight-checks. If these tasks are resource-intensive for this component, adding the same configuration would be beneficial.

.tekton/odh-workbench-jupyter-minimal-rocm-py312-ubi9-pull-request.yaml (1)
38-38: Verify the host pool configuration for d160-m2xlarge.

The platform identifier linux-d160-m2xlarge/amd64 is a Konflux-specific host pool format (not standard OCI format). While this format is consistently used across pull-request pipelines in the codebase, confirm that the d160-m2xlarge host pool is available and properly configured in your Konflux environment.
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (35)
- .tekton/odh-base-image-cpu-py312-c9s-pull-request.yaml (2 hunks)
- .tekton/odh-base-image-cpu-py312-ubi9-pull-request.yaml (2 hunks)
- .tekton/odh-base-image-cuda-12-8-py312-c9s-pull-request.yaml (2 hunks)
- .tekton/odh-base-image-cuda-12-8-py312-ubi9-pull-request.yaml (2 hunks)
- .tekton/odh-base-image-cuda-py311-c9s-pull-request.yaml (2 hunks)
- .tekton/odh-base-image-cuda-py312-c9s-pull-request.yaml (2 hunks)
- .tekton/odh-base-image-cuda-py312-ubi9-pull-request.yaml (2 hunks)
- .tekton/odh-base-image-rocm-6-3-py312-c9s-pull-request.yaml (2 hunks)
- .tekton/odh-base-image-rocm-6-3-py312-ubi9-pull-request.yaml (2 hunks)
- .tekton/odh-base-image-rocm-6-4-py312-c9s-pull-request.yaml (2 hunks)
- .tekton/odh-base-image-rocm-6-4-py312-ubi9-pull-request.yaml (2 hunks)
- .tekton/odh-base-image-rocm-py312-c9s-pull-request.yaml (2 hunks)
- .tekton/odh-base-image-rocm-py312-ubi9-pull-request.yaml (2 hunks)
- .tekton/odh-pipeline-runtime-datascience-cpu-py312-ubi9-pull-request.yaml (2 hunks)
- .tekton/odh-pipeline-runtime-minimal-cpu-py312-ubi9-pull-request.yaml (2 hunks)
- .tekton/odh-pipeline-runtime-pytorch-cuda-py312-ubi9-pull-request.yaml (2 hunks)
- .tekton/odh-pipeline-runtime-pytorch-llmcompressor-cuda-py312-ubi9-pull-request.yaml (2 hunks)
- .tekton/odh-pipeline-runtime-pytorch-rocm-py312-ubi9-pull-request.yaml (2 hunks)
- .tekton/odh-pipeline-runtime-tensorflow-cuda-py312-ubi9-pull-request.yaml (2 hunks)
- .tekton/odh-pipeline-runtime-tensorflow-rocm-py312-ubi9-pull-request.yaml (2 hunks)
- .tekton/odh-workbench-codeserver-datascience-cpu-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-workbench-jupyter-datascience-cpu-py312-ubi9-pull-request.yaml (2 hunks)
- .tekton/odh-workbench-jupyter-minimal-cpu-py312-ubi9-pull-request.yaml (2 hunks)
- .tekton/odh-workbench-jupyter-minimal-cuda-py312-ubi9-pull-request.yaml (2 hunks)
- .tekton/odh-workbench-jupyter-minimal-rocm-py312-ubi9-pull-request.yaml (2 hunks)
- .tekton/odh-workbench-jupyter-pytorch-cuda-py312-ubi9-pull-request.yaml (2 hunks)
- .tekton/odh-workbench-jupyter-pytorch-llmcompressor-cuda-py312-ubi9-pull-request.yaml (2 hunks)
- .tekton/odh-workbench-jupyter-pytorch-rocm-py312-ubi9-pull-request.yaml (2 hunks)
- .tekton/odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-pull-request.yaml (2 hunks)
- .tekton/odh-workbench-jupyter-tensorflow-rocm-py312-ubi9-pull-request.yaml (2 hunks)
- .tekton/odh-workbench-jupyter-trustyai-cpu-py312-ubi9-pull-request.yaml (2 hunks)
- .tekton/odh-workbench-rstudio-minimal-cpu-py312-c9s-pull-request.yaml (2 hunks)
- .tekton/odh-workbench-rstudio-minimal-cpu-py312-rhel9-pull-request.yaml (2 hunks)
- .tekton/odh-workbench-rstudio-minimal-cuda-py312-c9s-pull-request.yaml (2 hunks)
- .tekton/odh-workbench-rstudio-minimal-cuda-py312-rhel9-pull-request.yaml (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (16)
- .tekton/odh-pipeline-runtime-pytorch-cuda-py312-ubi9-pull-request.yaml
- .tekton/odh-pipeline-runtime-tensorflow-cuda-py312-ubi9-pull-request.yaml
- .tekton/odh-workbench-jupyter-datascience-cpu-py312-ubi9-pull-request.yaml
- .tekton/odh-workbench-jupyter-pytorch-rocm-py312-ubi9-pull-request.yaml
- .tekton/odh-workbench-rstudio-minimal-cpu-py312-rhel9-pull-request.yaml
- .tekton/odh-workbench-rstudio-minimal-cuda-py312-rhel9-pull-request.yaml
- .tekton/odh-base-image-rocm-6-4-py312-ubi9-pull-request.yaml
- .tekton/odh-base-image-cpu-py312-c9s-pull-request.yaml
- .tekton/odh-workbench-jupyter-pytorch-cuda-py312-ubi9-pull-request.yaml
- .tekton/odh-pipeline-runtime-tensorflow-rocm-py312-ubi9-pull-request.yaml
- .tekton/odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-pull-request.yaml
- .tekton/odh-base-image-rocm-6-4-py312-c9s-pull-request.yaml
- .tekton/odh-base-image-rocm-py312-ubi9-pull-request.yaml
- .tekton/odh-base-image-cpu-py312-ubi9-pull-request.yaml
- .tekton/odh-workbench-jupyter-trustyai-cpu-py312-ubi9-pull-request.yaml
- .tekton/odh-pipeline-runtime-pytorch-rocm-py312-ubi9-pull-request.yaml
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (51)
- GitHub Check: build (runtime-datascience-ubi9-python-3.12, 3.12, linux/s390x, false) / build
- GitHub Check: build (rocm-jupyter-pytorch-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (jupyter-datascience-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (runtime-cuda-tensorflow-ubi9-python-3.12, 3.12, linux/arm64, false) / build
- GitHub Check: build (cuda-jupyter-tensorflow-ubi9-python-3.12, 3.12, linux/arm64, false) / build
- GitHub Check: build (cuda-jupyter-pytorch-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (runtime-cuda-pytorch-llmcompressor-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (codeserver-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (codeserver-ubi9-python-3.12, 3.12, linux/arm64, false) / build
- GitHub Check: build (cuda-jupyter-tensorflow-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (cuda-jupyter-pytorch-llmcompressor-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (jupyter-datascience-ubi9-python-3.12, 3.12, linux/ppc64le, false) / build
- GitHub Check: build (runtime-datascience-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (rocm-runtime-pytorch-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (runtime-cuda-tensorflow-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (runtime-cuda-pytorch-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (rstudio-rhel9-python-3.12, 3.12, linux/amd64, true) / build
- GitHub Check: build (cuda-rstudio-rhel9-python-3.12, 3.12, linux/amd64, true) / build
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-minimal-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-rstudio-minimal-cuda-py312-rhel9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-tensorflow-rocm-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-minimal-rocm-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-cuda-py311-c9s-on-pull-request
- GitHub Check: Red Hat Konflux / odh-pipeline-runtime-minimal-cpu-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-pytorch-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-pipeline-runtime-tensorflow-rocm-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-pytorch-llmcompressor-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-rocm-6-3-py312-c9s-on-pull-request
- GitHub Check: Red Hat Konflux / odh-pipeline-runtime-pytorch-llmcompressor-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-rstudio-minimal-cpu-py312-rhel9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-rocm-py312-c9s-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-minimal-cpu-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-rocm-6-4-py312-c9s-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-pytorch-rocm-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-cuda-py312-c9s-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-rstudio-minimal-cuda-py312-c9s-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-rocm-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-pipeline-runtime-pytorch-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-rocm-6-3-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-pipeline-runtime-tensorflow-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-rocm-6-4-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-datascience-cpu-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-rstudio-minimal-cpu-py312-c9s-on-pull-request
- GitHub Check: Red Hat Konflux / odh-pipeline-runtime-pytorch-rocm-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-cuda-12-8-py312-c9s-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-cpu-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-pipeline-runtime-datascience-cpu-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-cuda-12-8-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-cpu-py312-c9s-on-pull-request
🔇 Additional comments (41)
.tekton/odh-pipeline-runtime-datascience-cpu-py312-ubi9-pull-request.yaml (1)
26-26: Pipeline timeout increase looks appropriate.

Doubling the timeout to 6h is reasonable when moving to larger build hosts that may require additional provisioning time or run more comprehensive builds.
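For reference, the field these timeout comments refer to is the PipelineRun-level timeout; a minimal sketch (the metadata name is a placeholder, not a file from this PR):

apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: example-on-pull-request
spec:
  timeouts:
    # overall budget for the whole pipeline, raised from 3h in this PR
    pipeline: 6h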
.tekton/odh-workbench-codeserver-datascience-cpu-py312-ubi9-pull-request.yaml (3)
25-26: Pipeline timeout doubled to 6h.

The timeout increase from 3h to 6h aligns with the PR's goal of leveraging larger resources. However, verify that 6h is appropriate for this specific workload's typical completion time plus a reasonable buffer.
36-41: Verify platform selection consistency.

This file uses linux-extra-fast/amd64 for the amd64 build, while other PipelineRuns in this PR use linux-mxlarge/amd64, linux-d160-m2xlarge/amd64, or linux-d160-m4xlarge/amd64. Confirm that linux-extra-fast/amd64 is the correct platform choice for this workload, or if it should be updated to match the pattern used in other files for resource consistency.
24-56: Missing taskRunSpecs for resource allocation.

Several other PipelineRuns in this PR add taskRunSpecs blocks to allocate explicit compute resources (8 CPUs and 32Gi memory) for tasks like clair-scan and ecosystem-cert-preflight-checks. This file does not include such specifications. Verify whether this omission is intentional or if this PipelineRun should also include resource allocations for consistency and to fully leverage the larger available resources mentioned in the PR objectives.

.tekton/odh-workbench-rstudio-minimal-cpu-py312-c9s-pull-request.yaml (3)
26-26: Pipeline timeout extended to accommodate larger resources.

The timeout increase from 3h to 6h aligns with the resource expansion and additional task specifications.

38-38: No action needed. The platform identifier format linux-mxlarge/amd64 is valid and standard in this codebase. It represents a Konflux multi-platform-controller node pool selector, and the sizing choice is consistent with other CPU-only minimal workloads. Different workload types appropriately use different sizes (e.g., linux-d160-m2xlarge/amd64 for ROCm/CUDA builds, linux-mxlarge/amd64 for CPU builds).
45-61: Verify resource allocations for task specifications with cluster capacity context.

The taskRunSpecs allocations (8 CPU, 32Gi memory each with guaranteed QoS) are defensible for security scanning operations. However, to fully validate these specifications, confirm:
- Whether both tasks run concurrently (requiring 16 CPUs + 64Gi memory simultaneously) or sequentially
- The linux-mxlarge node pool has documented capacity to support peak concurrent usage
- Whether guaranteed QoS (requests=limits) is intentional for predictable performance vs. scheduling flexibility tradeoffs
.tekton/odh-base-image-cuda-py311-c9s-pull-request.yaml (2)
26-26: Timeout increase aligns with larger resource allocation.

The 6-hour pipeline timeout is appropriate for builds using the larger d160-m4xlarge hosts.
38-39: AI summary contains factual inconsistencies with the code.

The AI summary states:
- "Build-platforms updated: linux/x86_64 -> linux-d160-m2xlarge/amd64"
- "Added taskRunSpecs to increase compute resources"
However, the actual code shows:
- Line 38: linux-d160-m4xlarge/amd64 (not m2xlarge)
- No taskRunSpecs section is present in the file

Please verify that the summary accurately reflects the changes in this file, or if it was generated from a different version.
.tekton/odh-workbench-rstudio-minimal-cuda-py312-c9s-pull-request.yaml (2)
26-26: LGTM: Timeout increase aligns with platform change.

Doubling the pipeline timeout to 6h is reasonable given the shift to new build platforms with potentially different scheduling characteristics.
24-53: Without access to the repository to verify the comparison with other PR files and the specific tasks run by the multiarch-pull-request-pipeline, I cannot confirm whether taskRunSpecs should be present in this PipelineRun. Consider checking whether the referenced pipeline definition requires or recommends taskRunSpecs for specific security scanning tasks (like clair-scan or ecosystem-cert-preflight-checks) before adding them.

.tekton/odh-pipeline-runtime-pytorch-llmcompressor-cuda-py312-ubi9-pull-request.yaml (3)
26-26: Verify the timeout increase rationale.

The pipeline timeout has been doubled to 6h despite allocating significantly more compute resources (8 CPUs, 32Gi memory per task). Typically, increased resources should reduce execution time, not extend it.
Possible explanations:
- Longer queue times on the larger host classes
- Defensive measure for resource contention
- Empirical finding from testing
Please confirm this timeout increase is intentional and document the reasoning if it's based on observed behavior.
45-62: TaskRunSpecs configuration looks good.

The addition of taskRunSpecs with guaranteed QoS (requests == limits) for clair-scan and ecosystem-cert-preflight-checks is appropriate for ensuring consistent performance of critical security scanning tasks.

The resource allocation (8 CPUs, 32Gi memory) appears reasonable for container image scanning and preflight certification checks.
Please confirm:
- These resource values have been validated through testing
- The larger host class (d160-m2xlarge) provides sufficient capacity for these allocations
- Similar taskRunSpecs are consistently applied across other pipeline files in this PR
38-38: The platform identifier linux-d160-m2xlarge/amd64 is the established format used consistently across all pull-request pipeline files in this repository. This format is valid for Konflux's multi-platform-controller, where d160-m2xlarge specifies the host class and instance size for the build. No changes needed.

Likely an incorrect or invalid review comment.
.tekton/odh-base-image-cuda-12-8-py312-ubi9-pull-request.yaml (1)
26-26: Timeout increase looks reasonable.

The 6h timeout aligns with the PR objective to accommodate builds on larger resource pools.
.tekton/odh-base-image-cuda-py312-c9s-pull-request.yaml (2)
26-26: Timeout increase approved.

The 6h timeout is consistent with other pipelines in this PR and supports builds on larger resource pools.
36-39: Verify platform identifiers are configured in your Konflux multi-platform-controller.

The platform identifiers linux-d160-m2xlarge/amd64 and linux-m2xlarge/arm64 use a non-standard format with resource-tier prefixes. While Konflux's multi-platform controller accepts arbitrary platform strings, these must be explicitly configured in your controller's host-config. Confirm the multi-platform-controller deployment supports these specific identifiers, or use standard formats like linux/amd64 and linux/arm64 if not configured.

.tekton/odh-base-image-cuda-12-8-py312-c9s-pull-request.yaml (2)
26-26: Timeout increase approved.

The 6h timeout supports longer builds on the larger resource pools specified in the platform identifiers.
44-61: taskRunSpecs configuration looks good.

The increased compute resources (8 CPUs, 32Gi memory) for clair-scan and ecosystem-cert-preflight-checks align with the PR objective to leverage larger available resources. However, these changes also need to be incorporated into the generator script to persist across regenerations.

.tekton/odh-base-image-rocm-6-3-py312-ubi9-pull-request.yaml (2)
26-26: Timeout increase approved.

Consistent with other pipelines in this PR.

43-60: taskRunSpecs configuration is appropriate.

The increased compute resources for scanning tasks align with the PR objective. Ensure these specifications are also added to the generator script.
.tekton/odh-workbench-jupyter-pytorch-llmcompressor-cuda-py312-ubi9-pull-request.yaml (1)
26-26: Timeout increase approved.

Supports longer build times on larger resource pools.
.tekton/odh-base-image-rocm-6-3-py312-c9s-pull-request.yaml (2)
26-26: Timeout increase approved.

Consistent with the PR objective and other pipeline changes.

43-60: taskRunSpecs configuration looks good.

Resource allocations are appropriate for scanning tasks. Ensure the generator script is updated to include this configuration.
.tekton/odh-workbench-jupyter-tensorflow-rocm-py312-ubi9-pull-request.yaml (3)
26-26: Timeout increase approved.

Aligns with other pipelines in this PR.

45-62: taskRunSpecs configuration approved pending platform identifier verification.

The compute resource specifications are appropriate. However, ensure the generator script is updated and the platform identifier format is confirmed as valid.
38-38: Platform identifier format requires clarification before merging.

The conflicting assessments about linux-d160-m2xlarge/amd64 validity cannot be resolved without access to project-specific platform configuration or additional context from other PipelineRun files in this repository.

.tekton/odh-workbench-jupyter-minimal-cuda-py312-ubi9-pull-request.yaml (3)
26-26: Verify that the 6-hour timeout aligns with actual build times.

The pipeline timeout was doubled from 3 hours to 6 hours. While this provides more headroom, ensure this duration is based on observed build times with the new platform configuration, as excessively long timeouts can delay failure detection.
45-62: Resource specifications look appropriate for the specified tasks.

The taskRunSpecs additions allocate 8 CPUs and 32Gi memory to both clair-scan and ecosystem-cert-preflight-checks. Setting requests equal to limits provides guaranteed QoS, which is appropriate for resource-intensive security scanning and certification tasks. These allocations align with the PR objective of leveraging larger available resources.
36-38: The platform identifier format linux-d160-m4xlarge/amd64 is valid for Konflux deployments using the multi-platform controller. Platform strings in Konflux are arbitrary and environment-specific; the prefix specifies the host resource class, which must be configured in the multi-platform-controller's host-config. Ensure the d160-m4xlarge host class is available in your deployment configuration.

.tekton/odh-workbench-jupyter-minimal-cpu-py312-ubi9-pull-request.yaml (2)
26-26: Verify the timeout increase aligns with the new build platform.

The pipeline timeout has been doubled from 3h to 6h. While this change aligns with the PR objective of using larger resources, ensure this timeout is appropriate for the actual build platform being used.
46-63: The taskRunSpecs configuration is correct and aligns with repository standards.

The taskRunSpecs addition appropriately allocates 8 CPU and 32Gi memory to the clair-scan and ecosystem-cert-preflight-checks tasks, matching the identical configuration used across similar pull-request pipelines in the repository. Setting requests equal to limits ensures guaranteed resource availability for these critical scanning operations. This is an established pattern used in 26 of 73 Tekton pipeline files.

.tekton/odh-workbench-jupyter-minimal-rocm-py312-ubi9-pull-request.yaml (2)
26-26: Verify the 6-hour timeout is necessary.

A 6-hour timeout for a PR pipeline is quite long and could delay developer feedback. Ensure this extended duration is justified for ROCm builds on the new host type, as excessive timeouts can block resources unnecessarily.
45-62: Verify the resource requirements are appropriate.

Both clair-scan and ecosystem-cert-preflight-checks are allocated identical, substantial resources (8 CPUs, 32Gi memory each). Consider whether:
- These specific values are necessary for ROCm image builds or if they could be tuned based on actual task requirements.
- The two tasks genuinely need identical resources, or if they could have different allocations based on their specific workloads (see the sketch after this list).
- Setting requests equal to limits is optimal for your use case (this prevents overcommit but reduces scheduling flexibility).
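As a purely hypothetical illustration of per-task tuning, the taskRunSpecs block could size the two tasks differently; the smaller figures below are placeholders, not measured requirements:

taskRunSpecs:
  - pipelineTaskName: clair-scan
    computeResources:
      # keep the larger allocation for scanning big ROCm image layers
      requests:
        cpu: '8'
        memory: 32Gi
      limits:
        cpu: '8'
        memory: 32Gi
  - pipelineTaskName: ecosystem-cert-preflight-checks
    computeResources:
      # hypothetical smaller sizing, if profiling shows the task needs less
      requests:
        cpu: '4'
        memory: 16Gi
      limits:
        cpu: '4'
        memory: 16Gi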
.tekton/odh-base-image-rocm-py312-c9s-pull-request.yaml (4)
26-26: Verify the timeout increase is necessary.

The pipeline timeout has been doubled from 3h to 6h. While this may accommodate longer build times on different machine types, please confirm this increase is intentional and necessary. Consider whether:
- The new machine type requires longer provisioning time
- The build process itself takes longer
- This masks any performance issues that should be addressed
24-51: Inconsistency: Missing taskRunSpecs mentioned in AI summary.

The AI-generated summary states that taskRunSpecs were added for the clair-scan and ecosystem-cert-preflight-checks tasks with compute resources (8 CPUs, 32Gi memory). However, these additions are not visible in the provided code changes. Please verify:
- Were these changes intended for this file?
- Should taskRunSpecs be added to this pipeline configuration?
- Is the AI summary describing changes from a different file?
1-51: Reminder: Sync changes to konflux-central repository.

Based on the PR comments, these changes will need to be pushed to the konflux-central repository. Please ensure this sync is completed as part of the deployment process.
38-38: Verify machine type resource allocation for ROCm pipelines.

The platform change to linux-d160-m2xlarge/amd64 is applied consistently across multiple GPU-intensive pipelines (ROCm and CUDA variants). While the commit message "More resources" suggests an intentional infrastructure decision to provide larger compute resources for GPU builds, the source of this machine type assignment cannot be definitively traced through the visible generation logic. Confirm this machine type is provisioned and appropriate within your Konflux infrastructure for ROCm build requirements.

.tekton/odh-base-image-cuda-py312-ubi9-pull-request.yaml (2)
25-26: Timeout increase looks reasonable.

Doubling the pipeline timeout from 3h to 6h aligns with the platform change and increased resource allocation.
44-61: Good addition with appropriate resource allocation.

The taskRunSpecs section explicitly allocates substantial resources (8 CPU, 32Gi memory) to the clair-scan and ecosystem-cert-preflight-checks tasks, which aligns with the PR objective to leverage larger available resources.

However, verify:
- These resource levels are available in the target Konflux environment
- The pipeline has been tested with these resource allocations (the PR checklist shows testing items as unchecked)
- Setting requests = limits provides guaranteed QoS but reduces scheduling flexibility—confirm this is the intended behavior

.tekton/odh-pipeline-runtime-minimal-cpu-py312-ubi9-pull-request.yaml (1)
26-26: File is out of sync with its generator script.

This file is marked as autogenerated by scripts/generate_pull_request_pipelineruns.py, but contains hardcoded values that contradict the generator:

- Timeout: the file has 6h but the generator produces 3h
- First platform: the file has linux-mxlarge/amd64 but the generator produces linux/x86_64

Either regenerate this file using the script, update the generator to match these values, or remove the autogeneration claim.

Likely an incorrect or invalid review comment.
- linux-d160-m4xlarge/amd64
- linux-d160-m4xlarge/arm64
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Find the generation script and examine it
if [ -f "scripts/generate_pull_request_pipelineruns.py" ]; then
echo "=== Found generation script ==="
head -50 scripts/generate_pull_request_pipelineruns.py
echo ""
echo "=== Searching for platform references ==="
grep -n "platform\|d160\|x86_64\|arm64" scripts/generate_pull_request_pipelineruns.py | head -30
else
echo "Script not found at expected path"
find . -name "*generate*pipeline*.py" -o -name "*pipelinerun*.py" 2>/dev/null | head -10
fi

Repository: opendatahub-io/notebooks
Length of output: 2122
🏁 Script executed:
#!/bin/bash
# Get more context around platform assignment logic
sed -n '125,185p' scripts/generate_pull_request_pipelineruns.py

Repository: opendatahub-io/notebooks
Length of output: 2499
Update the generation script before merging to persist these platform changes.
This file is autogenerated from scripts/generate_pull_request_pipelineruns.py. The script currently hardcodes ["linux/x86_64", "linux/arm64"] for CUDA components (lines 145-151), so these manual changes to linux-d160-m4xlarge/amd64 and linux-d160-m4xlarge/arm64 will be overwritten when the script is run next. Update the generation script to support d160-m4xlarge platforms for CUDA components, or these changes will be lost.
🤖 Prompt for AI Agents
In .tekton/odh-base-image-cuda-py311-c9s-pull-request.yaml around lines 38-39,
the platform entries were changed to linux-d160-m4xlarge/amd64 and
linux-d160-m4xlarge/arm64 but this file is autogenerated; update
scripts/generate_pull_request_pipelineruns.py (around lines 145-151 where CUDA
platforms are hardcoded as ["linux/x86_64","linux/arm64"]) to emit the
d160-m4xlarge/amd64 and d160-m4xlarge/arm64 variants for CUDA components (either
by adding a mapping for CUDA platform names or by parameterizing platform list),
then regenerate the YAML so the manual changes persist.
| - name: build-platforms | ||
| value: | ||
| - linux/x86_64 | ||
| - linux-mxlarge/amd64 | ||
| - linux/arm64 |
🧩 Analysis chain
🏁 Script executed:
cd /tmp && cat > /tmp/examine_yaml.sh << 'EOF'
#!/bin/bash
# First, let's find and examine the file mentioned
echo "=== Locating the YAML file ==="
find . -name "*odh-base-image-cuda-py312-ubi9-pull-request.yaml" -type f 2>/dev/null
echo -e "\n=== Examining content around build-platforms ==="
# Let's look for files containing build-platforms
rg -n "build-platforms" --type yaml -B 5 -A 5 2>/dev/null | head -100
echo -e "\n=== Looking for platform patterns in .tekton directory ==="
find ./.tekton -name "*.yaml" -type f 2>/dev/null | head -10
echo -e "\n=== Checking for similar platform specifications ==="
rg -n "linux-mxlarge|linux/arm64|linux/x86_64" --type yaml 2>/dev/null | head -50
EOF
chmod +x /tmp/examine_yaml.sh
bash /tmp/examine_yaml.sh

Repository: opendatahub-io/notebooks
Length of output: 262
🏁 Script executed:
#!/bin/bash
echo "=== Repository structure ==="
ls -la .tekton/ 2>/dev/null | head -20
echo -e "\n=== All YAML files in .tekton ==="
find ./.tekton -type f -name "*.yaml" 2>/dev/null
echo -e "\n=== All pull-request related files ==="
find ./.tekton -type f -name "*pull-request*" 2>/dev/null
echo -e "\n=== General repository structure ==="
ls -la | head -20
echo -e "\n=== All YAML files containing 'odh' ==="
find . -type f -name "*odh*.yaml" 2>/dev/null | head -20

Repository: opendatahub-io/notebooks
Length of output: 11726
🏁 Script executed:
#!/bin/bash
echo "=== File content around lines 30-45 ==="
head -n 50 ./.tekton/odh-base-image-cuda-py312-ubi9-pull-request.yaml | tail -n 25
echo -e "\n=== Full file content ==="
cat -n ./.tekton/odh-base-image-cuda-py312-ubi9-pull-request.yaml
echo -e "\n=== Checking for build-platforms in all pull-request files ==="
grep -n "build-platforms" ./.tekton/*pull-request*.yaml
echo -e "\n=== Checking all platform values across files ==="
grep -n "linux-" ./.tekton/*.yaml | grep -E "(linux-|amd64|arm64)" | head -20Repository: opendatahub-io/notebooks
Length of output: 9334
🏁 Script executed:
#!/bin/bash
echo "=== Checking CPU-only files for arm64 platform ==="
grep -A 5 "build-platforms" ./.tekton/odh-base-image-cpu-py312-ubi9-pull-request.yaml
grep -A 5 "build-platforms" ./.tekton/odh-base-image-cpu-py312-c9s-pull-request.yaml
echo -e "\n=== Checking if linux/arm64 appears anywhere else ==="
grep -n "linux/arm64" ./.tekton/*.yaml
echo -e "\n=== Checking push.yaml files for the same image ==="
grep -A 3 "build-platforms" ./.tekton/odh-base-image-cuda-py312-ubi9-push.yamlRepository: opendatahub-io/notebooks
Length of output: 1411
🏁 Script executed:
#!/bin/bash
echo "=== Check multiarch-pull-request-pipeline.yaml to understand build-platforms usage ==="
grep -B 5 -A 10 "name: build-platforms" ./.tekton/multiarch-pull-request-pipeline.yaml | head -30
echo -e "\n=== Check if there are any comments explaining platform choices ==="
grep -B 10 "linux-mxlarge" ./.tekton/odh-base-image-cuda-py312-ubi9-pull-request.yaml | head -15
echo -e "\n=== Verify if this is a code-generation artifact ==="
head -5 ./.tekton/odh-base-image-cuda-py312-ubi9-pull-request.yaml
echo -e "\n=== Check the generation script to understand the pattern ==="
find . -name "generate_pull_request*" -type fRepository: opendatahub-io/notebooks
Length of output: 1422
🏁 Script executed:
#!/bin/bash
echo "=== Checking the generation script ==="
head -100 ./scripts/generate_pull_request_pipelineruns.py
echo -e "\n=== Looking for platform configuration/mapping ==="
grep -r "linux-mxlarge\|linux/arm64" ./scripts/ 2>/dev/null
echo -e "\n=== Check if there's a configuration file defining platforms ==="
find . -name "*.yaml" -o -name "*.yml" -o -name "*.json" | xargs grep -l "linux-mxlarge\|build.*platform" 2>/dev/null | head -5Repository: opendatahub-io/notebooks
Length of output: 4646
🏁 Script executed:
#!/bin/bash
echo "=== Extract the complete generation function for build_platforms ==="
grep -A 150 "def transform_build_pipeline_to_pr_pipeline" ./scripts/generate_pull_request_pipelineruns.py | grep -A 50 "build_platforms"
echo -e "\n=== Check docker_metadata.yaml or similar config for platform overrides ==="
find . -name "*docker*" -o -name "*metadata*" -o -name "*config*" | grep -E "\.(yaml|yml|json)$" | head -10
echo -e "\n=== Look for any recent changes to platform specifications ==="
git log --oneline --all -- ".tekton/*ubi9*pull-request.yaml" | head -10
echo -e "\n=== Check git diff to see what changed in this specific file ==="
git diff HEAD~5..HEAD -- "./.tekton/odh-base-image-cuda-py312-ubi9-pull-request.yaml" 2>/dev/null || echo "No recent git history available"

Repository: opendatahub-io/notebooks
Length of output: 5132
🏁 Script executed:
#!/bin/bash
echo "=== Find where the amd64 platform prefix is set for this component ==="
grep -B 30 "odh-base-image-cuda-py312-ubi9" ./scripts/generate_pull_request_pipelineruns.py | grep -A 30 "build_platforms"
echo -e "\n=== Commit that introduced these resource changes ==="
git log --all --oneline -p -- "./scripts/generate_pull_request_pipelineruns.py" | grep -B 5 -A 10 "linux-mxlarge\|More resources" | head -50
echo -e "\n=== Check if there's documentation about resource tiers ==="
find . -name "*.md" -type f | xargs grep -l "linux-mxlarge\|resource.*tier\|build.*platform" 2>/dev/nullRepository: opendatahub-io/notebooks
Length of output: 832
🏁 Script executed:
#!/bin/bash
echo "=== Get the full platform configuration logic from generation script ==="
sed -n '/def transform_build_pipeline_to_pr_pipeline/,/^def /p' ./scripts/generate_pull_request_pipelineruns.py | head -200
echo -e "\n=== Check the most recent commit message ==="
git log --format="%H %s %b" -n 5

Repository: opendatahub-io/notebooks
Length of output: 4766
🏁 Script executed:
#!/bin/bash
echo "=== Check if push.yaml for this component has different platforms ==="
cat -n ./.tekton/odh-base-image-cuda-py312-ubi9-push.yaml | grep -A 5 "build-platforms"
echo -e "\n=== Compare with another CUDA file that has resource specs ==="
echo "--- odh-base-image-cuda-py312-c9s-pull-request.yaml (has d160-m2xlarge) ---"
grep -A 3 "build-platforms" ./.tekton/odh-base-image-cuda-py312-c9s-pull-request.yaml
echo -e "\n--- odh-base-image-cuda-12-8-py312-ubi9-pull-request.yaml ---"
grep -A 3 "build-platforms" ./.tekton/odh-base-image-cuda-12-8-py312-ubi9-pull-request.yaml
echo -e "\n=== Verify the generation script actually generates what's in the file ==="
python3 << 'EOF'
# Check if script would generate linux-mxlarge/amd64 for this component
component = "odh-base-image-cuda-py312-ubi9"
build_platforms = ["linux/x86_64"]
if component in [
"odh-base-image-cuda-py311-c9s",
"odh-base-image-cuda-py312-c9s",
"odh-base-image-cuda-py312-ubi9",
"odh-pipeline-runtime-minimal-cpu-py312-ubi9",
]:
build_platforms.extend(["linux/arm64"])
print(f"Generated platforms for {component}:")
print(build_platforms)
print(f"\nActual platforms in file:")
print("['linux-mxlarge/amd64', 'linux/arm64']")
print(f"\nMismatch: script generates {build_platforms} but file has ['linux-mxlarge/amd64', 'linux/arm64']")
EOF

Repository: opendatahub-io/notebooks
Length of output: 154
Verify the platform identifier format and fix asymmetry.
The build-platforms list at lines 38-39 shows an inconsistency: `linux-mxlarge/amd64` for the amd64 platform but plain `linux/arm64` for arm64.
Verify that:
- Both platforms should use the same resource-tier format (e.g., both `linux-mxlarge/amd64` and `linux-mxlarge/arm64`), or
- Neither should include the resource prefix
Other multiarch pipelines in this repository that specify resource tiers apply them consistently to both architectures (see odh-base-image-cuda-py312-c9s-pull-request.yaml). Update arm64 to match the amd64 resource specification.
🤖 Prompt for AI Agents
.tekton/odh-base-image-cuda-py312-ubi9-pull-request.yaml around lines 36 to 39:
the build-platforms list uses an inconsistent platform identifier format (amd64
is specified as linux-mxlarge/amd64 while arm64 is plain linux/arm64); update
the arm64 entry to use the same resource-tier format as amd64 (e.g., change
linux/arm64 to linux-mxlarge/arm64) so both architectures use the same resource
prefix, or alternatively make both entries resource-prefix-free — apply the
consistent choice used across other multiarch pipelines (prefer matching
linux-mxlarge/arm64).
| - name: build-platforms | ||
| value: | ||
| - linux/x86_64 | ||
| - linux-d160-m2xlarge/amd64 |
Update generator script to prevent reversion.
The platform identifier change must be incorporated into scripts/generate_pull_request_pipelineruns.py.
🤖 Prompt for AI Agents
.tekton/odh-base-image-rocm-6-3-py312-c9s-pull-request.yaml around lines 36-38:
the manual change to the platform identifier (linux-d160-m2xlarge/amd64) will be
overwritten by the PR pipeline generator; update
scripts/generate_pull_request_pipelineruns.py so it emits the new platform
string instead of the old one. Locate the mapping or default value that produces
"build-platforms" entries and replace the old identifier with
"linux-d160-m2xlarge/amd64" (or make the mapping configurable), ensure the
generator writes the updated value to the YAML, and add/adjust a unit test or a
comment to prevent future reversion.
| - name: build-platforms | ||
| value: | ||
| - linux/x86_64 | ||
| - linux-d160-m2xlarge/amd64 |
Platform change requires generator script update.
The change to linux-d160-m2xlarge/amd64 must be reflected in scripts/generate_pull_request_pipelineruns.py to avoid being reverted during the next regeneration.
🤖 Prompt for AI Agents
.tekton/odh-base-image-rocm-6-3-py312-ubi9-pull-request.yaml lines 36-38: the
platform value was changed to "linux-d160-m2xlarge/amd64" but the generator
script scripts/generate_pull_request_pipelineruns.py still produces the old
value and will overwrite this change on regen; open that script and add or
replace the platform entry (string) in the platforms list or mapping to include
"linux-d160-m2xlarge/amd64" so the generated YAML matches this file, save and
run the generator locally to verify the YAML contains the new platform string.
| - name: build-platforms | ||
| value: | ||
| - linux/x86_64 | ||
| - linux-d160-m2xlarge/amd64 |
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Search for the specific file and examine its context
echo "=== Examining the target file ==="
head -50 .tekton/odh-workbench-rstudio-minimal-cuda-py312-c9s-pull-request.yaml
echo ""
echo "=== Searching for build-platforms patterns in .tekton directory ==="
rg -n "build-platforms" .tekton/ -A 1Repository: opendatahub-io/notebooks
Length of output: 10837
🏁 Script executed:
#!/bin/bash
# Search for actual build-platforms values in CUDA builds and other similar files
echo "=== CUDA-based RStudio builds ==="
rg -A 2 "odh-workbench-rstudio.*cuda" .tekton/ -l | while read f; do
echo "File: $f"
rg -A 3 "build-platforms" "$f"
done
echo ""
echo "=== CPU-based RStudio builds for comparison ==="
rg -A 2 "odh-workbench-rstudio.*cpu" .tekton/ -l | head -2 | while read f; do
echo "File: $f"
rg -A 3 "build-platforms" "$f"
done
echo ""
echo "=== Push vs Pull Request pattern ==="
echo "CPU Push:"
grep -A 3 "build-platforms" .tekton/odh-base-image-cpu-py312-c9s-push.yaml
echo ""
echo "CPU Pull Request:"
grep -A 3 "build-platforms" .tekton/odh-base-image-cpu-py312-c9s-pull-request.yamlRepository: opendatahub-io/notebooks
Length of output: 1177
🏁 Script executed:
#!/bin/bash
# Check push variants for these files
echo "=== CUDA RStudio push variants ==="
grep -A 3 "build-platforms" .tekton/odh-workbench-rstudio-minimal-cuda-py312-c9s-push.yaml 2>/dev/null || echo "File not found"
echo ""
grep -A 3 "build-platforms" .tekton/odh-workbench-rstudio-minimal-cuda-py312-rhel9-push.yaml 2>/dev/null || echo "File not found"
echo ""
echo "=== Search for any documentation or comments about d160 sizing ==="
rg "d160" . --type yaml | head -20Repository: opendatahub-io/notebooks
Length of output: 201
🏁 Script executed:
#!/bin/bash
# Find all CUDA push files to understand the pattern
echo "=== All CUDA push files ==="
find .tekton -name "*cuda*push*" -type f | sort
echo ""
echo "=== Check a CUDA push file for build-platforms ==="
grep -A 3 "build-platforms" .tekton/odh-base-image-cuda-py312-ubi9-push.yaml 2>/dev/null | head -10Repository: opendatahub-io/notebooks
Length of output: 1087
🏁 Script executed:
#!/bin/bash
# Check the RStudio CUDA push files specifically
echo "=== RStudio CUDA c9s push ==="
grep -A 5 "build-platforms" .tekton/odh-workbench-rstudio-minimal-cuda-py312-c9s-push.yaml
echo ""
echo "=== RStudio CUDA rhel9 push ==="
grep -A 5 "build-platforms" .tekton/odh-workbench-rstudio-minimal-cuda-py312-rhel9-push.yaml
echo ""
echo "=== Summary comparison ==="
echo "Pull Request (c9s CUDA):"
grep -A 2 "value:" .tekton/odh-workbench-rstudio-minimal-cuda-py312-c9s-pull-request.yaml | head -3
echo ""
echo "Push (c9s CUDA):"
grep -A 2 "value:" .tekton/odh-workbench-rstudio-minimal-cuda-py312-c9s-push.yaml | head -6Repository: opendatahub-io/notebooks
Length of output: 480
🏁 Script executed:
#!/bin/bash
# More targeted search for build-platforms in RStudio CUDA push files
echo "=== RStudio CUDA c9s push (build-platforms section) ==="
sed -n '/build-platforms/,/dockerfile/p' .tekton/odh-workbench-rstudio-minimal-cuda-py312-c9s-push.yaml 2>/dev/null | head -10
echo ""
echo "=== RStudio CUDA rhel9 push (build-platforms section) ==="
sed -n '/build-platforms/,/dockerfile/p' .tekton/odh-workbench-rstudio-minimal-cuda-py312-rhel9-push.yaml 2>/dev/null | head -10
echo ""
echo "=== Check if files are empty or don't have build-platforms ==="
wc -l .tekton/odh-workbench-rstudio-minimal-cuda-py312-c9s-push.yaml .tekton/odh-workbench-rstudio-minimal-cuda-py312-rhel9-push.yaml
echo ""
echo "=== Full content of c9s push (first 60 lines) ==="
head -60 .tekton/odh-workbench-rstudio-minimal-cuda-py312-c9s-push.yaml

Repository: opendatahub-io/notebooks
Length of output: 2380
Verify why CUDA resource sizing differs between c9s and rhel9 variants.
The platform change from linux/x86_64 to linux-d160-m2xlarge/amd64 aligns with allocating larger resources for CUDA workloads. However, the c9s CUDA build uses linux-d160-m2xlarge, while the rhel9 CUDA build uses linux-d160-m4xlarge/amd64. Confirm whether the different sizing (m2xlarge vs m4xlarge) is intentional or if both variants should use the same resource tier.
🤖 Prompt for AI Agents
.tekton/odh-workbench-rstudio-minimal-cuda-py312-c9s-pull-request.yaml around
line 38: the CUDA agent/resource label was changed to linux-d160-m2xlarge/amd64
which differs from the rhel9 CUDA build that uses linux-d160-m4xlarge/amd64;
verify whether the m2xlarge vs m4xlarge difference is intentional by checking
repository CI docs, recent commits, and any cloud/cluster capacity or cost
requirements, then update this file to match the intended tier (either change to
linux-d160-m4xlarge/amd64 for parity with rhel9 or update the rhel9 variant to
m2xlarge) and add a short comment explaining why that specific machine size is
required to prevent future drift.
…d scanning First changing from x86_64 to amd64, then resolving the resource issues by stepping up to the larger size to avoid OOM. Also update compute resources for clair-scan and ecosystem-cert-preflight-checks
b7de368 to
7bda7b1
Compare
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
.tekton/odh-workbench-rstudio-minimal-cuda-py312-rhel9-pull-request.yaml (1)
2-2: Update the generation script before merging to prevent loss of manual changes.

This file is autogenerated by scripts/generate_pull_request_pipelineruns.py. Manual modifications will be overwritten when the script regenerates these files:

- taskRunSpecs with compute resources: the generator has no support for taskRunSpecs, so the manual additions for the clair-scan and ecosystem-cert-preflight-checks tasks (8 CPU, 32Gi memory) will be lost.
- Timeout value: the generator hardcodes "3h" (line 85) but the YAML has been manually set to "6h". This will be overwritten on regeneration.
- Type hints mismatch: the parameter type at line 51 restricts platforms to ["linux/x86_64", "linux/arm64", "linux/ppc64le", "linux/s390x"], but the code actually emits "linux-extra-fast/amd64" (line 141) and "linux-m2xlarge/arm64" (line 158).

Update the generator to support all these features before merging, then regenerate the YAML files to ensure consistency; a minimal sketch of the needed shape follows this list.
♻️ Duplicate comments (14)
.tekton/odh-base-image-rocm-6-3-py312-c9s-pull-request.yaml (1)
36-38: Verify generator script has been updated.

As previously flagged, since this file is autogenerated (line 2), the change to `linux-d160-m2xlarge/amd64` will be reverted unless scripts/generate_pull_request_pipelineruns.py has been updated.

Run the following script to check if the generator has been updated with the new platform identifier:

#!/bin/bash
# Description: Verify the generator script includes the new platform identifier

# Check if the generator script contains the new platform identifier
rg -n -C3 'linux-d160-m2xlarge/amd64|build-platforms' scripts/generate_pull_request_pipelineruns.py

.tekton/odh-base-image-cuda-py311-c9s-pull-request.yaml (1)
36-39: Platform change already flagged in previous review.

The generation script update requirement for platform values has been identified in a prior review comment.
.tekton/odh-workbench-jupyter-minimal-cpu-py312-ubi9-pull-request.yaml (1)
36-40: Verify platform identifier validity and resolve discrepancy.

This change conflicts with a previous critical review that flagged `linux-mxlarge/amd64` as invalid. Additionally, there's a discrepancy: the enriched summary indicates this file should use `linux-d160-m4xlarge/amd64`, but the code shows `linux-mxlarge/amd64`.
- Update the generation script (
scripts/generate_pull_request_pipelineruns.py) to emit the correct platform identifier- Regenerate this file
- Ensure the platform matches what's documented in the enriched summary
If
linux-mxlarge/amd64is not a valid Tekton platform identifier, revert tolinux/x86_64as the previous review suggested.#!/bin/bash # Verify the generation script and platform patterns across the repo echo "=== Check generation script for platform handling ===" rg -n "mxlarge|m2xlarge|m4xlarge|d160" scripts/generate_pull_request_pipelineruns.py -A 3 -B 3 echo -e "\n=== Find all platform identifiers used in .tekton files ===" rg "^\s+- linux[/-]" .tekton/ --type yaml | sort -u echo -e "\n=== Check if this file's platform matches others ===" rg "linux-mxlarge/amd64" .tekton/ --type yaml -l.tekton/odh-workbench-jupyter-pytorch-rocm-py312-ubi9-pull-request.yaml (1)
37-37: Remove trailing whitespace aftervalue:.Line 37 has trailing spaces after the colon that should be removed to avoid linting issues.
🔎 Apply this diff to fix the whitespace:
- - name: build-platforms - value: + - name: build-platforms + value: - linux-d160-m2xlarge/amd64.tekton/odh-base-image-cuda-12-8-py312-ubi9-pull-request.yaml (2)
1-2: Update the generator script to prevent changes from being overwritten.

Line 2 indicates this file is autogenerated by scripts/generate_pull_request_pipelineruns.py. Manual edits to the platform identifier and timeout will be lost when the generator script is re-run. As flagged in previous reviews, the generator script needs to be updated to emit these new values.
38-39: Reconcile platform format with PR objectives.

The platform identifier `linux-mxlarge/amd64` includes a host-specific prefix, whereas the PR description states the goal is to update to `linux/amd64`. Verify this host-type prefix is intentional and required by Konflux to access the larger resource pool.

.tekton/odh-pipeline-runtime-pytorch-rocm-py312-ubi9-pull-request.yaml (1)
1-2: Update the generator script to make platform changes durable.

This file is autogenerated but has been manually edited. Without updating scripts/generate_pull_request_pipelineruns.py, these changes will be lost on regeneration.

.tekton/odh-pipeline-runtime-pytorch-cuda-py312-ubi9-pull-request.yaml (2)
1-2: Update the generator script to prevent manual changes from being lost.

This autogenerated file has been manually modified. Update scripts/generate_pull_request_pipelineruns.py to emit the new platform identifiers, timeout values, and taskRunSpecs configuration.
38-38: Document the rationale for using d160-m4xlarge vs other host types.

This component uses `linux-d160-m4xlarge/amd64` (note the `m4xlarge` suffix), which differs from the `d160-m2xlarge` and `mxlarge` variants used in other files. The naming suggests different resource tiers, but without documentation, the selection criteria are unclear.

.tekton/odh-base-image-cuda-12-8-py312-c9s-pull-request.yaml (1)
1-2: Update the generator script for durable changes.

As noted in previous reviews, this autogenerated file requires updates to scripts/generate_pull_request_pipelineruns.py to avoid manual changes being overwritten during regeneration.

.tekton/odh-pipeline-runtime-tensorflow-rocm-py312-ubi9-pull-request.yaml (1)
1-2: Update the generator script to preserve manual edits.

This autogenerated file has been manually modified. Ensure scripts/generate_pull_request_pipelineruns.py is updated to generate these values automatically.

.tekton/odh-workbench-jupyter-minimal-rocm-py312-ubi9-pull-request.yaml (2)
1-2: Update the generator script to avoid losing changes.

This autogenerated file requires updates to scripts/generate_pull_request_pipelineruns.py to prevent regeneration from overwriting the manual edits.
38-38: Consider documenting platform tier selection logic.

This workbench component uses `linux-d160-m4xlarge/amd64`, similar to .tekton/odh-pipeline-runtime-pytorch-cuda-py312-ubi9-pull-request.yaml. The pattern suggests workbench/larger components may require the m4xlarge tier. Document this selection logic in the generator script or a README to aid future maintenance.

.tekton/odh-base-image-rocm-py312-ubi9-pull-request.yaml (1)
1-2: Update the generator script to make changes permanent.

This autogenerated file has been manually edited. Update scripts/generate_pull_request_pipelineruns.py to generate the new platform identifiers, timeout, and taskRunSpecs automatically.
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (44)
- .tekton/multiarch-pull-request-pipeline.yaml (1 hunks)
- .tekton/multiarch-push-pipeline.yaml (1 hunks)
- .tekton/odh-base-image-cpu-py312-c9s-pull-request.yaml (2 hunks)
- .tekton/odh-base-image-cpu-py312-c9s-push.yaml (1 hunks)
- .tekton/odh-base-image-cpu-py312-ubi9-pull-request.yaml (2 hunks)
- .tekton/odh-base-image-cpu-py312-ubi9-push.yaml (1 hunks)
- .tekton/odh-base-image-cuda-12-8-py312-c9s-pull-request.yaml (2 hunks)
- .tekton/odh-base-image-cuda-12-8-py312-c9s-push.yaml (1 hunks)
- .tekton/odh-base-image-cuda-12-8-py312-ubi9-pull-request.yaml (2 hunks)
- .tekton/odh-base-image-cuda-12-8-py312-ubi9-push.yaml (1 hunks)
- .tekton/odh-base-image-cuda-py311-c9s-pull-request.yaml (2 hunks)
- .tekton/odh-base-image-cuda-py311-c9s-push.yaml (1 hunks)
- .tekton/odh-base-image-cuda-py312-c9s-pull-request.yaml (2 hunks)
- .tekton/odh-base-image-cuda-py312-c9s-push.yaml (1 hunks)
- .tekton/odh-base-image-cuda-py312-ubi9-pull-request.yaml (2 hunks)
- .tekton/odh-base-image-cuda-py312-ubi9-push.yaml (1 hunks)
- .tekton/odh-base-image-rocm-6-3-py312-c9s-pull-request.yaml (2 hunks)
- .tekton/odh-base-image-rocm-6-3-py312-ubi9-pull-request.yaml (2 hunks)
- .tekton/odh-base-image-rocm-6-4-py312-c9s-pull-request.yaml (2 hunks)
- .tekton/odh-base-image-rocm-6-4-py312-ubi9-pull-request.yaml (2 hunks)
- .tekton/odh-base-image-rocm-py312-c9s-pull-request.yaml (2 hunks)
- .tekton/odh-base-image-rocm-py312-ubi9-pull-request.yaml (2 hunks)
- .tekton/odh-pipeline-runtime-datascience-cpu-py312-ubi9-pull-request.yaml (2 hunks)
- .tekton/odh-pipeline-runtime-minimal-cpu-py312-ubi9-pull-request.yaml (2 hunks)
- .tekton/odh-pipeline-runtime-pytorch-cuda-py312-ubi9-pull-request.yaml (2 hunks)
- .tekton/odh-pipeline-runtime-pytorch-llmcompressor-cuda-py312-ubi9-pull-request.yaml (2 hunks)
- .tekton/odh-pipeline-runtime-pytorch-rocm-py312-ubi9-pull-request.yaml (2 hunks)
- .tekton/odh-pipeline-runtime-tensorflow-cuda-py312-ubi9-pull-request.yaml (2 hunks)
- .tekton/odh-pipeline-runtime-tensorflow-rocm-py312-ubi9-pull-request.yaml (2 hunks)
- .tekton/odh-workbench-codeserver-datascience-cpu-py312-ubi9-pull-request.yaml (1 hunks)
- .tekton/odh-workbench-jupyter-datascience-cpu-py312-ubi9-pull-request.yaml (2 hunks)
- .tekton/odh-workbench-jupyter-minimal-cpu-py312-ubi9-pull-request.yaml (2 hunks)
- .tekton/odh-workbench-jupyter-minimal-cuda-py312-ubi9-pull-request.yaml (2 hunks)
- .tekton/odh-workbench-jupyter-minimal-rocm-py312-ubi9-pull-request.yaml (2 hunks)
- .tekton/odh-workbench-jupyter-pytorch-cuda-py312-ubi9-pull-request.yaml (2 hunks)
- .tekton/odh-workbench-jupyter-pytorch-llmcompressor-cuda-py312-ubi9-pull-request.yaml (2 hunks)
- .tekton/odh-workbench-jupyter-pytorch-rocm-py312-ubi9-pull-request.yaml (2 hunks)
- .tekton/odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-pull-request.yaml (2 hunks)
- .tekton/odh-workbench-jupyter-tensorflow-rocm-py312-ubi9-pull-request.yaml (2 hunks)
- .tekton/odh-workbench-jupyter-trustyai-cpu-py312-ubi9-pull-request.yaml (2 hunks)
- .tekton/odh-workbench-rstudio-minimal-cpu-py312-c9s-pull-request.yaml (2 hunks)
- .tekton/odh-workbench-rstudio-minimal-cpu-py312-rhel9-pull-request.yaml (2 hunks)
- .tekton/odh-workbench-rstudio-minimal-cuda-py312-c9s-pull-request.yaml (2 hunks)
- .tekton/odh-workbench-rstudio-minimal-cuda-py312-rhel9-pull-request.yaml (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (25)
- .tekton/odh-base-image-cpu-py312-c9s-push.yaml
- .tekton/odh-base-image-cuda-12-8-py312-ubi9-push.yaml
- .tekton/odh-workbench-jupyter-datascience-cpu-py312-ubi9-pull-request.yaml
- .tekton/odh-base-image-rocm-6-4-py312-c9s-pull-request.yaml
- .tekton/odh-base-image-cuda-py312-ubi9-push.yaml
- .tekton/odh-pipeline-runtime-minimal-cpu-py312-ubi9-pull-request.yaml
- .tekton/odh-workbench-jupyter-trustyai-cpu-py312-ubi9-pull-request.yaml
- .tekton/odh-base-image-cuda-py312-c9s-push.yaml
- .tekton/odh-base-image-rocm-6-3-py312-ubi9-pull-request.yaml
- .tekton/odh-pipeline-runtime-pytorch-llmcompressor-cuda-py312-ubi9-pull-request.yaml
- .tekton/odh-base-image-rocm-6-4-py312-ubi9-pull-request.yaml
- .tekton/odh-workbench-rstudio-minimal-cuda-py312-c9s-pull-request.yaml
- .tekton/odh-base-image-cuda-py312-ubi9-pull-request.yaml
- .tekton/odh-base-image-cuda-py312-c9s-pull-request.yaml
- .tekton/odh-base-image-cpu-py312-ubi9-pull-request.yaml
- .tekton/multiarch-pull-request-pipeline.yaml
- .tekton/odh-workbench-rstudio-minimal-cpu-py312-c9s-pull-request.yaml
- .tekton/multiarch-push-pipeline.yaml
- .tekton/odh-pipeline-runtime-tensorflow-cuda-py312-ubi9-pull-request.yaml
- .tekton/odh-pipeline-runtime-datascience-cpu-py312-ubi9-pull-request.yaml
- .tekton/odh-base-image-cpu-py312-c9s-pull-request.yaml
- .tekton/odh-workbench-codeserver-datascience-cpu-py312-ubi9-pull-request.yaml
- .tekton/odh-base-image-rocm-py312-c9s-pull-request.yaml
- .tekton/odh-base-image-cuda-12-8-py312-c9s-push.yaml
- .tekton/odh-workbench-jupyter-pytorch-cuda-py312-ubi9-pull-request.yaml
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (35)
- GitHub Check: Red Hat Konflux / odh-base-image-rocm-6-3-py312-c9s-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-minimal-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-pytorch-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-cuda-12-8-py312-c9s-on-pull-request
- GitHub Check: Red Hat Konflux / odh-pipeline-runtime-pytorch-llmcompressor-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-rocm-6-4-py312-c9s-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-rocm-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-codeserver-datascience-cpu-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-pytorch-llmcompressor-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-pytorch-rocm-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-datascience-cpu-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-pipeline-runtime-pytorch-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-rocm-py312-c9s-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-rstudio-minimal-cpu-py312-rhel9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-cpu-py312-c9s-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-trustyai-cpu-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-rstudio-minimal-cuda-py312-rhel9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-minimal-cpu-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-pipeline-runtime-minimal-cpu-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-pipeline-runtime-pytorch-rocm-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-tensorflow-rocm-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-rstudio-minimal-cpu-py312-c9s-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-rocm-6-3-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-cuda-py311-c9s-on-pull-request
- GitHub Check: Red Hat Konflux / odh-pipeline-runtime-tensorflow-rocm-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-rstudio-minimal-cuda-py312-c9s-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-rocm-6-4-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-pipeline-runtime-tensorflow-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-cpu-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-cuda-12-8-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-pipeline-runtime-datascience-cpu-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-minimal-rocm-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-base-image-cuda-py312-c9s-on-pull-request
🔇 Additional comments (29)
.tekton/odh-workbench-jupyter-pytorch-llmcompressor-cuda-py312-ubi9-pull-request.yaml (1)
25-26: LGTM: Timeout increased to accommodate larger builds.

The pipeline timeout increase from 3h to 6h aligns with the PR objective to leverage larger available resources and allows for longer-running builds on the new platform configuration.
.tekton/odh-base-image-rocm-6-3-py312-c9s-pull-request.yaml (2)
26-26: LGTM: Timeout extension appropriate for platform change.

The 6h timeout provides adequate buffer for builds on the new platform infrastructure.
43-60: taskRunSpecs configuration is consistent with established patterns across all pipeline files.

The 8 CPU / 32Gi allocation for `clair-scan` and `ecosystem-cert-preflight-checks` is standardized across all .tekton/ pipeline variants (rocm, cuda, rhel9, c9s, ubi9). These files are autogenerated, indicating the resource allocation was a deliberate architectural decision. Setting requests=limits ensures predictable resource allocation.

.tekton/odh-workbench-rstudio-minimal-cpu-py312-rhel9-pull-request.yaml (3)
26-55: Inconsistency with AI summary: taskRunSpecs missing.

The AI-generated summary states that `taskRunSpecs` were added with compute resource specifications for the `clair-scan` and `ecosystem-cert-preflight-checks` tasks. However, the actual code shows no `taskRunSpecs` section in the spec. The spec contains only `timeouts`, `params`, `pipelineRef`, `taskRunTemplate`, `workspaces`, and `status` fields.

Please verify whether the `taskRunSpecs` were intended to be added to this file or if the summary is incorrect.
27-28: Timeout increase aligns with infrastructure change.

The pipeline timeout has been increased to 6h (from 3h per the AI summary), which aligns with the PR objective of leveraging larger available resources. This provides adequate buffer for builds running on the new platform configuration.
38-40: Verify host size specification syntax in build-platforms parameter.

The platform identifier uses `linux-mxlarge/amd64`, which combines a host size specification (mxlarge) with the architecture. While the standard Docker/OCI platform format is `os/arch` (e.g., `linux/amd64`), Konflux's multi-platform-controller uses a PLATFORM parameter to provision build hosts by size. Confirm that:
- The syntax `linux-mxlarge/amd64` is valid and correctly recognized by your Konflux deployment
- The `mxlarge` host size provides the intended larger resources mentioned in the PR objectives
- This change has been tested successfully in your build environment
.tekton/odh-base-image-cuda-py311-c9s-pull-request.yaml (1)
25-26: Timeout increase to 6h looks appropriate for CUDA builds.

CUDA image builds are resource-intensive and may require extended time. This change aligns with the larger compute resources being allocated.
.tekton/odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-pull-request.yaml (2)
38-38: Verify platform identifier is configured in your Konflux multi-platform-controller.

The platform identifier `linux-d160-m2xlarge/amd64` is organization-specific and must be configured in your Konflux instance's multi-platform-controller to be valid. Confirm this configuration exists and check consistency across all .tekton files in the PR to ensure the same platform identifiers are used.
26-26: The 6-hour pipeline timeout is consistent across all pull-request build configurations in the repository (CPU, GPU/CUDA, ROCm variants), with push builds using 8 hours. This file is autogenerated, and the timeout follows the established pattern across similar workbench and pipeline-runtime images. No action required.

.tekton/odh-workbench-jupyter-minimal-cpu-py312-ubi9-pull-request.yaml (2)
26-26: No action required. The 6h pipeline timeout is consistent with all other pull-request pipeline files in the repository (.tekton/*-pull-request.yaml files), where 6h is the standard timeout. Push files use 8h. This follows an established project-wide pattern and does not indicate an unjustified increase.
46-63: Verify task names and resource specifications.

The `taskRunSpecs` section allocates substantial resources (8 CPU, 32Gi memory) to two tasks. Please confirm the following; a quick local check for the first point is sketched after this list:
- Task names `clair-scan` and `ecosystem-cert-preflight-checks` are correct and match the `multiarch-pull-request-pipeline`
- These resource levels (8 CPU and 32Gi per task) are appropriate for your workload, particularly important given these are intensive security/certification scanning tasks
- Your cluster has sufficient capacity for concurrent execution of these resource-heavy tasks during builds
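A hedged sketch of a local task-name check, assuming PyYAML is available and that the pipeline declares its tasks under spec.tasks / spec.finally; the file paths in the trailing comment are examples:

# Illustrative check only; assumes PyYAML is installed and that the pipeline
# declares its tasks under spec.tasks / spec.finally. File paths are examples.
import sys

import yaml


def pipeline_task_names(pipeline_path):
    """Collect task names from a Tekton Pipeline definition."""
    with open(pipeline_path) as f:
        doc = yaml.safe_load(f)
    tasks = doc["spec"].get("tasks", []) + doc["spec"].get("finally", [])
    return {t["name"] for t in tasks}


def check(pipelinerun_path, pipeline_path):
    """Report taskRunSpecs entries that reference unknown pipeline tasks."""
    with open(pipelinerun_path) as f:
        run = yaml.safe_load(f)
    known = pipeline_task_names(pipeline_path)
    missing = [
        spec["pipelineTaskName"]
        for spec in run["spec"].get("taskRunSpecs", [])
        if spec["pipelineTaskName"] not in known
    ]
    for name in missing:
        print(f"unknown pipelineTaskName: {name}")
    return 1 if missing else 0


if __name__ == "__main__":
    # e.g. python check_taskrunspecs.py \
    #   .tekton/odh-workbench-jupyter-minimal-cpu-py312-ubi9-pull-request.yaml \
    #   .tekton/multiarch-pull-request-pipeline.yaml
    sys.exit(check(sys.argv[1], sys.argv[2]))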
.tekton/odh-workbench-jupyter-minimal-cuda-py312-ubi9-pull-request.yaml (1)
45-62: Compute resource allocations align with existing pipeline infrastructure.

The added `taskRunSpecs` allocate 8 CPUs and 32Gi memory to the `clair-scan` and `ecosystem-cert-preflight-checks` tasks with requests matching limits (QoS: Guaranteed). These allocations are consistent with the existing pattern used across all pull-request pipeline files in the codebase, indicating they represent established infrastructure decisions rather than new experimental allocations.
.tekton/odh-workbench-jupyter-pytorch-rocm-py312-ubi9-pull-request.yaml (4)
26-26: Timeout increase is appropriate.

The 6-hour timeout is reasonable given the platform change to larger resource hosts and potentially longer build times.
2-2: Verify that the generator script has been updated to produce these changes or manually regenerate after updates.

This file is autogenerated by scripts/generate_pull_request_pipelineruns.py. The current changes (timeout increased to 6h, platform set to linux-d160-m2xlarge/amd64, and taskRunSpecs added) are not reflected in the generator script and will be lost when the script runs again. Either:
- Update the generator script to include these configurations for this component
- Regenerate this file after updating the generator script
- Document why manual changes are necessary and ensure they're reapplied after generation
45-62: Task names and resource allocation verified, but clarify resource inconsistencies.

The `taskRunSpecs` section correctly references both the `clair-scan` and `ecosystem-cert-preflight-checks` tasks from the `multiarch-pull-request-pipeline`. However, the resource allocation differs significantly from similar tasks in push pipelines:
- Pull-request allocation: 8 CPU requests/limits, 32Gi memory requests/limits
- Push pipeline allocation: 8 CPU requests, 16 CPU limits; 16Gi memory requests, 32Gi limits
Clarify why this pull-request pipeline has double the memory requests but half the CPU limits compared to the push pipelines. Setting requests equal to limits ensures guaranteed QoS, which is appropriate for critical scanning tasks, but the resource discrepancy across pipeline types should be documented or reconciled if unintentional.
38-38: The host-specific platform identifiers are intentionally assigned per-component and are standard for this project.

The format `linux-d160-m2xlarge/amd64` is generated by scripts/generate_pull_request_pipelineruns.py and used consistently across 40+ build configurations. The `multiarch-pull-request-pipeline` accepts these values as the `build-platforms` parameter. Verify that the assignment for this component is correct in the generator script and that the multi-platform-controller in your Konflux instance has these hosts configured.

.tekton/odh-base-image-cuda-12-8-py312-ubi9-pull-request.yaml (1)
26-26: Verify extended timeout is sufficient for larger builds.

The pipeline timeout has been extended from 3h to 6h, presumably to accommodate the larger resource pool. Ensure this timeout aligns with actual build duration requirements for this component.
.tekton/odh-pipeline-runtime-pytorch-cuda-py312-ubi9-pull-request.yaml (1)
45-62: LGTM: Resource allocation for intensive tasks.

The addition of explicit compute resources (8 CPU, 32Gi memory) for the `clair-scan` and `ecosystem-cert-preflight-checks` tasks is appropriate for ensuring these resource-intensive operations complete successfully on the larger host.

.tekton/odh-base-image-cuda-12-8-py312-c9s-pull-request.yaml (2)
44-61: LGTM: Task-specific resource allocation.

The taskRunSpecs appropriately allocate 8 CPU and 32Gi memory for the intensive clair-scan and ecosystem-cert-preflight-checks tasks, matching the pattern in other files.
38-39: Platform identifiers are correctly configured for multiarch builds.

Both amd64 and arm64 use the same resource tier prefix (`linux-d160-m2xlarge/`), which matches the pattern across the codebase where architecture pairs consistently share resource tier designations. This configuration enables the multiarch pipeline to handle both platforms with appropriate resource allocation.

.tekton/odh-pipeline-runtime-tensorflow-rocm-py312-ubi9-pull-request.yaml (1)
45-62: LGTM: Appropriate resource allocation.

The taskRunSpecs configuration correctly allocates increased compute resources for the clair-scan and ecosystem-cert-preflight-checks tasks.
.tekton/odh-workbench-jupyter-minimal-rocm-py312-ubi9-pull-request.yaml (1)
45-62: LGTM: Resource allocation for compute-intensive tasks.

The taskRunSpecs appropriately increase resources for clair-scan and ecosystem-cert-preflight-checks.
.tekton/odh-base-image-rocm-py312-ubi9-pull-request.yaml (1)
43-60: LGTM: Task-level resource specifications.

The taskRunSpecs configuration appropriately allocates 8 CPU and 32Gi memory for the resource-intensive clair-scan and ecosystem-cert-preflight-checks tasks.
.tekton/odh-base-image-cuda-py311-c9s-push.yaml (1)
31-32: The platform identifier `linux-mxlarge/amd64` is valid and intentional. This format is used consistently throughout the repository for multiarch builds and represents a Konflux runner pool identifier. The PR successfully updates from `linux/x86_64` to `linux-mxlarge/amd64` as stated in the objective. The difference between the pull-request variant (which uses `linux-d160-m4xlarge` with increased compute resources) and the push variant (which uses `linux-mxlarge/amd64`) is intentional, as confirmed by the commit message indicating the resource upgrade was to address OOM issues during scanning operations. No action required.

.tekton/odh-workbench-jupyter-tensorflow-rocm-py312-ubi9-pull-request.yaml (3)
26-26: Timeout increase is appropriate for resource-intensive builds.

Extending the pipeline timeout from 3 hours to 6 hours aligns with the PR objective of leveraging larger compute resources, which may extend build duration.
45-62: Excellent use of taskRunSpecs for resource allocation.

The `taskRunSpecs` section correctly specifies compute resources (8 CPU, 32Gi memory) for specific tasks that require additional resources. This is the proper Tekton mechanism for allocating task-specific compute resources and aligns well with the PR's goal of leveraging larger resources for resource-intensive operations like vulnerability scanning and certification checks.
38-38: Verify that `linux-d160-m2xlarge/amd64` is configured in your Konflux multi-platform-controller.

Custom platform identifiers (non-standard formats like `linux-d160-m2xlarge/amd64`) require explicit configuration in the multi-platform-controller's ConfigMap. While this pattern is used throughout the repository, confirm with your platform engineering team that this platform identifier exists before merging; otherwise the build will fail.

If not configured, either request the addition to the multi-platform-controller or use a standard identifier like `linux/amd64` with `taskRunSpecs` for resource allocation.

.tekton/odh-workbench-rstudio-minimal-cuda-py312-rhel9-pull-request.yaml (2)
28-28: Pipeline timeout extended to 6 hours.

The timeout increase from 3h to 6h aligns with the PR objective of leveraging larger compute resources. Ensure this duration is appropriate for the expected build time on the new platform.
47-64: Compute resource allocation for scanning tasks looks appropriate.

The `taskRunSpecs` section properly allocates dedicated resources (8 CPU, 32Gi memory) for the `clair-scan` and `ecosystem-cert-preflight-checks` tasks. Setting requests equal to limits ensures guaranteed resources, which is appropriate for resource-intensive scanning operations.

However, since this file is autogenerated (line 2), verify that scripts/generate_pull_request_pipelineruns.py has been updated to emit these `taskRunSpecs` for the appropriate pipelines. Manual edits to this file will be overwritten on the next generation.
| - name: build-platforms | ||
| value: | ||
| - linux/x86_64 | ||
| - linux-mxlarge/amd64 | ||
| - linux/arm64 |
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Find and examine the file and search for build-platforms usage pattern
# First, locate the file
find . -name ".tekton" -o -name "odh-base-image-cpu-py312-ubi9-push.yaml" 2>/dev/null | head -20
# Then search for the parameter name in .tekton directory
echo "=== Searching for build-platforms parameter ==="
rg -n "build-platforms" . --type yaml -A 3 -B 1 2>/dev/null | head -50
# Search for similar platform patterns
echo "=== Searching for platform patterns ==="
rg -n "linux-[a-z]+/(amd64|arm64)" . --type yaml 2>/dev/null | head -30Repository: opendatahub-io/notebooks
Length of output: 215
🏁 Script executed:
#!/bin/bash
# Read the file to see what's actually there
cat -n ./.tekton/odh-base-image-cpu-py312-ubi9-push.yaml | head -50

Repository: opendatahub-io/notebooks
Length of output: 1924
Clarify platform identifier format consistency across build targets.
The build-platforms parameter uses inconsistent formats: linux-mxlarge/amd64 for amd64 (line 33) but linux/arm64 for arm64 (line 34). The linux-mxlarge/ prefix appears to specify a resource class for the build environment.
Confirm this format is recognized by your Konflux/Tekton build pipeline. If the larger resource allocation is needed only for amd64 builds, that's valid; otherwise, ensure consistency across all platforms or document why the formats differ.
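To make that consistency check mechanical, something like the sketch below could flag mixed formats; the parsing rule (resource class = the part of the OS segment after `linux-`) is an inference about the naming scheme, not a documented Konflux contract:

# Sketch: flag build-platforms lists that mix resource-class prefixes.
# Assumes identifiers look like "linux/<arch>" or "linux-<class>/<arch>";
# that naming rule is an inference, not a documented Konflux contract.

def resource_class(platform):
    """Return the resource-class prefix, or None for plain 'linux/<arch>'."""
    os_part, _, _arch = platform.partition("/")
    return None if os_part == "linux" else os_part.removeprefix("linux-")


def mixed_prefixes(platforms):
    """True when the list mixes prefixed and unprefixed (or different) classes."""
    return len({resource_class(p) for p in platforms}) > 1


print(mixed_prefixes(["linux-mxlarge/amd64", "linux/arm64"]))          # True
print(mixed_prefixes(["linux-mxlarge/amd64", "linux-mxlarge/arm64"]))  # False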
🤖 Prompt for AI Agents
In .tekton/odh-base-image-cpu-py312-ubi9-push.yaml around lines 31 to 34, the
build-platforms list mixes platform identifier formats (linux-mxlarge/amd64 vs
linux/arm64) which may be a resource-class prefix; either normalize both entries
to the same format (e.g., linux/amd64 and linux/arm64) or apply the
resource-class prefix consistently (e.g., linux-mxlarge/amd64 and
linux-mxlarge/arm64), and if the mxlarge prefix is intentionally only for amd64,
add an inline comment explaining why and/or validate that your Konflux/Tekton
pipeline accepts the prefixed format so the build selection and resource
allocation behave as expected.
| # Added taskRunSpecs to increase compute resources | ||
| taskRunSpecs: | ||
| - pipelineTaskName: clair-scan | ||
| computeResources: | ||
| requests: | ||
| cpu: '8' | ||
| memory: 32Gi | ||
| limits: | ||
| cpu: '8' | ||
| memory: 32Gi | ||
| - pipelineTaskName: ecosystem-cert-preflight-checks | ||
| computeResources: | ||
| requests: | ||
| cpu: '8' | ||
| memory: 32Gi | ||
| limits: | ||
| cpu: '8' | ||
| memory: 32Gi |
🧩 Analysis chain
🏁 Script executed:
# Find the generation script and examine its content
find . -name "generate_pull_request_pipelineruns.py" -type f 2>/dev/null | head -5Repository: opendatahub-io/notebooks
Length of output: 115
🏁 Script executed:
# Also check if .tekton directory exists and what files are there
ls -la .tekton/ 2>/dev/null | grep -E "\.yaml|\.yml" || echo "No .tekton directory found"

Repository: opendatahub-io/notebooks
Length of output: 7812
🏁 Script executed:
# Search for references to taskRunSpecs in the entire repo
grep -r "taskRunSpecs" . --include="*.py" --include="*.yaml" --include="*.yml" 2>/dev/null | head -20Repository: opendatahub-io/notebooks
Length of output: 1908
🏁 Script executed:
# Look for the specific file mentioned in the review
if [ -f ".tekton/odh-base-image-cuda-py311-c9s-pull-request.yaml" ]; then
echo "=== File exists, showing lines 44-61 ==="
sed -n '44,61p' .tekton/odh-base-image-cuda-py311-c9s-pull-request.yaml
else
echo "File not found, searching for similar files..."
find . -name "*pull-request*.yaml" -type f 2>/dev/null | head -10
fi

Repository: opendatahub-io/notebooks
Length of output: 552
🏁 Script executed:
# Now let's examine the generation script
cat scripts/generate_pull_request_pipelineruns.py | head -100

Repository: opendatahub-io/notebooks
Length of output: 3639
🏁 Script executed:
# Check if the script mentions taskRunSpecs or computeResources
grep -n "taskRunSpecs\|computeResources\|clair-scan\|ecosystem-cert" scripts/generate_pull_request_pipelineruns.pyRepository: opendatahub-io/notebooks
Length of output: 50
🏁 Script executed:
# Get the full script to understand its structure
wc -l scripts/generate_pull_request_pipelineruns.py

Repository: opendatahub-io/notebooks
Length of output: 117
🏁 Script executed:
# Let's view the rest of the script to see the complete structure
tail -150 scripts/generate_pull_request_pipelineruns.py

Repository: opendatahub-io/notebooks
Length of output: 5666
🏁 Script executed:
# Let's also check the push pipeline file to see if it has taskRunSpecs
head -100 .tekton/odh-base-image-cuda-py311-c9s-push.yaml

Repository: opendatahub-io/notebooks
Length of output: 1723
🏁 Script executed:
# Let's see the complete pull_request_pipelinerun_template function
sed -n '48,150p' scripts/generate_pull_request_pipelineruns.py

Repository: opendatahub-io/notebooks
Length of output: 4171
🏁 Script executed:
# Let's verify by checking if any pull-request files have taskRunSpecs that aren't in push files
echo "=== Pull-request files with taskRunSpecs (no push equivalent) ==="
for pr_file in .tekton/*pull-request*.yaml; do
if grep -q "taskRunSpecs" "$pr_file"; then
push_file="${pr_file%-pull-request.yaml}-push.yaml"
if ! grep -q "taskRunSpecs" "$push_file" 2>/dev/null; then
echo "$(basename $pr_file) has taskRunSpecs but $(basename $push_file) doesn't"
fi
fi
done | head -20

Repository: opendatahub-io/notebooks
Length of output: 1866
🏁 Script executed:
# Count how many files would be affected by this
echo "=== Summary ==="
pr_with_taskrunspecs=$(grep -l "taskRunSpecs" .tekton/*pull-request*.yaml 2>/dev/null | wc -l)
push_with_taskrunspecs=$(grep -l "taskRunSpecs" .tekton/*push*.yaml 2>/dev/null | wc -l)
echo "Pull-request files with taskRunSpecs: $pr_with_taskrunspecs"
echo "Push files with taskRunSpecs: $push_with_taskrunspecs"Repository: opendatahub-io/notebooks
Length of output: 156
Update the generation script to support taskRunSpecs for pull-request pipelines.
The taskRunSpecs block added to pull-request files will be lost when scripts/generate_pull_request_pipelineruns.py regenerates them, since the pull_request_pipelinerun_template() function does not include this field. The generation script must be updated to emit taskRunSpecs with the compute resource specifications for clair-scan and ecosystem-cert-preflight-checks tasks. Additionally, the comment on line 44 will be stripped during regeneration since the script controls the output format.
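A hedged sketch of what the template change could look like, assuming pull_request_pipelinerun_template() assembles the PipelineRun spec as a Python dict before dumping it to YAML; the helper names here are illustrative, not the script's actual structure:

# Hedged sketch: assumes the template builds the spec as a dict that is
# later dumped to YAML; helper names are illustrative.

SCAN_TASKS = ("clair-scan", "ecosystem-cert-preflight-checks")


def scan_task_run_specs():
    """taskRunSpecs entries pinning 8 CPU / 32Gi for the scan-heavy tasks."""
    return [
        {
            "pipelineTaskName": name,
            "computeResources": {
                "requests": {"cpu": "8", "memory": "32Gi"},
                "limits": {"cpu": "8", "memory": "32Gi"},
            },
        }
        for name in SCAN_TASKS
    ]


def with_task_run_specs(spec):
    """Attach the taskRunSpecs block so regeneration preserves it."""
    spec["taskRunSpecs"] = scan_task_run_specs()
    return spec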
🤖 Prompt for AI Agents
In .tekton/odh-base-image-cuda-py311-c9s-pull-request.yaml around lines 44-61,
the added taskRunSpecs block for clair-scan and ecosystem-cert-preflight-checks
will be lost because scripts/generate_pull_request_pipelineruns.py's
pull_request_pipelinerun_template() doesn't emit taskRunSpecs (and the in-file
comment will be dropped since the template controls output). Update
scripts/generate_pull_request_pipelineruns.py: modify
pull_request_pipelinerun_template() to include a taskRunSpecs array that defines
entries for pipelineTaskName: clair-scan and pipelineTaskName:
ecosystem-cert-preflight-checks with computeResources requests/limits cpu '8'
and memory 32Gi; also add the comment text into the generated output by
inserting a YAML comment/header in the template so it appears in regenerated
files.
| - name: build-platforms | ||
| value: | ||
| - linux/x86_64 | ||
| - linux-d160-m2xlarge/amd64 |
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Description: Analyze platform identifier patterns across all Tekton files in this PR
echo "=== Platform identifiers used across PR ==="
rg -n "linux-(mxlarge|d160-m2xlarge|d160-m4xlarge|m2xlarge)/amd64" .tekton/ --no-heading 2>/dev/null | \
awk -F: '{print $1 ": " $NF}' | \
sort -u
echo -e "\n=== Count by platform type ==="
rg "linux-(mxlarge|d160-m2xlarge|d160-m4xlarge|m2xlarge)/amd64" .tekton/ --no-filename -o 2>/dev/null | sort | uniq -cRepository: opendatahub-io/notebooks
Length of output: 4198
🏁 Script executed:
#!/bin/bash
# Check specific files to understand the pattern
echo "=== CPU-only file (mxlarge) ==="
grep -A 5 -B 5 "linux-mxlarge" .tekton/odh-base-image-cpu-py312-ubi9-pull-request.yaml | head -20
echo -e "\n=== CUDA file with mxlarge ==="
grep -A 5 -B 5 "linux-mxlarge" .tekton/odh-base-image-cuda-12-8-py312-ubi9-pull-request.yaml | head -20
echo -e "\n=== CUDA file with d160-m2xlarge ==="
grep -A 5 -B 5 "linux-d160-m2xlarge" .tekton/odh-base-image-cuda-12-8-py312-c9s-pull-request.yaml | head -20
echo -e "\n=== ROCm file with d160-m2xlarge ==="
grep -A 5 -B 5 "linux-d160-m2xlarge" .tekton/odh-base-image-rocm-6-3-py312-c9s-pull-request.yaml | head -20
echo -e "\n=== Check for any documentation comments ==="
rg -i "platform|resource|node.*selector|mxlarge|d160" .tekton/ -B 2 -A 2 | grep -E "^#|platform|resource|mxlarge|d160" | head -30Repository: opendatahub-io/notebooks
Length of output: 4895
Document platform identifier and resource pool selection criteria.
This repository uses three platform identifiers across Tekton build files: linux-mxlarge/amd64, linux-d160-m2xlarge/amd64, and linux-d160-m4xlarge/amd64. While a pattern exists (CPU/minimal workloads use mxlarge; GPU/ROCm-heavy builds use d160 variants), the selection is inconsistent and undocumented. For example, odh-base-image-cuda-12-8-py312-ubi9-pull-request.yaml uses mxlarge while the c9s variant of the same CUDA version uses d160-m2xlarge. Add comments or documentation explaining which host types are required for each component category to improve maintainability and prevent future misalignments.
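One lightweight way to document and enforce the rule is a single tier table in the generator; the category-to-tier mapping below is inferred from the pattern described above and would need to be confirmed against actual capacity requirements:

# Sketch: encode the host-tier rule once in the generator so YAML files
# cannot drift. The category -> tier mapping is an inference from this PR,
# not a confirmed infrastructure policy.

HOST_TIERS = {
    "cpu-minimal": "linux-mxlarge",         # CPU/minimal workloads
    "gpu-standard": "linux-d160-m2xlarge",  # GPU/ROCm-heavy builds
    "gpu-large": "linux-d160-m4xlarge",     # highest-resource GPU builds
}


def platform(category, arch="amd64"):
    """Build a platform identifier from a documented tier category."""
    return f"{HOST_TIERS[category]}/{arch}"


print(platform("gpu-standard"))        # linux-d160-m2xlarge/amd64
print(platform("gpu-large", "arm64"))  # linux-d160-m4xlarge/arm64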
🤖 Prompt for AI Agents
.tekton/odh-pipeline-runtime-pytorch-rocm-py312-ubi9-pull-request.yaml around
line 38: the Tekton platform identifier selection is undocumented and
inconsistent across files (mxlarge vs d160 variants); add a short comment at the
top of this file (and mirror into other Tekton YAMLs) that documents the
selection criteria and examples (e.g., "linux-mxlarge/amd64: CPU/minimal
workloads; linux-d160-m2xlarge/amd64: GPU/ROCm-heavy or high-memory jobs;
linux-d160-m4xlarge/amd64: highest resource GPU/IO-bound jobs"), and add a
single-sentence rationale for why this file uses the chosen host type; also
add/update a central README or CONTRIBUTING section listing the three
identifiers, selection rules, and guidance for future PRs so maintainers can
align host types consistently.
| spec: | ||
| timeouts: | ||
| pipeline: 3h | ||
| pipeline: 6h |
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
echo "=== Checking if file exists and examining timeout pattern ==="
FILE=".tekton/odh-workbench-jupyter-minimal-cuda-py312-ubi9-pull-request.yaml"
if [ -f "$FILE" ]; then
echo "File found, checking timeouts:"
grep -A 2 "timeouts:" "$FILE" | head -20
else
echo "File not found in root. Searching for .tekton files..."
fi
echo ""
echo "=== Checking all timeouts in .tekton directory ==="
find .tekton -name "*.yaml" -type f | head -5 | while read f; do
echo "--- $f ---"
grep -E "(timeouts:|pipeline:)" "$f" 2>/dev/null || echo "No timeouts found"
done
echo ""
echo "=== Checking generation script ==="
if [ -f "scripts/generate_pull_request_pipelineruns.py" ]; then
echo "Generation script found. Checking for timeout configuration:"
grep -n "timeout\|6h\|3h" scripts/generate_pull_request_pipelineruns.py | head -20
else
echo "Generation script not found"
fi
echo ""
echo "=== Listing scripts directory ==="
ls -la scripts/ 2>/dev/null | head -10

Repository: opendatahub-io/notebooks
Length of output: 1435
🏁 Script executed:
#!/bin/bash
echo "=== Generation script context around timeout (lines 75-95) ==="
sed -n '75,95p' scripts/generate_pull_request_pipelineruns.py
echo ""
echo "=== Checking git diff or git log for recent changes to generation script ==="
cd . && git log --oneline scripts/generate_pull_request_pipelineruns.py 2>/dev/null | head -5 || echo "Not a git repo or no history"
echo ""
echo "=== Checking if there are changes staged/modified in generation script ==="
git diff scripts/generate_pull_request_pipelineruns.py 2>/dev/null | head -50 || echo "Git diff not available"
echo ""
echo "=== Full context around timeouts in generation script ==="
grep -B 5 -A 5 '"pipeline": "3h"' scripts/generate_pull_request_pipelineruns.py
echo ""
echo "=== Checking if there's a platform configuration in generation script ==="
grep -n "platform\|linux\|amd64\|nodeSelector" scripts/generate_pull_request_pipelineruns.py | head -20Repository: opendatahub-io/notebooks
Length of output: 2304
Update the generation script to reflect the 6-hour timeout change.
The .tekton file has pipeline: 6h, but scripts/generate_pull_request_pipelineruns.py still specifies "pipeline": "3h" on line 85. Since this file is autogenerated, the generation script must be updated to persist this timeout change. Otherwise, regenerating the file will revert the timeout back to 3h.
If the 6-hour timeout is intentional (aligning with the RHAIENG-2460 compute resource updates), update the generation script to "pipeline": "6h" to ensure consistency across all generated files.
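If the literal appears exactly once in the generator, as the grep output above suggests, the fix can be as small as the following sketch (the in-place edit assumes GNU sed):
# Bump the generator's default PipelineRun timeout to match the checked-in files.
sed -i 's/"pipeline": "3h"/"pipeline": "6h"/' scripts/generate_pull_request_pipelineruns.py

# Regenerate and confirm the only remaining drift is intentional.
python scripts/generate_pull_request_pipelineruns.py
git diff --stat .tekton/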
🤖 Prompt for AI Agents
In .tekton/odh-workbench-jupyter-minimal-cuda-py312-ubi9-pull-request.yaml
around line 26 the pipeline timeout was changed to 6h but the generator
scripts/generate_pull_request_pipelineruns.py still emits "pipeline": "3h" (line
~85); update that script to emit "pipeline": "6h" (and any related
constants/templates) so regenerated files keep the 6-hour timeout, then re-run
the generator to verify output is updated.
  - name: build-platforms
    value:
-     - linux/x86_64
+     - linux-d160-m4xlarge/amd64
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Verify platform identifier patterns across all modified .tekton files
echo "=== Checking all build-platforms values in .tekton files ==="
rg -A 2 "build-platforms" .tekton/ | grep -A 1 "value:" | head -20
echo ""
echo "=== Checking for any remaining x86_64 references ==="
rg "x86_64" .tekton/ | head -20
echo ""
echo "=== Checking generation script for platform defaults ==="
if [ -f "scripts/generate_pull_request_pipelineruns.py" ]; then
rg "x86_64|amd64" scripts/generate_pull_request_pipelineruns.py | head -20
else
echo "Generation script not found at scripts/generate_pull_request_pipelineruns.py"
find . -name "*generate*" -type f 2>/dev/null | grep -E "\.(py|sh)$" | head -10
fi
🏁 Script executed:
#!/bin/bash
# Inspect generation script more carefully
echo "=== Finding and examining generation script ==="
find . -name "*generate*" -type f \( -name "*.py" -o -name "*.sh" \) 2>/dev/null | head -5
echo ""
echo "=== Checking generation script content around build_platforms ==="
rg -B 5 -A 10 'build_platforms.*\[' scripts/ | head -80
echo ""
echo "=== Check if generation script was recently modified and what changes affect .tekton files ==="
git log --oneline -20 -- scripts/generate_pull_request_pipelineruns.py 2>/dev/null | head -10
🏁 Script executed:
#!/bin/bash
# Check the specific file and understand what component should generate what
echo "=== Check the specific file from review comment ==="
cat .tekton/odh-workbench-jupyter-minimal-cuda-py312-ubi9-pull-request.yaml | grep -A 5 "build-platforms"
echo ""
echo "=== Search for odh-workbench-jupyter-minimal-cuda in generation script ==="
rg "odh-workbench-jupyter-minimal-cuda" scripts/generate_pull_request_pipelineruns.py
echo ""
echo "=== Check git diff for recent commit RHAIENG-2460 ==="
git show --stat 7bda7b18 | head -40
echo ""
echo "=== Check what changed in that commit for build_platforms ==="
git show 7bda7b18 -- scripts/generate_pull_request_pipelineruns.py | grep -B 3 -A 3 "build_platforms" | head -60
🏁 Script executed:
#!/bin/bash
# Final verification: is this component manually edited or should be in generation script?
echo "=== List all components with special build_platforms in generation script ==="
rg 'if component in \[' -A 5 scripts/generate_pull_request_pipelineruns.py | grep -E "(if component|odh-)"
echo ""
echo "=== Check if there's a d160 variant in generation script ==="
rg "d160|d160-m4xlarge|d160-m2xlarge" scripts/
echo ""
echo "=== Check git blame for the specific line in deployed file ==="
git log --all --full-history -p -- .tekton/odh-workbench-jupyter-minimal-cuda-py312-ubi9-pull-request.yaml | grep -B 5 -A 5 "linux-d160-m4xlarge" | head -30
Generation script does not include this component and needs updating.
The deployed file uses linux-d160-m4xlarge/amd64, but scripts/generate_pull_request_pipelineruns.py does not list odh-workbench-jupyter-minimal-cuda-py312-ubi9 in any special build platform configuration. The script would default this component to linux/x86_64 if regenerated, causing the deployed platform to be overwritten.
Add this component to the appropriate conditional block in the generation script to ensure the resource-specific platform identifier is consistently maintained across regenerations.
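A cheap guard against this class of drift is to regenerate in CI and fail when the committed files differ. A sketch, assuming the generator writes the .tekton/ files in place:
# Fail the build if regenerating the PipelineRuns would change any committed file.
python scripts/generate_pull_request_pipelineruns.py
if ! git diff --quiet -- .tekton/; then
  echo "ERROR: .tekton/ files drifted from generator output:" >&2
  git diff --name-only -- .tekton/ >&2
  exit 1
fi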
🤖 Prompt for AI Agents
.tekton/odh-workbench-jupyter-minimal-cuda-py312-ubi9-pull-request.yaml lines
38-38: the deployed file contains the platform identifier
"linux-d160-m4xlarge/amd64" but scripts/generate_pull_request_pipelineruns.py
does not include odh-workbench-jupyter-minimal-cuda-py312-ubi9 in its special
build-platform mapping, so regenerating will revert it to linux/x86_64; update
scripts/generate_pull_request_pipelineruns.py by adding
odh-workbench-jupyter-minimal-cuda-py312-ubi9 to the appropriate
conditional/list for resource-specific platforms (the block that maps components
to linux-d160-m4xlarge/amd64 or the CUDA/UBI9 special-case group), ensuring the
generator emits the exact "linux-d160-m4xlarge/amd64" identifier for this
component on future runs.
  - name: build-platforms
    value:
-     - linux/x86_64
+     - linux-d160-m4xlarge/amd64
🧩 Analysis chain
🏁 Script executed:
# Find the file and check if it exists, then examine its content around line 40
find . -name "*pull-request.yaml" -o -name "*.tekton/*.yaml" 2>/dev/null | head -20Repository: opendatahub-io/notebooks
Length of output: 1384
🏁 Script executed:
# Find the specific workbench-rstudio file
find . -name "*workbench*rstudio*.yaml" 2>/dev/null
# Then examine the file around line 40 with context
if [ -f "./.tekton/odh-workbench-rstudio-minimal-cuda-py312-rhel9-pull-request.yaml" ]; then
echo "=== File content around line 40 ==="
sed -n '30,50p' "./.tekton/odh-workbench-rstudio-minimal-cuda-py312-rhel9-pull-request.yaml"
fi
🏁 Script executed:
# Check git log to see recent changes to this file
git log --oneline -n 20 -- "./.tekton/odh-workbench-rstudio-minimal-cuda-py312-rhel9-pull-request.yaml" 2>/dev/null
# Show the latest diff for this file
echo "=== Latest git diff ==="
git diff HEAD~1 HEAD -- "./.tekton/odh-workbench-rstudio-minimal-cuda-py312-rhel9-pull-request.yaml" 2>/dev/null | head -100
🏁 Script executed:
# Look for generation scripts - common patterns
find . -name "*.py" -o -name "*.sh" -o -name "Makefile" 2>/dev/null | xargs grep -l "tekton\|generate" 2>/dev/null | grep -v ".git" | head -10Repository: opendatahub-io/notebooks
Length of output: 509
🏁 Script executed:
# Check if there's a template or common pattern in other workbench files
echo "=== Checking line 40 in other workbench files ==="
for file in ./.tekton/odh-workbench-*.yaml; do
echo "File: $file"
sed -n '40p' "$file" 2>/dev/null
done | head -40
🏁 Script executed:
# Let's look at the generation script to understand how these platform identifiers are created
echo "=== Examining the generation script ==="
head -100 ./ci/cached-builds/konflux_generate_component_build_pipelines.py
# Also check if there's a config file or data structure that defines platforms
echo ""
echo "=== Looking for platform configuration ==="
grep -r "linux-d160-m4xlarge\|amd64" . --include="*.py" --include="*.yaml" --include="*.json" 2>/dev/null | grep -v ".git" | head -20Repository: opendatahub-io/notebooks
Length of output: 5629
🏁 Script executed:
# Let's see more of the git history and PR/commit details
git log --all --oneline -n 50 -- "./.tekton/odh-workbench-rstudio-minimal-cuda-py312-rhel9-pull-request.yaml" 2>/dev/null
# Check the commit that changed this file
git show 7bda7b18 --stat 2>/dev/null | head -50
🏁 Script executed:
# Check for any recent changes to the generation script itself
echo "=== Recent changes to generation script ==="
git log --oneline -n 20 -- "./ci/cached-builds/konflux_generate_component_build_pipelines.py" 2>/dev/null
# Show the actual generation script content
echo ""
echo "=== Generation script content (first 150 lines) ==="
head -150 ./ci/cached-builds/konflux_generate_component_build_pipelines.py
🏁 Script executed:
# Let's check if there's a README or documentation about the build platforms
find . -name "README*" -o -name "*.md" 2>/dev/null | xargs grep -l "platform\|build-platform" 2>/dev/null | head -5
# Also check for any CI/CD configuration that might explain this
ls -la ./.github/workflows/ ./.tekton/ 2>/dev/null | head -20
🏁 Script executed:
# Let's look at what changed in the latest commit
echo "=== Full commit message and details ==="
git show 7bda7b18 2>/dev/null | head -100
🏁 Script executed:
# Let's see the full generation script, specifically looking for where build-platforms are determined
grep -n "build-platforms\|build_platforms" ./ci/cached-builds/konflux_generate_component_build_pipelines.py
# And check for any data structure or config that maps components to platforms
grep -n "rstudio\|platform" ./ci/cached-builds/konflux_generate_component_build_pipelines.py | head -40
# Full script - let's see the main logic
wc -l ./ci/cached-builds/konflux_generate_component_build_pipelines.py
🏁 Script executed:
# Let's look at the entire script to find where platform assignment happens
tail -500 ./ci/cached-builds/konflux_generate_component_build_pipelines.py | head -300
🏁 Script executed:
# Check if there's a mapping or config file for components to platforms
find ./ci -name "*.py" -o -name "*.yaml" -o -name "*.json" 2>/dev/null | xargs grep -l "rstudio.*cuda\|platform.*map" 2>/dev/null
# Also check the commit diff more thoroughly
git show 7bda7b18 -- ./ci/cached-builds/konflux_generate_component_build_pipelines.py 2>/dev/null | head -200
🏁 Script executed:
# Let's look for the actual component configuration or release data
find ./ci -type f \( -name "*.py" -o -name "*.yaml" -o -name "*.txt" -o -name "*.json" \) 2>/dev/null | head -20
# Check if there's a components.yaml or similar
ls -la ./ci/cached-builds/ 2>/dev/null
🏁 Script executed:
# Let's look at one of the input config files or understand the structure better
echo "=== Searching for release or component configuration ==="
find . -name "*.yaml" -o -name "*.json" 2>/dev/null | grep -v ".git\|.tekton\|\.github" | head -20
# Check if there's a releases or components file
cat ./ci/cached-builds/releases.yaml 2>/dev/null || echo "No releases.yaml"
cat ./ci/cached-builds/components.yaml 2>/dev/null || echo "No components.yaml"
🏁 Script executed:
# Search for the main function and where component_name maps to platform
grep -n "def main\|rstudio\|cuda.*platform\|PLATFORMS\|platform.*=" ./ci/cached-builds/konflux_generate_component_build_pipelines.py | head -50
# Look for any dictionary or data structure that maps components to specific platforms
python3 << 'EOF'
import re
with open('./ci/cached-builds/konflux_generate_component_build_pipelines.py', 'r') as f:
content = f.read()
# Find the main function and what comes after
main_match = re.search(r'def main\(\).*', content, re.DOTALL)
if main_match:
main_content = main_match.group(0)
# Extract first 2000 chars of main function
print("=== Main function (first 2000 chars) ===")
print(main_content[:2000])
EOF
🏁 Script executed:
# Let's search for where the platform identifiers are being set for specific components
grep -B5 -A5 "linux-d160-m4xlarge\|linux-m2xlarge\|build-platforms" ./ci/cached-builds/konflux_generate_component_build_pipelines.py | head -60
🏁 Script executed:
# Check the full main function and initialization
sed -n '/^def main/,/^def \|^if __name__/p' ./ci/cached-builds/konflux_generate_component_build_pipelines.py | head -200
🏁 Script executed:
# Look for any config or mapping dictionary
grep -n "dict\|PLATFORMS\|platforms.*=\|component.*platform" ./ci/cached-builds/konflux_generate_component_build_pipelines.py | head -30Repository: opendatahub-io/notebooks
Length of output: 208
🏁 Script executed:
# Let's look at the end of the script where main is called
tail -100 ./ci/cached-builds/konflux_generate_component_build_pipelines.py
🏁 Script executed:
# The script shows hard-coded "linux/x86_64" but files have different values
# Let me check if there's custom logic or post-processing that modifies platforms per component
grep -n "component_name\|task_name\|platform\|linux-d160\|linux-m" ./ci/cached-builds/konflux_generate_component_build_pipelines.py | head -50Repository: opendatahub-io/notebooks
Length of output: 2648
🏁 Script executed:
# Check if the component_build_pipeline function has any conditional logic for platforms
sed -n '62,300p' ./ci/cached-builds/konflux_generate_component_build_pipelines.py | grep -A20 "def component_build_pipeline"
🏁 Script executed:
# Search the entire script for any logic that modifies platforms based on component type or name
python3 << 'EOF'
with open('./ci/cached-builds/konflux_generate_component_build_pipelines.py', 'r') as f:
lines = f.readlines()
# Look for any conditional logic around platform or cuda/rocm
for i, line in enumerate(lines, 1):
if 'platform' in line.lower() or 'cuda' in line.lower() or 'rocm' in line.lower() or 'component_name' in line:
print(f"{i:4d}: {line.rstrip()}")
EOF
🏁 Script executed:
# Let's check if there's a separate component or release definition file
find . -name "*.yaml" -o -name "*.yml" 2>/dev/null | xargs grep -l "rstudio.*cuda\|platform.*cuda" 2>/dev/null | grep -v ".tekton\|.git"Repository: opendatahub-io/notebooks
Length of output: 205
🏁 Script executed:
# Look for any recent modifications to the generated files
git log --all --oneline -- "./.tekton/odh-workbench-rstudio-minimal-cuda-py312-rhel9-pull-request.yaml" 2>/dev/null | head -5
# And check if there's a diff to see what changed
git diff HEAD~10 HEAD -- "./.tekton/odh-workbench-rstudio-minimal-cuda-py312-rhel9-pull-request.yaml" 2>/dev/null | head -50
🏁 Script executed:
# Check if the generated YAML files have any comments indicating manual modification
head -5 ./.tekton/odh-workbench-rstudio-minimal-cuda-py312-rhel9-pull-request.yaml
# Check the full commit that added these platform changes
git show 7bda7b18 -- "./.tekton/odh-workbench-rstudio-minimal-cuda-py312-rhel9-pull-request.yaml" 2>/dev/null | head -100Repository: opendatahub-io/notebooks
Length of output: 3807
🏁 Script executed:
# Let's see if the generation script has been recently updated - check its history
git log --all -p -- "./ci/cached-builds/konflux_generate_component_build_pipelines.py" 2>/dev/null | head -300
🏁 Script executed:
# Check if there's a taskRunSpecs section that was added which might explain resource changes
grep -B2 -A10 "taskRunSpecs" ./.tekton/odh-workbench-rstudio-minimal-cuda-py312-rhel9-pull-request.yamlRepository: opendatahub-io/notebooks
Length of output: 477
🏁 Script executed:
# Let's look at the full spec of the script around line 720 where taskRunSpecs is generated
sed -n '710,760p' ./ci/cached-builds/konflux_generate_component_build_pipelines.py
🏁 Script executed:
# The header says "autogenerated by scripts/generate_pull_request_pipelineruns.py" but we found a different script
# Let's check if that script exists
find . -name "*generate_pull_request*" 2>/dev/null
# Also check if these files are actually being manually maintained or if there's manual post-processing
git log --all --oneline -n 30 -- "./.tekton/odh-workbench-rstudio-minimal-cuda-py312-rhel9-pull-request.yaml" 2>/dev/null
# Let's check the previous version of this file to understand the history
git show 70d0c355 -- "./.tekton/odh-workbench-rstudio-minimal-cuda-py312-rhel9-pull-request.yaml" 2>/dev/null | head -80
🏁 Script executed:
# Check if the platforms are being set dynamically via some other mechanism
# Let's look for any sed/awk scripts or post-processing
find ./ci -type f \( -name "*.sh" -o -name "*.py" \) 2>/dev/null | xargs grep -l "sed\|awk\|platform\|linux-d160" 2>/dev/null | head -10
🏁 Script executed:
# Let's verify the current state - are the files manually maintained or auto-generated?
git diff --cached -- "./.tekton/odh-workbench-rstudio-minimal-cuda-py312-rhel9-pull-request.yaml" 2>/dev/null | head -50
# Check the status of .tekton files
git status -- "./.tekton/" 2>/dev/null | head -20Repository: opendatahub-io/notebooks
Length of output: 133
🏁 Script executed:
# Found it! Let's check the actual generator script that's being used
head -100 ./scripts/generate_pull_request_pipelineruns.py
# And check its recent history
git log --oneline -n 20 -- "./scripts/generate_pull_request_pipelineruns.py" 2>/dev/nullRepository: opendatahub-io/notebooks
Length of output: 3728
🏁 Script executed:
# Let's understand the full context of the platform change
# Check if the script has logic to set different platforms per component
grep -n "platform\|cuda\|rstudio\|linux-d160" ./scripts/generate_pull_request_pipelineruns.py | head -30Repository: opendatahub-io/notebooks
Length of output: 969
🏁 Script executed:
# Let's see the full script to understand how platforms are determined
cat ./scripts/generate_pull_request_pipelineruns.py
🏁 Script executed:
# Let's check the exact diff between the two commits to see all changes
git diff 70d0c355 7bda7b18 -- "./.tekton/odh-workbench-rstudio-minimal-cuda-py312-rhel9-pull-request.yaml" 2>/dev/null
🏁 Script executed:
# One final check - let's see what the current PR description says (if available in commit message)
git log --format="%B" 7bda7b18 2>/dev/null | head -20Repository: opendatahub-io/notebooks
Length of output: 346
Document the manual platform override and maintenance strategy.
The platform identifier changed to linux-d160-m4xlarge/amd64 to address OOM issues by targeting larger compute instances—a legitimate resource scaling decision evidenced by the added taskRunSpecs (8 CPU, 32Gi RAM). However, this file is marked as autogenerated by scripts/generate_pull_request_pipelineruns.py yet is maintained manually. The generator script has no entry for this component and defaults to linux/x86_64, so this file will not be regenerated and requires ongoing manual maintenance.
Required:
- Either add this component to the platform mapping in scripts/generate_pull_request_pipelineruns.py with the linux-d160-m4xlarge/amd64 configuration, or remove the "autogenerated" header and document this as a manually maintained override file.
- Update the header comment to clearly state the manual maintenance requirement to prevent future confusion.
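If option (B) is chosen, stamping the override notice can be scripted. A sketch, assuming GNU sed; the header wording is illustrative, not the project's actual convention:
f=.tekton/odh-workbench-rstudio-minimal-cuda-py312-rhel9-pull-request.yaml
# Prepend a manual-maintenance notice (two '1i' commands preserve line order).
sed -i \
  -e '1i # MANUALLY MAINTAINED: platform pinned to linux-d160-m4xlarge/amd64' \
  -e '1i # to avoid OOM in CUDA builds; this file is NOT emitted by the generator.' \
  "$f"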
🤖 Prompt for AI Agents
.tekton/odh-workbench-rstudio-minimal-cuda-py312-rhel9-pull-request.yaml around
line 40: this file contains a manual platform override
(linux-d160-m4xlarge/amd64) but is marked as autogenerated while the generator
script (scripts/generate_pull_request_pipelineruns.py) has no entry for this
component and defaults to linux/x86_64; fix by either (A) adding this component
and its linux-d160-m4xlarge/amd64 mapping to
scripts/generate_pull_request_pipelineruns.py so the file can be regenerated
consistently, or (B) remove the “autogenerated” header from this YAML and
replace it with a clear header stating this file is a manual platform override
that requires ongoing maintenance (include reason for override and who/where to
update), and commit that header change to prevent future confusion.
This last test was good. No failures in clair-scan and ecosystem-cert-preflight-checks. These 3 failures are a known issue: Red Hat Konflux / odh-base-image-cpu-py312-ubi9-on-pull-request
Thank you Vath for fixing this. Changes look good to me. /lgtm
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: atheo89
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Details
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
@atheo89 this is a PR to odh-konflux-central: https://github.com/opendatahub-io/odh-konflux-central/pull/95/files
@ysok: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:
Full PR test history. Your PR dashboard.
Details
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
/hold
Revision 7bda7b1 was retested 3 times: holding
Description
Update .tekton YAML files from linux/x86_64 to linux/amd64 to take advantage of the available larger resources.
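For the record, the bulk of this change is mechanical and can be reproduced with something along these lines (a sketch; the per-file larger-pool overrides such as linux-d160-m4xlarge/amd64 were applied separately and reviewed by hand):
# Rewrite the plain platform identifier across all Tekton PipelineRuns.
grep -rl 'linux/x86_64' .tekton/ | xargs sed -i 's|linux/x86_64|linux/amd64|g'
git diff --stat .tekton/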
How Has This Been Tested?
Self checklist (all need to be checked):
- Run make test (gmake on macOS) before asking for review.
- Changes to Dockerfile.konflux files should be done in odh/notebooks and automatically synced to rhds/notebooks. For Konflux-specific changes, modify Dockerfile.konflux files directly in rhds/notebooks as these require special attention in the downstream repository and flow to the upcoming RHOAI release.
Merge criteria:
Summary by CodeRabbit