
Conversation


@Al-Pragliola Al-Pragliola commented Aug 14, 2025

supersedes #302

Description

How Has This Been Tested?

Merge criteria:

  • All the commits have been signed-off (To pass the DCO check)
  • The commits have meaningful messages
  • Automated tests are provided as part of the PR for major new functionalities; testing instructions have been added in the PR body (for PRs involving changes that are not immediately obvious).
  • The developer has manually tested the changes and verified that the changes work.
  • Code changes follow the kubeflow contribution guidelines.
  • For first time contributors: Please reach out to the Reviewers to ensure all tests are being run, ensuring the label ok-to-test has been added to the PR.

If you have UI changes

  • The developer has added tests or explained why testing cannot be added.
  • Included any necessary screenshots or GIFs if it was a UI change.
  • Verify that UI/UX changes conform to the UX guidelines for Kubeflow.

Summary by CodeRabbit

  • New Features

    • Added Experiments and Experiment Runs APIs, including artifact association and metric history.
    • Introduced new artifact types: Dataset, Metric, and Parameter.
    • Added rich filtering (filterQuery, artifactType, stepIds) across listing endpoints.
    • UI: new multi-filter toolbar, “Latest” badge in version selector, streamlined “View all versions.”
    • BFF: option to skip TLS verification for local/dev setups.
    • Catalog: support excluding models via patterns.
  • Documentation

    • Updated guides to REST/OpenAPI; removed ml-metadata references.
  • Chores

    • Docker Compose now uses MySQL-backed datastore; removed ml-metadata services.
    • Tightened GitHub Actions permissions; added a fuzz-test workflow.
  • Tests

    • Expanded API, UI, and client test coverage.

tarilabs and others added 22 commits August 12, 2025 08:34
…9.0.20250809 in /clients/python (kubeflow#1445)

Bumps [types-python-dateutil](https://github.com/typeshed-internal/stub_uploader) from 2.9.0.20250708 to 2.9.0.20250809.
- [Commits](https://github.com/typeshed-internal/stub_uploader/commits)

---
updated-dependencies:
- dependency-name: types-python-dateutil
  dependency-version: 2.9.0.20250809
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
…on (kubeflow#1446)

Bumps [coverage](https://github.com/nedbat/coveragepy) from 7.10.2 to 7.10.3.
- [Release notes](https://github.com/nedbat/coveragepy/releases)
- [Changelog](https://github.com/nedbat/coveragepy/blob/master/CHANGES.rst)
- [Commits](coveragepy/coveragepy@7.10.2...7.10.3)

---
updated-dependencies:
- dependency-name: coverage
  dependency-version: 7.10.3
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
…eflow#1448)

Bumps [boto3](https://github.com/boto/boto3) from 1.40.3 to 1.40.6.
- [Release notes](https://github.com/boto/boto3/releases)
- [Commits](boto/boto3@1.40.3...1.40.6)

---
updated-dependencies:
- dependency-name: boto3
  dependency-version: 1.40.6
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
…ubeflow#1449)

Bumps [ruff](https://github.com/astral-sh/ruff) from 0.12.7 to 0.12.8.
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](astral-sh/ruff@0.12.7...0.12.8)

---
updated-dependencies:
- dependency-name: ruff
  dependency-version: 0.12.8
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
…flow#1435)

* ci: Enable execution of make test-fuzz via pull request comment

* Provide simplified mechanism for executing `make test-fuzz` for specific pull requests

Signed-off-by: Chris Hambridge <[email protected]>

* ci: Fix comment check and update status checks section.

Signed-off-by: Chris Hambridge <[email protected]>

* Update .github/workflows/test-fuzz.yml

Signed-off-by: Matteo Mortari <[email protected]>

---------

Signed-off-by: Chris Hambridge <[email protected]>
Signed-off-by: Matteo Mortari <[email protected]>
Co-authored-by: Matteo Mortari <[email protected]>
…python (kubeflow#1450)

Bumps [schemathesis](https://github.com/schemathesis/schemathesis) from 4.0.21 to 4.0.25.
- [Release notes](https://github.com/schemathesis/schemathesis/releases)
- [Changelog](https://github.com/schemathesis/schemathesis/blob/master/CHANGELOG.md)
- [Commits](schemathesis/schemathesis@v4.0.21...v4.0.25)

---
updated-dependencies:
- dependency-name: schemathesis
  dependency-version: 4.0.25
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* add filter support to model catalog

Signed-off-by: Adysen Rothman <[email protected]>

* add filter support to yaml

Signed-off-by: Adysen Rothman <[email protected]>

* update tests

Signed-off-by: Adysen Rothman <[email protected]>

* more clear model verbiage & logging

Signed-off-by: Adysen Rothman <[email protected]>

* rm sample rhec catalog --> example in readme

Signed-off-by: Adysen Rothman <[email protected]>

---------

Signed-off-by: Adysen Rothman <[email protected]>
…#1224 (kubeflow#1318)

* feat: initial version of experiments and runs API

Signed-off-by: Dhiraj Bokde <[email protected]>

* feat: experiments and runs initial implementation (wip)

Signed-off-by: Dhiraj Bokde <[email protected]>

* fix: fixed failing unit tests for experiments and runs

Signed-off-by: Dhiraj Bokde <[email protected]>

* fix: added experiment and experimentrun tests

Signed-off-by: Dhiraj Bokde <[email protected]>

* feat: added DataSet, Metric, and Parameter types

Signed-off-by: Dhiraj Bokde <[email protected]>

* feat: added implementation of DataSet, Metric, and Param, including service tests

Signed-off-by: Dhiraj Bokde <[email protected]>

* fix: replace int properties for timestamps with string because mlmd type properties only support int32, not int64

Signed-off-by: Dhiraj Bokde <[email protected]>

* feat: add support for artifactType query param to filter artifact types in artifact queries

Signed-off-by: Dhiraj Bokde <[email protected]>

* fix: add metrics history endpoint and metric history storage for experiment run metrics

Signed-off-by: Dhiraj Bokde <[email protected]>

* fix: fix artifactType query param type in generated service

Signed-off-by: Dhiraj Bokde <[email protected]>

* fix: fix go lint error in unit test

Signed-off-by: Dhiraj Bokde <[email protected]>

* fix: filter out metric history from artifacts endpoints

Signed-off-by: Dhiraj Bokde <[email protected]>

* fix: fix metric history name to use last update time to avoid name conflicts

Signed-off-by: Dhiraj Bokde <[email protected]>

* feat: add filterQuery param on all context types to search by properties and custom properties

Signed-off-by: Dhiraj Bokde <[email protected]>

* feat: initial version of experiment tracking implemented on embedmd, rebased on main

Signed-off-by: Dhiraj Bokde <[email protected]>

* feat: add support for filterQuery parameter for all ListResponse endpoints for embedmd datastore

Signed-off-by: Dhiraj Bokde <[email protected]>

* fix: add support for stepIds query parameter in embedmd datastore

Signed-off-by: Dhiraj Bokde <[email protected]>

* feat: refactor embedmd db service to use generic repository implementation to reduce code duplication

Signed-off-by: Dhiraj Bokde <[email protected]>

* fix: add support for artifactType query parameter for embedmd datastore

Signed-off-by: Dhiraj Bokde <[email protected]>

* fix: use mysql 8.3 in unit tests

Signed-off-by: Dhiraj Bokde <[email protected]>

* fix: refactor name mapping and default name handling in embedmd datastore

Signed-off-by: Dhiraj Bokde <[email protected]>

* feat: support updating metrics and parameters by name, fix ignoring metric history when retrieving all artifacts for runs and versions

Signed-off-by: Dhiraj Bokde <[email protected]>

* fix: add missing generated openapi python client files for PR github action check

Signed-off-by: Dhiraj Bokde <[email protected]>

* fix: fix failing shared db tests

Signed-off-by: Dhiraj Bokde <[email protected]>

* fix: add support for metric and parameter description, add missing type property migration

Signed-off-by: Dhiraj Bokde <[email protected]>

* chore: update files from main

Signed-off-by: Alessio Pragliola <[email protected]>

* fix: added missing godoc comments in pkg/api/api.go

Signed-off-by: Dhiraj Bokde <[email protected]>

* fix: replace ambiguous ArtifactListReponse return type from GetExperimentRunMetricHistory with MetricListResponse

Signed-off-by: Dhiraj Bokde <[email protected]>

* fix: fixed incorrect artifactType in dataset response, added tests to verify all artifact types

Signed-off-by: Dhiraj Bokde <[email protected]>

* feat: add validation for endTimeSinceEpoch property on experiment run updates

Signed-off-by: Dhiraj Bokde <[email protected]>

* Replace value type validation map with a switch in query_translator.go

Co-authored-by: Paul Boyd <[email protected]>
Signed-off-by: Dhiraj Bokde <[email protected]>

* fix: add service e2e tests for filterQuery, fix name query param handling, fix DB tests that didn't use parent id prefix

Signed-off-by: Dhiraj Bokde <[email protected]>

* chore: code cleanup, replace interface{} with any, added vetting for internal/db/filter

Signed-off-by: Dhiraj Bokde <[email protected]>

* chore: added flag vF for fixed string grep exclude

Signed-off-by: Dhiraj Bokde <[email protected]>

* fix: copied orderby and parameters back to registry and catalog to have different values

Signed-off-by: Dhiraj Bokde <[email protected]>

* fix: fixed mlmd query translator handling of escaped backslashes

Signed-off-by: Dhiraj Bokde <[email protected]>

* chore: add test to verify parseCustomPropertyField won't panic with a property name ending in dot

Signed-off-by: Dhiraj Bokde <[email protected]>

* fix: sync generated python client code

Signed-off-by: Dhiraj Bokde <[email protected]>

* fix: readiness probe tests and new types

Signed-off-by: Alessio Pragliola <[email protected]>

* chore: refactor readiness_test

Signed-off-by: Alessio Pragliola <[email protected]>

* fix: ensure parentResourceId is used to filter resource lookup by params, add unit tests for duplicate child resource lookups

Signed-off-by: Dhiraj Bokde <[email protected]>

* fix: throw an error if a metric value is missing, add test to validate

Signed-off-by: Dhiraj Bokde <[email protected]>

* fix: fix http status error code for invalid ids

Signed-off-by: Dhiraj Bokde <[email protected]>

* fix: more id validation, fixed filterQuery passing to DB layer

Signed-off-by: Dhiraj Bokde <[email protected]>

* fix: fix failing unit test

Signed-off-by: Dhiraj Bokde <[email protected]>

* fix: validate experiment id when listing runs

Signed-off-by: Dhiraj Bokde <[email protected]>

* fix: fix failing validation test after fixing http status codes

Signed-off-by: Dhiraj Bokde <[email protected]>

* fix: avoid duplicate key errors if externalid is set in metric when creating metric history entries

Signed-off-by: Dhiraj Bokde <[email protected]>

* fix: add fuzzer tests for experiment runs and new artifact types

Signed-off-by: Dhiraj Bokde <[email protected]>

* chore: code cleanup and format fuzzer tests

Signed-off-by: Dhiraj Bokde <[email protected]>

* fix: log error in fuzzer test

Signed-off-by: Dhiraj Bokde <[email protected]>

* fix: handle null artifact names correctly on create

Signed-off-by: Dhiraj Bokde <[email protected]>

---------

Signed-off-by: Dhiraj Bokde <[email protected]>
Signed-off-by: Alessio Pragliola <[email protected]>
Signed-off-by: Alessio Pragliola <[email protected]>
Co-authored-by: Alessio Pragliola <[email protected]>
Co-authored-by: Alessio Pragliola <[email protected]>
Co-authored-by: Paul Boyd <[email protected]>
…beflow#1454)

* ci: Update the GHA workflow to utilize labels instead of comments

* Use label "run-fuzz-test" to trigger GHA flow

Signed-off-by: Chris Hambridge <[email protected]>

* ci: fix the image version variable to work with the deploy_on_kind expectation

Signed-off-by: Chris Hambridge <[email protected]>

---------

Signed-off-by: Chris Hambridge <[email protected]>
as it's blocking merge on the Approval label,
which skips the optional Fuzz test

Removing it, as we need another approach anyway

Signed-off-by: Matteo Mortari <[email protected]>
* Structure MR BFF server to use secure HTTP calls

Signed-off-by: manaswinidas <[email protected]>

* add configurable TLS verification for MR HTTP client

Signed-off-by: manaswinidas <[email protected]>

* Add instructions to disable TLS for local

Signed-off-by: manaswinidas <[email protected]>

---------

Signed-off-by: manaswinidas <[email protected]>
* feat: remove most of MLMD references

Signed-off-by: Alessio Pragliola <[email protected]>

* fix: post rebase

Signed-off-by: Alessio Pragliola <[email protected]>

* fix: add function from mlmd deleted file

Signed-off-by: Alessio Pragliola <[email protected]>

* chore: removed pending mlmd references

Signed-off-by: Alessio Pragliola <[email protected]>

* fix: remove unused function

Signed-off-by: Alessio Pragliola <[email protected]>

* chore: remove unused files

Signed-off-by: Alessio Pragliola <[email protected]>

---------

Signed-off-by: Alessio Pragliola <[email protected]>
@mturley is providing extensive contributions to the UI efforts
in terms of code, architecture, and coordination with the UI team

Signed-off-by: Matteo Mortari <[email protected]>
* fixed filtering in model registry views

Signed-off-by: rsun19 <[email protected]>

* extracted utils function

Signed-off-by: rsun19 <[email protected]>

* moved filter toolbar to shared folder

Signed-off-by: rsun19 <[email protected]>

---------

Signed-off-by: rsun19 <[email protected]>
* Provide simple workflow dispatch mechanism to avoid GHA vulnerabilities.

Signed-off-by: Chris Hambridge <[email protected]>
…ubeflow#1447)

Bumps google.golang.org/protobuf from 1.36.6 to 1.36.7.

---
updated-dependencies:
- dependency-name: google.golang.org/protobuf
  dependency-version: 1.36.7
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

openshift-ci bot commented Aug 14, 2025

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all


coderabbitai bot commented Aug 14, 2025

Walkthrough

Repository-wide updates: workflows gain explicit permissions; a new fuzz test workflow is added. MLMD protos and related Makefile rules are removed; compose files switch to embedmd with MySQL. OpenAPI adds experiments, metrics, parameters, filters. Python client and UI (BFF/frontend) are extended. Catalog gains model exclusions. Go utilities refactored for ID validation.

Changes

Cohort / File(s) Summary
GitHub Actions permissions & workflow
.github/workflows/* (async-upload-test.yml, build*.yml, check-*.yaml, controller-test.yml, csi-test.yml, fossa-license-scanning.yml, prepare.yml, python-tests.yml, ui-*-build.yml), .github/workflows/test-fuzz.yml
Added top/job-level permissions (contents: read; packages: write where building). Introduced “Fuzz Test” workflow with PR checkout, Python setup, KinD AppArmor tweak, and make test-fuzz.
MLMD removal & build/tooling
Makefile, api/grpc/ml_metadata/proto/metadata_store*.proto, Dockerfile, go.mod, cmd/proxy.go, cmd/root.go
Removed MLMD protos and generation/toolchain; simplified deps. Dockerfile copies go.work.sum. go.mod adds participle/v2, bumps protobuf, shifts grpc to indirect. Proxy/root help drops MLMD flags/narrative.
Runtime stack (compose)
docker-compose.yaml, docker-compose-local.yaml, docker-compose-mysql.yaml
Switched to embedmd datastore backed by MySQL; added mysql service, healthchecks, volumes. Removed mlmd-server. Deleted docker-compose-mysql.yaml.
Docs
README.md, docs/mr_go_library.md, clients/ui/bff/README.md
Updated narratives from MLMD/gRPC to REST/datastore; docker-compose instructions reference MySQL; added TLS verification guidance for BFF; Go client doc rewritten for OpenAPI client patterns.
OpenAPI specifications
api/openapi/model-registry.yaml, api/openapi/src/model-registry.yaml, api/openapi/catalog.yaml, api/openapi/src/lib/common.yaml
Added Experiment/ExperimentRun schemas and endpoints; new artifact subtypes (DataSet, Metric, Parameter); introduced query parameters (filterQuery, artifactType, stepIds) and ArtifactTypeQueryParam (a minimal request sketch follows this table).
Catalog: exclusions & tests
catalog/internal/catalog/rhec_catalog.go, yaml_catalog.go, .../rhec_catalog_test.go, .../yaml_catalog_test.go, .../catalog_test.go, .../testdata/test-catalog-sources.yaml
Added excludedModels support and string-based models list; implemented exclusion matching; updated constructors and tests; added new catalog entry.
Server OpenAPI glue
catalog/internal/server/openapi/.openapi-generator/FILES, catalog/internal/server/openapi/type_asserts.go, catalog/pkg/openapi/.openapi-generator/FILES, catalog/pkg/openapi/model_artifact_type_query_param.go
Registered new generated file and added assertion stubs; introduced ArtifactTypeQueryParam type with validation and nullable wrapper.
Python client: version & models
clients/python/pyproject.toml, src/model_registry/__init__.py, src/mr_openapi/README.md, src/.openapi-generator/FILES, src/mr_openapi/__init__.py, src/mr_openapi/models/*
Bumped version to 0.2.23. Generated models for experiments, datasets, metrics, parameters, and ArtifactTypeQueryParam; expanded Artifact* oneOfs; minor description tweak.
Python tests
clients/python/tests/constants.py, clients/python/tests/test_api.py
Added artifact types to constants. Introduced helpers; added experiment/run artifact tests; refactored artifact POST/PATCH flows.
UI BFF TLS flag
clients/ui/bff/cmd/main.go, .../cmd/helpers.go, .../internal/config/environment.go, .../internal/api/middleware.go, .../internal/integrations/mrserver/http.go, .../http_test.go
Added InsecureSkipVerify config/flag and env parsing; threaded into HTTP client; updated constructor signatures and tests.
UI frontend: filters & UX
clients/ui/frontend/src/app/shared/components/FilterToolbar.tsx, .../screens/const.ts, .../screens/utils.ts, .../ModelVersionDetails/*.tsx, .../components/ViewAllVersionsButton.tsx, .../ModelVersions*/**.tsx, .../RegisteredModels*/**.tsx
Introduced generic FilterToolbar and filter data types; refactored filtering logic; updated version selector with search and Latest badge; changed ViewAllVersionsButton props/context.
UI frontend tests & pages
clients/ui/frontend/src/__tests__/cypress/cypress/pages/*, .../tests/mocked/modelRegistry/*.cy.ts, .../screens/__tests__/utils.spec.ts, src/app/types.ts, __mocks__/mockRegisteredModel.ts
Updated page objects to toolbar-based filters; added owner to RegisteredModel type/mock; expanded tests for combined filters, archive flows, latest badge, and navigation.
Catalog server openapi manifests
catalog/internal/server/openapi/.openapi-generator/FILES, catalog/pkg/openapi/.openapi-generator/FILES
Added file entries for artifact type query param model.
Internal Go utils
internal/apiutils/api_utils.go
Removed list options builders; added ID validation helpers for int32/int64 (and pointers).
Ownership
clients/ui/OWNERS
Reordered approvers; added mturley.
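
As referenced in the OpenAPI specifications row above, the new listing filters are plain query parameters. Below is a minimal, stdlib-only sketch of passing filterQuery and artifactType to a listing endpoint; the base path, the filter expression, and the artifactType value are assumptions for illustration, so check the updated OpenAPI spec (ArtifactTypeQueryParam) for the exact endpoint and allowed values.

```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
)

func main() {
	// Assumed base path for a local registry; consult the updated OpenAPI spec
	// for the exact endpoint on your deployment.
	base := "http://localhost:8080/api/model_registry/v1alpha3/artifacts"

	// Query parameter names (filterQuery, artifactType) come from this PR;
	// the filter expression and the artifactType value are illustrative only.
	params := url.Values{}
	params.Set("filterQuery", "name = 'my-metric'")
	params.Set("artifactType", "metric")

	req, err := http.NewRequest(http.MethodGet, base+"?"+params.Encode(), nil)
	if err != nil {
		fmt.Println("building request failed:", err)
		return
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```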

Sequence Diagram(s)

sequenceDiagram
  actor User
  participant BFF as UI BFF
  participant CFG as Env/Flags
  participant HTTP as HTTP Client
  participant MR as Model Registry

  User->>BFF: Start with -insecure-skip-verify / INSECURE_SKIP_VERIFY
  BFF->>CFG: Load EnvConfig.InsecureSkipVerify
  BFF->>HTTP: NewHTTPClient(..., insecureSkipVerify)
  HTTP->>MR: HTTPS request (verify per flag)
  MR-->>HTTP: Response
  HTTP-->>BFF: Data
  BFF-->>User: API response
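
The flow above boils down to threading one boolean from a flag or environment variable into the TLS configuration of the outbound HTTP client. A minimal sketch, assuming a standalone program with the -insecure-skip-verify flag and a placeholder URL (the BFF's actual constructor and wiring differ):

```go
package main

import (
	"crypto/tls"
	"flag"
	"fmt"
	"net/http"
	"time"
)

// newMRClient builds an HTTP client whose TLS verification can be disabled for
// local/dev setups. Hypothetical helper; the BFF's real constructor differs.
func newMRClient(insecureSkipVerify bool) *http.Client {
	return &http.Client{
		Timeout: 30 * time.Second,
		Transport: &http.Transport{
			// InsecureSkipVerify disables certificate chain and hostname checks.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: insecureSkipVerify},
		},
	}
}

func main() {
	skip := flag.Bool("insecure-skip-verify", false, "skip TLS verification (dev only)")
	flag.Parse()

	client := newMRClient(*skip)
	resp, err := client.Get("https://model-registry.example.local/") // placeholder URL
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```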
sequenceDiagram
  participant GH as GitHub Actions
  participant Script as github-script
  participant Repo as Repo
  participant Runner as ubuntu-latest

  GH->>Runner: workflow_dispatch(pr_number)
  Runner->>Script: Get PR details (sha, ref)
  Runner->>Repo: Checkout PR by SHA
  Runner->>Runner: Setup Python 3.12 + Poetry
  Runner->>Runner: Remove MySQL AppArmor
  Runner->>Runner: make test-fuzz
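
The job's final step runs make test-fuzz. For readers unfamiliar with Go's native fuzzing, this is roughly what a fuzz target looks like; the example below is a generic sketch, not one of the repository's actual fuzz tests:

```go
package example_test

import (
	"strconv"
	"testing"
)

// FuzzParseID is a generic Go fuzz target of the kind that `go test -fuzz`
// (and hence `make test-fuzz`) executes.
func FuzzParseID(f *testing.F) {
	// Seed corpus with a few representative inputs.
	f.Add("1")
	f.Add("-42")
	f.Add("not-a-number")

	f.Fuzz(func(t *testing.T, id string) {
		v, err := strconv.ParseInt(id, 10, 32)
		if err != nil {
			return // invalid input; only valid parses are checked further
		}
		// Property: formatting a parsed value and parsing it again must yield
		// the same value (round-trip stability).
		back, err := strconv.ParseInt(strconv.FormatInt(v, 10), 10, 32)
		if err != nil || back != v {
			t.Errorf("round-trip mismatch for %q: got %d want %d (err=%v)", id, back, v, err)
		}
	})
}
```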

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~90 minutes

Poem

A rabbit taps its merge-approved key,
From MLMD burrows to MySQL sea.
New trails of Experiments run and gleam,
Filters bloom in the UI stream.
TLS ears perk—skip or verify?
Fuzz winds whistle, tests reply.
Hippity hop—release hops high! 🐇✨



openshift-ci bot commented Aug 14, 2025

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: Al-Pragliola

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@Al-Pragliola Al-Pragliola marked this pull request as ready for review August 14, 2025 13:01
@Al-Pragliola (Author)

/ok-to-test

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 42

🔭 Outside diff range comments (14)
.github/workflows/python-tests.yml (1)

229-235: Gate Codecov upload to avoid failing PRs from forks without secrets

This step runs unconditionally for Python 3.12 and sets fail_ci_if_error: true. On forked PRs, secrets.CODECOV_TOKEN isn’t available, causing a hard failure. Gate the step to trusted contexts or the presence of the token.

Apply one of the guards below.

Option A: Skip on forks

-      - name: Upload coverage report # we upload coverage stats for py once, regardless if previous step failed
-        uses: codecov/[email protected]
-        if: always() && matrix.python == '3.12'
+      - name: Upload coverage report # upload once, skip forks without secrets
+        uses: codecov/[email protected]
+        if: always() && matrix.python == '3.12' && !github.event.pull_request.head.repo.fork
         with:
           files: coverage.xml
           fail_ci_if_error: true
           token: ${{ secrets.CODECOV_TOKEN }}

Option B: Conditional on token presence

-      - name: Upload coverage report # we upload coverage stats for py once, regardless if previous step failed
-        uses: codecov/[email protected]
-        if: always() && matrix.python == '3.12'
+      - name: Upload coverage report # upload once, only if token is available
+        uses: codecov/[email protected]
+        if: always() && matrix.python == '3.12' && (secrets.CODECOV_TOKEN != '')
         with:
           files: coverage.xml
           fail_ci_if_error: true
           token: ${{ secrets.CODECOV_TOKEN }}

Note: If you want tokenless uploads via OIDC in the future, add permissions: id-token: write at the job level and remove the token.

clients/ui/frontend/src/__tests__/cypress/cypress/pages/modelRegistryView/modelVersionArchive.ts (1)

96-98: Naming inconsistency — pick “Archived” or “Archive” (action required)

Toolbar helper uses "Archived" (findArchivedVersionTableToolbar) while the search/filter helpers use "Archive". Renaming to "Archived..." is reasonable, but it requires updating test call sites.

Files/locations to update:

    • Definition: clients/ui/frontend/src/__tests__/cypress/cypress/pages/modelRegistryView/modelVersionArchive.ts
    • findArchivedVersionTableToolbar() — lines ~96-98
    • findArchiveVersionTableSearch() — line ~116
    • findArchiveVersionTableFilterOption(name: string) — line ~120
  • Call sites to update: clients/ui/frontend/src/__tests__/cypress/cypress/tests/mocked/modelRegistry/modelVersionArchive.cy.ts — lines: 204, 236, 239, 240, 243, 245, 246, 247, 248, 251, 254, 255, 258, 260, 261, 262, 263, 266, 267, 270, 271, 274.

Suggested rename diff:

-  findArchiveVersionTableSearch() {
+  findArchivedVersionTableSearch() {
     return this.findArchivedVersionTableToolbar().findByTestId('filter-toolbar-text-field');
   }
 
-  findArchiveVersionTableFilterOption(name: string) {
+  findArchivedVersionTableFilterOption(name: string) {
     return this.findArchivedVersionTableToolbar()
       .findByTestId('filter-toolbar-dropdown')
       .findDropdownItem(name);
   }

Please update the listed call sites in modelVersionArchive.cy.ts if you adopt this rename.

clients/ui/bff/cmd/helpers.go (1)

57-59: Bug: default value ignored due to splitting the wrong variable

When s == "" and defaultVal != "", value is set correctly but you split s, which is empty, leading to appending a single empty string and ignoring the default. Split value instead.

Apply this diff:

-    for _, str := range strings.Split(s, ",") {
+    for _, str := range strings.Split(value, ",") {
       *allowList = append(*allowList, strings.TrimSpace(str))
     }
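
For context, the corrected flow in one self-contained sketch; the function name and signature here are hypothetical, and the real helper in cmd/helpers.go differs:

```go
package main

import (
	"fmt"
	"strings"
)

// parseListEnv resolves the value first (falling back to defaultVal) and only
// then splits it, so an empty input no longer discards the default.
// Hypothetical name and signature, for illustration only.
func parseListEnv(s, defaultVal string, allowList *[]string) {
	value := s
	if value == "" {
		value = defaultVal
	}
	if value == "" {
		return // nothing to append
	}
	for _, str := range strings.Split(value, ",") {
		*allowList = append(*allowList, strings.TrimSpace(str))
	}
}

func main() {
	var allowed []string
	parseListEnv("", "localhost, 127.0.0.1", &allowed)
	fmt.Println(allowed) // [localhost 127.0.0.1]
}
```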
clients/ui/bff/internal/integrations/mrserver/http.go (1)

92-92: Missing package-local min function — will break build

Repo-wide search found no func min(...) definition, but clients/ui/bff/internal/integrations/mrserver/http.go calls min(len(...), 200) at multiple places. This will fail to compile.

Locations to fix:

  • clients/ui/bff/internal/integrations/mrserver/http.go:92 — "body_preview", string(body[:min(len(body), 200)]))
  • clients/ui/bff/internal/integrations/mrserver/http.go:149 — "body_preview", string(responseBody[:min(len(responseBody), 200)]))
  • clients/ui/bff/internal/integrations/mrserver/http.go:206 — "body_preview", string(responseBody[:min(len(responseBody), 200)]))

Suggested fixes (choose one):

  • Add a package-local helper in package mrserver, e.g.:

        func min(a, b int) int {
            if a < b {
                return a
            }
            return b
        }

    (place near the top of http.go or another file in the same package)
  • Or replace each call with explicit bounds-checking, e.g.:

        n := len(body); if n > 200 { n = 200 }; "body_preview", string(body[:n])
clients/ui/bff/internal/api/middleware.go (2)

95-99: Missing early return after bad request (handler continues with empty namespace).

If the namespace is missing, the handler still proceeds to create the K8s client and query the model registry, which can cause misleading errors and unnecessary work.

Apply this diff to return early:

   namespace, ok := r.Context().Value(constants.NamespaceHeaderParameterKey).(string)
   if !ok || namespace == "" {
     app.badRequestResponse(w, r, fmt.Errorf("missing namespace in the context"))
+    return
   }

233-237: Incorrect error message and stale ‘ok’ check for serviceName.

  • The condition uses ‘ok’ from a previous namespace lookup, which is unrelated here.
  • The error message mentions “namespace” while checking a missing path parameter.

Apply this diff:

-    serviceName := ps.ByName(ModelRegistryId)
-    if !ok || serviceName == "" {
-      app.badRequestResponse(w, r, fmt.Errorf("missing namespace in context"))
-      return
-    }
+    serviceName := ps.ByName(ModelRegistryId)
+    if serviceName == "" {
+      app.badRequestResponse(w, r, fmt.Errorf("missing path parameter: %s", ModelRegistryId))
+      return
+    }
clients/ui/frontend/src/__tests__/cypress/cypress/tests/mocked/modelRegistry/modelVersionArchive.cy.ts (1)

204-229: Fix label assertions to match the mock data (duplicate and non-existent label).

The test expects a label "Financial" which isn’t present in the mocked customProperties, and "Test label y" is listed twice. This can cause flaky or failing tests. Replace the duplicate with "Test label z" and remove "Financial".

Apply this diff:

-    labelModal.shouldContainsModalLabels([
-      'Financial',
-      'Financial data',
-      'Fraud detection',
-      'Test label',
-      'Machine learning',
-      'Next data to be overflow',
-      'Test label x',
-      'Test label y',
-      'Test label y',
-    ]);
+    labelModal.shouldContainsModalLabels([
+      'Financial data',
+      'Fraud detection',
+      'Test label',
+      'Machine learning',
+      'Next data to be overflow',
+      'Test label x',
+      'Test label y',
+      'Test label z',
+    ]);
catalog/internal/catalog/yaml_catalog.go (1)

153-171: Fix error messages to reference the actual file path

Error messages in load() reference yamlCatalogPath (a key) instead of the path to the file, which hinders debugging.

Apply this diff:

  bytes, err := os.ReadFile(path)
  if err != nil {
-    return fmt.Errorf("failed to read %s file: %v", yamlCatalogPath, err)
+    return fmt.Errorf("failed to read %s file: %v", path, err)
  }

  var contents yamlCatalog
  if err = yaml.UnmarshalStrict(bytes, &contents); err != nil {
-    return fmt.Errorf("failed to parse %s file: %v", yamlCatalogPath, err)
+    return fmt.Errorf("failed to parse %s file: %v", path, err)
  }
clients/ui/frontend/src/__tests__/cypress/cypress/tests/mocked/modelRegistry/modelVersions.cy.ts (1)

194-205: Duplicate expected label entry: likely should assert 'Test label z' instead of a second 'Test label y'.

The modal is expected to include both 'Test label y' and 'Test label z' per the mocked customProperties. Currently, 'Test label y' is listed twice.

Apply this diff to correct the expected labels:

     labelModal.shouldContainsModalLabels([
       'Financial',
       'Financial data',
       'Fraud detection',
       'Test label',
       'Machine learning',
       'Next data to be overflow',
       'Test label x',
-      'Test label y',
-      'Test label y',
+      'Test label y',
+      'Test label z',
     ]);
clients/ui/frontend/src/app/pages/modelRegistry/screens/ModelVersionDetails/ModelVersionSelector.tsx (1)

63-69: Fix “No results found” logic to check the filtered list, not the source list

Currently checks liveModelVersions.length, which won’t show the message when there are versions but none match the filter. Check menuListItems.length instead.

-  if (input.length > 0 && liveModelVersions.length === 0) {
+  if (input.length > 0 && menuListItems.length === 0) {
clients/python/src/mr_openapi/models/artifact.py (2)

113-116: Bug: from_dict mishandles string input (double-encoding), leading to type errors.

Passing a JSON string to from_dict causes json.dumps to double-encode; from_json then receives a JSON string literal and json.loads(...) returns a Python str, making .get(...) fail.

Apply this fix to correctly branch on input type:

 @classmethod
 def from_dict(cls, obj: str | dict[str, Any]) -> Self:
-    return cls.from_json(json.dumps(obj))
+    if isinstance(obj, (str, bytes)):
+        # Already a JSON string (or bytes); pass through
+        return cls.from_json(obj if isinstance(obj, str) else obj.decode())
+    # dict-like; serialize to JSON first
+    return cls.from_json(json.dumps(obj))

69-111: Validator rejects None despite the field allowing None.

actual_instance is typed to allow None, but the validator raises on None (no match). This makes constructing Artifact() or having a transitional None value invalid.

Apply a guard to permit None:

 @field_validator("actual_instance")
 def actual_instance_must_validate_oneof(cls, v):
-    Artifact.model_construct()
+    Artifact.model_construct()
+    if v is None:
+        return v
clients/python/src/mr_openapi/models/artifact_update.py (1)

127-130: Tighten from_dict signature and avoid json.dumps round-trip

from_dict should accept a dict (or None) and avoid treating strings as dicts. The current annotation can cause surprising behavior.

Apply this diff:

-    def from_dict(cls, obj: str | dict[str, Any]) -> Self:
-        return cls.from_json(json.dumps(obj))
+    def from_dict(cls, obj: dict[str, Any] | None) -> Self:
+        if obj is None:
+            raise ValueError("from_dict expected a dict, got None")
+        return cls.from_json(json.dumps(obj))
clients/python/src/mr_openapi/models/artifact_create.py (1)

138-143: from_json incorrectly hard-requires discriminator; breaks valid payloads without artifactType

Raising when artifactType is absent prevents the intended fallback deserialization path. Many generated payloads rely on defaults and don’t always send the discriminator explicitly. This will throw instead of trying the oneOf candidates.

Apply this diff to enable proper fallback when artifactType is missing:

-        if not _data_type:
-            msg = "Failed to lookup data type from the field `artifactType` in the input."
-            raise ValueError(msg)
+        if not _data_type:
+            # No discriminator provided; fall through to attempt deserialization
+            # into each candidate oneOf schema below.
+            _data_type = None
♻️ Duplicate comments (2)
docs/mr_go_library.md (2)

127-137: Same as above: use the enum for artifactType (if available).

Repeat the enum use here for consistency.

-artifactType := "model-artifact"
+artifactType := openapi.ARTIFACTTYPEQUERYPARAM_MODEL_ARTIFACT

145-152: Same as above: use the enum for artifactType (if available).

Keep the examples consistent and type-safe.

-    ArtifactType: &artifactType,
+    ArtifactType: &artifactType,

Note: Only change the definition line (artifactType := …) as shown in previous comments.

Comment on lines +26 to +28
permissions:
contents: read
packages: write

🛠️ Refactor suggestion

Tighten permissions: drop packages: write (pushing to Quay doesn’t need it).

This job authenticates to quay.io using username/password secrets and does not interact with GH Packages. Keeping packages: write unnecessarily broadens token scope.

Apply:

     runs-on: ubuntu-latest
-    permissions:
-      contents: read
-      packages: write
+    permissions:
+      contents: read

If you plan to publish to GHCR later, reintroduce packages: write only for that step/job.

🤖 Prompt for AI Agents
In .github/workflows/build-and-push-csi-image.yml around lines 26 to 28, the
workflow currently grants packages: write under permissions which is unnecessary
for pushing to quay.io; remove the packages: write permission so the permissions
block only includes contents: read (or otherwise minimal scopes required), and
if in future you need to publish to GHCR reintroduce packages: write only for
the specific job/step that needs it rather than globally.

Comment on lines +25 to +27
permissions:
contents: read
packages: write

🛠️ Refactor suggestion

Reduce scope: packages: write likely unnecessary (publishing to Quay only).

This workflow uses QUAY credentials and does not interact with GitHub Packages. Consider removing packages: write to adhere to least-privilege.

Apply:

   build-image:
-    permissions:
-      contents: read
-      packages: write
+    permissions:
+      contents: read

If GHCR publishing is introduced later, add packages: write only where needed.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

-    permissions:
-      contents: read
-      packages: write
+    permissions:
+      contents: read
🤖 Prompt for AI Agents
In .github/workflows/build-and-push-image.yml around lines 25 to 27, the
workflow grants packages: write permission even though the job pushes images to
Quay and doesn't use GitHub Packages; remove the packages: write permission to
follow least-privilege (leave contents: read) and only re-add packages: write in
the specific workflow/job if/when publishing to GHCR or other GitHub Packages is
required.

Comment on lines +10 to +11
permissions:
contents: read

🛠️ Refactor suggestion

Grant pull-requests: read to allow pulls.get on forks.

The workflow calls pulls.get via Octokit; with the top-level permissions tightened to contents: read only, GitHub may deny PR API reads. Add pull-requests: read to avoid permission errors.

 permissions:
-  contents: read
+  contents: read
+  pull-requests: read
📝 Committable suggestion


Suggested change

 permissions:
   contents: read
+  pull-requests: read
🤖 Prompt for AI Agents
In .github/workflows/test-fuzz.yml around lines 10 to 11, the workflow only
grants contents: read which prevents Octokit calls to pulls.get on forked PRs;
add pull-requests: read to the permissions block so the workflow can perform
pulls.get (e.g., include "pull-requests: read" alongside contents: read) to
avoid permission-denied errors when reading PR data.

Comment on lines +27 to +45
- name: Get PR details
id: pr
uses: actions/github-script@v7
with:
script: |
try {
const pr = await github.rest.pulls.get({
owner: context.repo.owner,
repo: context.repo.repo,
pull_number: ${{ github.event.inputs.pr_number }}
});
return {
sha: pr.data.head.sha,
ref: pr.data.head.ref
};
} catch (error) {
console.log(`Error fetching PR #${{ github.event.inputs.pr_number }}: ${error.message}`);
throw error;
}

⚠️ Potential issue

Checkout will fail for forked PRs — pass head repository to actions/checkout.

You’re fetching the PR’s head SHA but still checking out from the base repo (default). For PRs from forks, that SHA does not exist in the base repo, and checkout will fail. Return the head repo full name and use it in actions/checkout.

       - name: Get PR details
         id: pr
         uses: actions/github-script@v7
         with:
           script: |
             try {
               const pr = await github.rest.pulls.get({
                 owner: context.repo.owner,
                 repo: context.repo.repo,
                 pull_number: ${{ github.event.inputs.pr_number }}
               });
-              return {
-                sha: pr.data.head.sha,
-                ref: pr.data.head.ref
-              };
+              return {
+                sha: pr.data.head.sha,
+                ref: pr.data.head.ref,
+                headRepo: pr.data.head.repo.full_name
+              };
             } catch (error) {
               console.log(`Error fetching PR #${{ github.event.inputs.pr_number }}: ${error.message}`);
               throw error;
             }

       - name: Checkout PR
         uses: actions/checkout@v4
         with:
+          repository: ${{ fromJson(steps.pr.outputs.result).headRepo }}
           ref: ${{ fromJson(steps.pr.outputs.result).sha }}

Also applies to: 47-51

Comment on lines +61 to +65
- name: Remove AppArmor profile for mysql in KinD on GHA # https://github.com/kubeflow/manifests/issues/2507
run: |
set -x
sudo apparmor_parser -R /etc/apparmor.d/usr.sbin.mysqld

🛠️ Refactor suggestion

Make AppArmor removal idempotent to avoid runner variability.

On some runner images the MySQL AppArmor profile may not exist. Guard the command to avoid hard failures.

       - name: Remove AppArmor profile for mysql in KinD on GHA # https://github.com/kubeflow/manifests/issues/2507
         run: |
           set -x
-          sudo apparmor_parser -R /etc/apparmor.d/usr.sbin.mysqld
+          if [ -f /etc/apparmor.d/usr.sbin.mysqld ]; then
+            sudo apparmor_parser -R /etc/apparmor.d/usr.sbin.mysqld
+          else
+            echo "AppArmor profile for mysqld not present; skipping removal."
+          fi
📝 Committable suggestion


Suggested change

       - name: Remove AppArmor profile for mysql in KinD on GHA # https://github.com/kubeflow/manifests/issues/2507
         run: |
           set -x
-          sudo apparmor_parser -R /etc/apparmor.d/usr.sbin.mysqld
+          if [ -f /etc/apparmor.d/usr.sbin.mysqld ]; then
+            sudo apparmor_parser -R /etc/apparmor.d/usr.sbin.mysqld
+          else
+            echo "AppArmor profile for mysqld not present; skipping removal."
+          fi

Comment on lines +174 to 178
allVersions, _, err := service.FindModelVersion(ctx).ParentResourceId(*regModelFound.Id).Execute()
if err != nil {
return fmt.Errorf("error retrieving model versions for model %s: %v", *registeredModel.Id, err)
return nil, fmt.Errorf("unable to find model version: %w", err)
}
```

⚠️ Potential issue

Fix variable name: use registeredModel from the previous snippet.

regModelFound isn’t defined in the doc; use the result from the “Get a RegisteredModel by name” example.

-allVersions, _, err := service.FindModelVersion(ctx).ParentResourceId(*regModelFound.Id).Execute()
+allVersions, _, err := service.FindModelVersion(ctx).ParentResourceId(*registeredModel.Id).Execute()
📝 Committable suggestion


Suggested change

-allVersions, _, err := service.FindModelVersion(ctx).ParentResourceId(*regModelFound.Id).Execute()
+allVersions, _, err := service.FindModelVersion(ctx).ParentResourceId(*registeredModel.Id).Execute()
 if err != nil {
     return nil, fmt.Errorf("unable to find model version: %w", err)
 }
🤖 Prompt for AI Agents
In docs/mr_go_library.md around lines 174 to 178, the snippet references an
undefined variable regModelFound; replace it with the registeredModel variable
returned by the earlier "Get a RegisteredModel by name" example so the call
becomes ParentResourceId(*registeredModel.Id) (and ensure the surrounding code
uses the same registeredModel variable name from the previous snippet).


require (
github.com/Khan/genqlient v0.8.1
github.com/alecthomas/participle/v2 v2.1.4

💡 Verification agent

🧩 Analysis chain

Confirm gRPC is no longer directly imported; align CI Go versions with go.mod

  • grpc moved to indirect; if any code still imports google.golang.org/grpc directly, keep it as a direct require to avoid accidental pruning. If unused, consider removing it entirely after tidy.
  • go directive is 1.24.4 while some workflows may use older go versions. Ensure CI uses 1.24.x to match toolchain expectations.

Run this script to verify both points:

Also applies to: 28-28, 149-149, 208-208


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "==> go.mod 'go' directive:"
awk '/^go [0-9]+\.[0-9]+/ {print; exit}' go.mod

echo
echo "==> Workflows go-version values:"
rg -n --glob ".github/workflows/**" 'go-version:\s*.*' -A 1 || true

echo
echo "==> Direct imports of google.golang.org/grpc in repo:"
rg -n $'^import \\(\n([\\s\\S]*?)"google\\.golang\\.org/grpc"[\\s\\S]*?\\)\n|^import\\s+"google\\.golang\\.org/grpc"' --multiline --glob '!**/vendor/**' || echo "No direct grpc imports found."

echo
echo "==> Suggestion:"
echo "If any direct grpc imports are found above, keep google.golang.org/grpc as a direct requirement."

Length of output: 1036


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "==> go.mod (first 200 lines):"
sed -n '1,200p' go.mod || true

echo
echo "==> go directive in go.mod:"
rg -n --no-line-number '^go [0-9]+\.[0-9]+' go.mod || true

echo
echo "==> Does go.mod mention google.golang.org/grpc?"
rg -n 'google\.golang\.org/grpc' go.mod || echo "No mention in go.mod"

echo
echo "==> Any reference to google.golang.org/grpc in repo:"
rg -n --hidden -S 'google\.golang\.org/grpc' || echo "No references found"

echo
echo "==> .github/workflows contents (if present):"
if [ -d .github/workflows ]; then
  ls -la .github/workflows || true
  echo
  echo "==> Search workflows for go-version or setup-go usage:"
  rg -n 'go-version|setup-go' .github/workflows || echo "No matches in workflows"
else
  echo ".github/workflows does not exist"
fi

Length of output: 15567


Confirm grpc is indirect; align CI Go versions to go 1.24.4

Verified:

  • go.mod has "go 1.24.4".
  • google.golang.org/grpc is listed as indirect (go.mod:149) and no direct imports were found in the repo — safe to leave indirect. If you do import grpc directly, add it as a direct require; otherwise run go mod tidy to prune unused entries.

Files needing attention (update to 1.24.4 or pin consistently):

  • .github/workflows/controller-test.yml — currently: go-version: '~1.22' → update to "1.24.4"
  • .github/workflows/ui-bff-build.yml — currently: go-version: "1.24.3" → update to "1.24.4"
  • .github/workflows/prepare.yml — currently: go-version: "1.24" → consider pinning to "1.24.4"
  • .github/workflows/build.yml — currently: go-version: "1.24" → consider pinning to "1.24.4"
  • .github/workflows/check-db-schema-structs.yaml — already uses "1.24.4" (OK)

Suggested minimal diffs:

Update controller-test.yml

-          go-version: '~1.22'
+          go-version: '1.24.4'

Update ui-bff-build.yml

-          go-version: "1.24.3"
+          go-version: "1.24.4"

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In go.mod around line 7, confirm that google.golang.org/grpc remains an indirect
requirement (no direct imports) and run `go mod tidy` to prune unused entries if
you don't import grpc directly; if you add direct imports, convert it to a
direct require. Then update CI workflow files to consistently pin Go to 1.24.4:
change .github/workflows/controller-test.yml go-version from '~1.22' to
'1.24.4', change .github/workflows/ui-bff-build.yml from '1.24.3' to '1.24.4',
and update .github/workflows/prepare.yml and .github/workflows/build.yml from
'1.24' to '1.24.4' (check check-db-schema-structs.yaml already OK).

Comment on lines +30 to +38
// ValidateIDAsInt32 validates and converts a string ID to int32
// Returns an error with api.ErrBadRequest if the ID is invalid
func ValidateIDAsInt32(id string, entityName string) (int32, error) {
    convertedId, err := strconv.ParseInt(id, 10, 32)
    if err != nil {
        return 0, fmt.Errorf("invalid %s ID: %v: %w", entityName, err, api.ErrBadRequest)
    }
    return int32(convertedId), nil
}

💡 Verification agent

❓ Verification inconclusive

Enforce positive IDs and tighten error message; fix initialism in var name

  • Validate that IDs are positive (> 0) to avoid accepting -1/0 as valid IDs unless negatives are explicitly supported.
  • Prefer including the offending ID in the error and putting the sentinel first for readability.
  • Follow Go initialism conventions (ID, not Id).

Proposed diff:

-func ValidateIDAsInt32(id string, entityName string) (int32, error) {
-	convertedId, err := strconv.ParseInt(id, 10, 32)
-	if err != nil {
-		return 0, fmt.Errorf("invalid %s ID: %v: %w", entityName, err, api.ErrBadRequest)
-	}
-	return int32(convertedId), nil
-}
+func ValidateIDAsInt32(id string, entityName string) (int32, error) {
+	convertedID, err := strconv.ParseInt(id, 10, 32)
+	if err != nil {
+		return 0, fmt.Errorf("%w: invalid %s ID %q: %v", api.ErrBadRequest, entityName, id, err)
+	}
+	if convertedID <= 0 {
+		return 0, fmt.Errorf("%w: invalid %s ID %q: must be a positive integer", api.ErrBadRequest, entityName, id)
+	}
+	return int32(convertedID), nil
+}

Enforce positive IDs, improve error messages, and fix initialism (ValidateIDAsInt32)

Reject zero/negative IDs, include the offending ID in errors, put the sentinel first, and use "ID" (not "Id").

  • File: internal/apiutils/api_utils.go — function ValidateIDAsInt32 (around lines 30–38)

Proposed diff:

-func ValidateIDAsInt32(id string, entityName string) (int32, error) {
-	convertedId, err := strconv.ParseInt(id, 10, 32)
-	if err != nil {
-		return 0, fmt.Errorf("invalid %s ID: %v: %w", entityName, err, api.ErrBadRequest)
-	}
-	return int32(convertedId), nil
-}
+func ValidateIDAsInt32(id string, entityName string) (int32, error) {
+	convertedID, err := strconv.ParseInt(id, 10, 32)
+	if err != nil {
+		return 0, fmt.Errorf("%w: invalid %s ID %q: %v", api.ErrBadRequest, entityName, id, err)
+	}
+	if convertedID <= 0 {
+		return 0, fmt.Errorf("%w: invalid %s ID %q: must be a positive integer", api.ErrBadRequest, entityName, id)
+	}
+	return int32(convertedID), nil
+}
📝 Committable suggestion


Suggested change

-// ValidateIDAsInt32 validates and converts a string ID to int32
-// Returns an error with api.ErrBadRequest if the ID is invalid
-func ValidateIDAsInt32(id string, entityName string) (int32, error) {
-    convertedId, err := strconv.ParseInt(id, 10, 32)
-    if err != nil {
-        return 0, fmt.Errorf("invalid %s ID: %v: %w", entityName, err, api.ErrBadRequest)
-    }
-    return int32(convertedId), nil
-}
+// ValidateIDAsInt32 validates and converts a string ID to int32
+// Returns an error with api.ErrBadRequest if the ID is invalid
+func ValidateIDAsInt32(id string, entityName string) (int32, error) {
+    convertedID, err := strconv.ParseInt(id, 10, 32)
+    if err != nil {
+        return 0, fmt.Errorf("%w: invalid %s ID %q: %v", api.ErrBadRequest, entityName, id, err)
+    }
+    if convertedID <= 0 {
+        return 0, fmt.Errorf("%w: invalid %s ID %q: must be a positive integer", api.ErrBadRequest, entityName, id)
+    }
+    return int32(convertedID), nil
+}

Comment on lines +54 to +62
// ValidateIDAsInt64 validates and converts a string ID to int64
// Returns an error with api.ErrBadRequest if the ID is invalid
func ValidateIDAsInt64(id string, entityName string) (int64, error) {
convertedId, err := strconv.ParseInt(id, 10, 64)
if err != nil {
return 0, fmt.Errorf("invalid %s ID: %v: %w", entityName, err, api.ErrBadRequest)
}
return convertedId, nil
}

💡 Verification agent

🧩 Analysis chain

Mirror int32 changes: enforce positive IDs; improve error message; fix var name

Same rationale as for the int32 validator.

-func ValidateIDAsInt64(id string, entityName string) (int64, error) {
-	convertedId, err := strconv.ParseInt(id, 10, 64)
-	if err != nil {
-		return 0, fmt.Errorf("invalid %s ID: %v: %w", entityName, err, api.ErrBadRequest)
-	}
-	return convertedId, nil
-}
+func ValidateIDAsInt64(id string, entityName string) (int64, error) {
+	convertedID, err := strconv.ParseInt(id, 10, 64)
+	if err != nil {
+		return 0, fmt.Errorf("%w: invalid %s ID %q: %v", api.ErrBadRequest, entityName, id, err)
+	}
+	if convertedID <= 0 {
+		return 0, fmt.Errorf("%w: invalid %s ID %q: must be a positive integer", api.ErrBadRequest, entityName, id)
+	}
+	return convertedID, nil
+}

🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "1) Searching for ValidateIDAsInt64 / ValidateIDAsInt32 definitions and usages..."
rg --hidden --no-ignore -n "ValidateIDAsInt64|ValidateIDAsInt32" || true

echo
echo "2) Showing internal/apiutils/api_utils.go (if present) with line numbers..."
if [ -f internal/apiutils/api_utils.go ]; then
  nl -ba internal/apiutils/api_utils.go | sed -n '1,240p'
else
  echo "internal/apiutils/api_utils.go not found"
fi

echo
echo "3) If ValidateIDAsInt32 is defined elsewhere, show its file(s)..."
rg -n "func ValidateIDAsInt32" || true
rg -n "func ValidateIDAsInt64" || true

# Show surrounding context for any matches to ValidateIDAsInt32
for f in $(rg -l "func ValidateIDAsInt32" || true); do
  echo
  echo "---- $f ----"
  nl -ba "$f" | sed -n '1,240p'
done

for f in $(rg -l "func ValidateIDAsInt64" || true); do
  echo
  echo "---- $f ----"
  nl -ba "$f" | sed -n '1,240p'
done

echo
echo "4) Search for tests or usages that expect non-positive IDs to be handled specially..."
rg -n "invalid .* ID|must be a positive|ValidateIDAsInt" || true

Length of output: 4467


🏁 Script executed:

#!/bin/bash
set -euo pipefail

FILE="internal/apiutils/api_utils.go"

if [ ! -f "$FILE" ]; then
  echo "$FILE not found"
  exit 0
fi

echo "---- $FILE (with line numbers) ----"
awk '{printf "%4d: %s\n", NR, $0}' "$FILE"

echo
echo "---- grep for ValidateID funcs ----"
rg -n "func ValidateIDAs(Int32|Int64|Int32Ptr|Int64Ptr)" "$FILE" || true

Length of output: 3377


Enforce positive IDs, fix var name & improve error messages in ValidateIDAsInt64 (make int32 consistent too)

Both validators currently use convertedId and don't reject non‑positive values. Please update these functions for clearer error messages (wrap api.ErrBadRequest), include the original input in the message, and reject non‑positive IDs.

Files to change:

  • internal/apiutils/api_utils.go: ValidateIDAsInt32 — lines ~32–37
  • internal/apiutils/api_utils.go: ValidateIDAsInt64 — lines ~56–62
  • (also review pointer variants) internal/apiutils/api_utils.go: ValidateIDAsInt32Ptr (42–51) and ValidateIDAsInt64Ptr (66–74)

Suggested diffs:

-func ValidateIDAsInt64(id string, entityName string) (int64, error) {
-	convertedId, err := strconv.ParseInt(id, 10, 64)
-	if err != nil {
-		return 0, fmt.Errorf("invalid %s ID: %v: %w", entityName, err, api.ErrBadRequest)
-	}
-	return convertedId, nil
-}
+func ValidateIDAsInt64(id string, entityName string) (int64, error) {
+	convertedID, err := strconv.ParseInt(id, 10, 64)
+	if err != nil {
+		return 0, fmt.Errorf("%w: invalid %s ID %q: %v", api.ErrBadRequest, entityName, id, err)
+	}
+	if convertedID <= 0 {
+		return 0, fmt.Errorf("%w: invalid %s ID %q: must be a positive integer", api.ErrBadRequest, entityName, id)
+	}
+	return convertedID, nil
+}

(and mirror the same changes for ValidateIDAsInt32:)

-func ValidateIDAsInt32(id string, entityName string) (int32, error) {
-	convertedId, err := strconv.ParseInt(id, 10, 32)
-	if err != nil {
-		return 0, fmt.Errorf("invalid %s ID: %v: %w", entityName, err, api.ErrBadRequest)
-	}
-	return int32(convertedId), nil
-}
+func ValidateIDAsInt32(id string, entityName string) (int32, error) {
+	convertedID, err := strconv.ParseInt(id, 10, 32)
+	if err != nil {
+		return 0, fmt.Errorf("%w: invalid %s ID %q: %v", api.ErrBadRequest, entityName, id, err)
+	}
+	if convertedID <= 0 {
+		return 0, fmt.Errorf("%w: invalid %s ID %q: must be a positive integer", api.ErrBadRequest, entityName, id)
+	}
+	return int32(convertedID), nil
+}
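
To make the proposed contract explicit, a table-driven unit test could pin down both failure modes. The sketch below assumes the suggested diff is applied, that the package is apiutils, and that api.ErrBadRequest is the sentinel used elsewhere in the project; the import path and test cases are illustrative, not taken from this PR:

package apiutils

import (
	"errors"
	"testing"

	"github.com/kubeflow/model-registry/pkg/api" // assumed import path for the api.ErrBadRequest sentinel
)

// Sketch only: exercises the proposed ValidateIDAsInt64 behavior.
func TestValidateIDAsInt64_RejectsBadInput(t *testing.T) {
	cases := []struct {
		name    string
		id      string
		want    int64
		wantErr bool
	}{
		{"valid", "42", 42, false},
		{"zero rejected", "0", 0, true},
		{"negative rejected", "-5", 0, true},
		{"non-numeric rejected", "abc", 0, true},
	}
	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			got, err := ValidateIDAsInt64(tc.id, "experiment")
			if tc.wantErr {
				// The %w wrapping is what lets callers detect the sentinel here.
				if !errors.Is(err, api.ErrBadRequest) {
					t.Fatalf("expected api.ErrBadRequest, got %v", err)
				}
				return
			}
			if err != nil || got != tc.want {
				t.Fatalf("got (%d, %v), want (%d, nil)", got, err, tc.want)
			}
		})
	}
}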
🤖 Prompt for AI Agents
internal/apiutils/api_utils.go lines ~32–37, ~42–51, ~56–62, ~66–74: Update
ValidateIDAsInt32, ValidateIDAsInt32Ptr, ValidateIDAsInt64, and
ValidateIDAsInt64Ptr to reject non‑positive IDs, fix variable naming, and
improve error messages: parse the input into a local variable named convertedID,
if ParseInt fails return fmt.Errorf("invalid %s ID %q: %w", entityName, id,
api.ErrBadRequest) (wrapping api.ErrBadRequest), and after parsing check if
convertedID <= 0 then return fmt.Errorf("invalid %s ID %q: must be a positive
integer: %w", entityName, id, api.ErrBadRequest); ensure pointer variants behave
the same and return pointers/values consistently.
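
As a side note on why the sentinel wrapping is worth keeping: the HTTP layer can branch on it with errors.Is and translate validation failures into 400 responses. The snippet below is a self-contained toy, not the project's actual handler — ErrBadRequest and validateID stand in for api.ErrBadRequest and the real validators:

package main

import (
	"errors"
	"fmt"
	"net/http"
	"strconv"
)

// ErrBadRequest stands in for api.ErrBadRequest from the model-registry codebase.
var ErrBadRequest = errors.New("bad request")

// validateID mirrors the proposed validator shape: parse, reject non-positive, wrap the sentinel.
func validateID(id string) (int64, error) {
	v, err := strconv.ParseInt(id, 10, 64)
	if err != nil {
		return 0, fmt.Errorf("%w: invalid experiment ID %q: %v", ErrBadRequest, id, err)
	}
	if v <= 0 {
		return 0, fmt.Errorf("%w: invalid experiment ID %q: must be a positive integer", ErrBadRequest, id)
	}
	return v, nil
}

// getExperiment is a hypothetical handler showing the error-to-status mapping.
func getExperiment(w http.ResponseWriter, r *http.Request) {
	if _, err := validateID(r.URL.Query().Get("id")); err != nil {
		if errors.Is(err, ErrBadRequest) {
			http.Error(w, err.Error(), http.StatusBadRequest) // validation failure -> 400
		} else {
			http.Error(w, err.Error(), http.StatusInternalServerError) // anything else -> 500
		}
		return
	}
	fmt.Fprintln(w, "ok")
}

func main() {
	http.HandleFunc("/experiments", getExperiment)
	_ = http.ListenAndServe(":8080", nil)
}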

Comment on lines +144 to +147
@echo "Running go vet on all packages..."
@${GO} vet $$(${GO} list ./... | grep -vF github.com/kubeflow/model-registry/internal/db/filter) && \
echo "Checking filter package (parser.go excluded due to participle struct tags)..." && \
cd internal/db/filter && ${GO} build -o /dev/null . 2>&1 | grep -E "vet:|error:" || echo "✓ Filter package builds successfully"

⚠️ Potential issue

Critical: vet target masks build failures due to pipeline exit code

In go build | grep ... || echo ..., the recipe's exit status comes from grep (or from the trailing echo when grep finds nothing), never from go build itself, so compile errors are silently ignored and CI can pass despite a broken package. Remove the pipe or enforce pipefail.

Minimal safe fix:

 .PHONY: vet
 vet:
-	@echo "Running go vet on all packages..."
-	@${GO} vet $$(${GO} list ./... | grep -vF github.com/kubeflow/model-registry/internal/db/filter) && \
-	echo "Checking filter package (parser.go excluded due to participle struct tags)..." && \
-	cd internal/db/filter && ${GO} build -o /dev/null . 2>&1 | grep -E "vet:|error:" || echo "✓ Filter package builds successfully"
+	@echo "Running go vet on all packages..."
+	@${GO} vet $$(${GO} list ./... | grep -vF github.com/kubeflow/model-registry/internal/db/filter)
+	@echo "Checking filter package (parser.go excluded due to participle struct tags)..."
+	@cd internal/db/filter && ${GO} build -o /dev/null .

Alternatively, if you want to keep the grep, enforce pipefail:

-	cd internal/db/filter && ${GO} build -o /dev/null . 2>&1 | grep -E "vet:|error:" || echo "✓ Filter package builds successfully"
+	bash -o pipefail -c 'cd internal/db/filter && ${GO} build -o /dev/null . 2>&1 | grep -E "vet:|error:" || echo "✓ Filter package builds successfully"'
🤖 Prompt for AI Agents
Makefile lines 144-147: the current command pipes `go build` into `grep` so the
Make recipe returns grep's exit status and can mask build failures; fix by
either (A) running `go build -o /dev/null .` directly and checking its exit
status before any grep, or (B) ensure the shell uses pipefail (e.g., run the
whole pipeline under `bash -o pipefail -c "go build ... 2>&1 | grep -E
'vet:|error:' || echo '✓ Filter package builds successfully'"`) so a failing `go
build` causes the recipe to fail; update the Makefile to use one of these
approaches.

@coderabbitai coderabbitai bot left a comment


Review continued from previous batch...

Comment on lines +95 to +97
"items": (
[Experiment.from_dict(_item) for _item in obj["items"]] if obj.get("items") is not None else None
),

🛠️ Refactor suggestion

Make items default to [] in from_dict to satisfy non-optional field

items is typed as list[Experiment] (non-Optional). Passing None will fail validation if the field is missing in responses.

-                "items": (
-                    [Experiment.from_dict(_item) for _item in obj["items"]] if obj.get("items") is not None else None
-                ),
+                "items": (
+                    [Experiment.from_dict(_item) for _item in obj.get("items", [])]
+                    if obj.get("items") is not None
+                    else []
+                ),
🤖 Prompt for AI Agents
In clients/python/src/mr_openapi/models/experiment_list.py around lines 95 to
97, the from_dict implementation sets "items" to None when obj.get("items") is
missing which violates the non-optional list[Experiment] type; change the logic
to default to an empty list (e.g., use obj.get("items", []) or check for None
and set []), then map each element with Experiment.from_dict so "items" is
always a list (possibly empty) rather than None.

Comment on lines +242 to 266
experiment_id, experiment_run_id = create_experiment_and_run(auth_headers)

endpoint = f"{REGISTRY_URL}/api/model_registry/v1alpha3/experiment_runs/{experiment_run_id}/artifacts"

# Build payload using helper function
payload = build_artifact_payload(
artifact_type=artifact_type,
uri_prefix=uri_prefix,
state=state,
name=f"my-test-experiment-artifact-post-{secrets.randbelow(1000000)}",
description="A test experiment artifact created via direct POST test."
)

# Make the API request
response = requests.post(endpoint, headers=auth_headers, json=payload, timeout=DEFAULT_API_TIMEOUT)

# Validate response and get artifact ID
artifact_id = validate_artifact_response(response, payload)

# Cleanup artifacts
cleanup_artifacts(artifact_id)

# Cleanup experiment and run
cleanup_experiment_and_run(auth_headers, experiment_id, experiment_run_id)


🛠️ Refactor suggestion

Ensure experiment/run cleanup with try/finally to avoid leaked resources on assertion failures

If any assertion between POST and cleanup raises, experiments/runs won’t be cleaned up. Wrap with try/finally to guarantee cleanup.

Apply this diff:

-    # Create experiment and experiment run using helper
-    experiment_id, experiment_run_id = create_experiment_and_run(auth_headers)
-
-    endpoint = f"{REGISTRY_URL}/api/model_registry/v1alpha3/experiment_runs/{experiment_run_id}/artifacts"
-
-    # Build payload using helper function
-    payload = build_artifact_payload(
-        artifact_type=artifact_type,
-        uri_prefix=uri_prefix,
-        state=state,
-        name=f"my-test-experiment-artifact-post-{secrets.randbelow(1000000)}",
-        description="A test experiment artifact created via direct POST test."
-    )
-
-    # Make the API request
-    response = requests.post(endpoint, headers=auth_headers, json=payload, timeout=DEFAULT_API_TIMEOUT)
-
-    # Validate response and get artifact ID
-    artifact_id = validate_artifact_response(response, payload)
-
-    # Cleanup artifacts
-    cleanup_artifacts(artifact_id)
-
-    # Cleanup experiment and run
-    cleanup_experiment_and_run(auth_headers, experiment_id, experiment_run_id)
+    # Create experiment and experiment run using helper
+    experiment_id, experiment_run_id = create_experiment_and_run(auth_headers)
+    try:
+        endpoint = f"{REGISTRY_URL}/api/model_registry/v1alpha3/experiment_runs/{experiment_run_id}/artifacts"
+
+        # Build payload using helper function
+        payload = build_artifact_payload(
+            artifact_type=artifact_type,
+            uri_prefix=uri_prefix,
+            state=state,
+            name=f"my-test-experiment-artifact-post-{secrets.randbelow(1000000)}",
+            description="A test experiment artifact created via direct POST test."
+        )
+
+        # Make the API request
+        response = requests.post(endpoint, headers=auth_headers, json=payload, timeout=DEFAULT_API_TIMEOUT)
+
+        # Validate response and get artifact ID
+        artifact_id = validate_artifact_response(response, payload)
+
+        # Cleanup artifacts
+        cleanup_artifacts(artifact_id)
+    finally:
+        # Always cleanup experiment and run
+        cleanup_experiment_and_run(auth_headers, experiment_id, experiment_run_id)
🤖 Prompt for AI Agents
In clients/python/tests/test_api.py around lines 242 to 266, the test creates an
experiment and run, posts an artifact and then performs cleanup unguarded, which
can leak experiments/runs if assertions fail; wrap the POST/validation in a
try/finally: initialize artifact_id = None before the try, perform the POST and
set artifact_id after validate inside the try, and in the finally call
cleanup_artifacts(artifact_id) only if artifact_id is not None and then always
call cleanup_experiment_and_run(auth_headers, experiment_id, experiment_run_id)
so both artifact and experiment/run cleanup are guaranteed even on assertion
failures.

@openshift-merge-bot openshift-merge-bot bot merged commit d4317a9 into opendatahub-io:main Aug 14, 2025
27 of 29 checks passed
@Al-Pragliola Al-Pragliola deleted the al-pragliola-20250814-sync branch August 14, 2025 14:03
npanpaliya pushed a commit to odh-on-pz/model-registry that referenced this pull request Dec 22, 2025
…ontend (opendatahub-io#303)

Bumps [ts-jest](https://github.com/kulshekhar/ts-jest) from 29.2.4 to 29.2.5.
- [Release notes](https://github.com/kulshekhar/ts-jest/releases)
- [Changelog](https://github.com/kulshekhar/ts-jest/blob/main/CHANGELOG.md)
- [Commits](kulshekhar/ts-jest@v29.2.4...v29.2.5)

---
updated-dependencies:
- dependency-name: ts-jest
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>