fix: expose the actual model name instead of the resource name #143
Conversation
Walkthrough

Prefer `spec.model.name` as the model ID for KServe InferenceService objects, falling back to `metadata.name` when missing; log read errors and include namespace/name in URL-retrieval error messages. Add test coverage and fixture support for optionally specifying `spec.model.name` via functional options and a local `strptr` test helper.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant Client as Caller
    participant Handler as ModelsHandler
    participant KS as KServe API
    participant Conv as toModels()
    Client->>Handler: GET /models
    Handler->>KS: List InferenceServices
    KS-->>Handler: Items[]
    Handler->>Conv: Convert Items to []Model
    rect rgba(220,235,255,0.25)
        note right of Conv: For each InferenceService item
        Conv->>Conv: Read spec.model.name (unstructured)
        alt spec.model.name present & non-empty
            Conv->>Conv: modelID = spec.model.name
        else spec.model.name missing/empty or read error
            Conv->>Conv: Log read issue (if any)<br/>modelID = metadata.name
        end
        Conv->>Conv: Derive URL from status/annotations
        alt URL retrieval fails
            Conv->>Conv: Log error including namespace/name
        end
    end
    Conv-->>Handler: Append Model{ID: modelID, URL, Ready}
    Handler-->>Client: 200 OK, []Model
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Pre-merge checks: ❌ Failed checks (1 warning) · ✅ Passed checks (2 passed)
Actionable comments posted: 1
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- maas-api/internal/handlers/models_test.go (4 hunks)
- maas-api/internal/models/kserve.go (1 hunk)
🧰 Additional context used
🧬 Code graph analysis (2)
maas-api/internal/models/kserve.go (1)
maas-api/internal/models/types.go (1)
`Model` (14-18)
maas-api/internal/handlers/models_test.go (2)
maas-api/test/fixtures/const.go (1)
`TestNamespace` (4-4)

maas-api/internal/models/types.go (1)

`Model` (14-18)
🪛 GitHub Actions: MaaS API
maas-api/internal/handlers/models_test.go
[error] 81-81: Go test failed: cannot use unsetSpecModelNameISVC (variable of type unstructured.Unstructured) as "k8s.io/apimachinery/pkg/runtime.Object" value in argument to append: unstructured.Unstructured does not implement runtime.Object (DeepCopyObject has a pointer receiver).
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Validate PR Title
🔇 Additional comments (5)
maas-api/internal/models/kserve.go (3)
81-83: LGTM! Enhanced error logging. The addition of namespace, name, and kind to the error log message improves debuggability when URL retrieval fails.
85-94: LGTM! Model ID derivation logic is correct. The implementation properly:

- Defaults to `metadata.name` when `spec.model.name` is absent
- Overrides with `spec.model.name` when present
- Logs errors without disrupting the flow
- Uses appropriate error context (kind, namespace, name)

This aligns with the PR objective to expose the actual model name.
98-98: LGTM! Correctly uses the derived model ID. The change from `item.GetName()` to `modelID` ensures the `Model.ID` field reflects either `spec.model.name` (when present) or `metadata.name` (as fallback).

maas-api/internal/handlers/models_test.go (2)
8-8: LGTM! Required imports added. The `time` and `unstructured` imports are necessary for constructing the test object with `spec.model.name` unset.

Also applies to: 17-17
134-146: LGTM! Test expectations correctly validate the fallback behavior. The test case properly verifies that when `spec.model.name` is absent, the model ID defaults to `metadata.name` (`unsetSpecModelName`). The expected model structure and assertions are correct.
```go
// Add a model where .spec.model.name is unset; should default to metadata.name
unsetSpecModelName := "unset-spec-model-name"
unsetSpecModelNameNamespace := fixtures.TestNamespace

unsetSpecModelNameISVC := unstructured.Unstructured{
	Object: map[string]any{
		"apiVersion": "serving.kserve.io/v1beta1",
		"kind":       "InferenceService",
		"metadata": map[string]any{
			"name":              unsetSpecModelName,
			"namespace":         unsetSpecModelNameNamespace,
			"creationTimestamp": time.Now().UTC().Format(time.RFC3339),
		},
		"spec": map[string]any{
			"model": map[string]any{
				// left out the "name" key
			},
		},
		"status": map[string]any{
			"url": "http://" + unsetSpecModelName + "." + unsetSpecModelNameNamespace + ".acme.com/v1",
		},
	},
}

// Append the model to the objects used by the fake server
llmInferenceServices = append(llmInferenceServices, unsetSpecModelNameISVC)
```
We need to use LLMInferenceService not InferenceService
we can reuse CreateLLMInferenceService to create the object
Co-authored-by: Pierangelo Di Pilato <pierangelodipilato@gmail.com>
…lling into maas-api/fix/model-name
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: jland-redhat, mholder6, nerdalert

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Details: Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing …
Co-authored-by: Edgar Hernández <ehernand@redhat.com>
/label tide/merge-method-squash
/lgtm
…atahub-io#143)

* expose the actual model name instead of the resource name for the /v1/models endpoint
* Update maas-api/internal/models/kserve.go (Co-authored-by: Pierangelo Di Pilato <pierangelodipilato@gmail.com>)
* updated testing isvc to be llmisvc
* typo
* squash
* Update maas-api/internal/handlers/models_test.go (Co-authored-by: Edgar Hernández <ehernand@redhat.com>)
* updated the test to test the fallback logic instead

Co-authored-by: Pierangelo Di Pilato <pierangelodipilato@gmail.com>
Co-authored-by: Edgar Hernández <ehernand@redhat.com>
…konflux-fix fix: align Dockerfile.konflux with upstream (add FIPS compliance)
follow-up to PR #88

This fixes a bug in the `/v1/models` listing. Previously, the `/v1/models` endpoint incorrectly used the resource name (`metadata.name`) instead of the actual model name. This fix populates the model ID from `.spec.model.name` of the `LLMInferenceService`; if `spec.model.name` is not specified, it falls back to `metadata.name`, aligning with the behaviour of the `LLMInferenceService` controller.

Summary by CodeRabbit
New Features
Tests