OLS-2378: Generic Provider Config for LCore supported providers. #1257
sriroopar wants to merge 1 commit into openshift:main
Conversation
@sriroopar: This pull request references OLS-2378 which is a valid jira issue.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has not yet been approved. The full list of commands accepted by this bot can be found here. Needs approval from an approver in each of these files. Approvers can indicate their approval by writing /approve in a comment.
sriroopar force-pushed from 55a718f to 3453cd5
sriroopar force-pushed from 803c5c8 to f2d3abf
The logic looks good :)

```yaml
providers:
  - type: generic
    config:
      provider_id: openai
      provider_type: remote::openai
      config:
        api_key: ${env.OPENAI_API_KEY}
```

moreover, the …
sriroopar force-pushed from f2dc73c to 8b02fb4
sriroopar force-pushed from e59ab97 to eeefe84
/retest

sriroopar force-pushed from eeefe84 to 5503903

/retest

sriroopar force-pushed from 5503903 to b6d6e93

/retest

sriroopar force-pushed from b6d6e93 to ff50c80

/retest
blublinsky left a comment:

Test Framework Inconsistency - Standard Go Tests vs Ginkgo

The lcore package has mixed test frameworks.

Ginkgo/Gomega (BDD):
✅ reconciler_test.go - uses Ginkgo
✅ suite_test.go - Ginkgo test suite setup

Standard Go testing:
❌ assets_test.go - 562 new lines for the generic provider (this PR)
❌ deployment_test.go - 201 new lines for the generic provider (this PR)
❌ config_test.go - standard Go tests
api/v1alpha1/olsconfig_types.go (outdated):

```go
// +kubebuilder:validation:XValidation:message="'config' requires 'providerType' to be set",rule="!has(self.config) || has(self.providerType)"
// +kubebuilder:validation:XValidation:message="Llama Stack Generic mode (providerType set) requires type='llamaStackGeneric'",rule="!has(self.providerType) || self.type == \"llamaStackGeneric\""
// +kubebuilder:validation:XValidation:message="Llama Stack Generic mode cannot use legacy provider-specific fields",rule="self.type != \"llamaStackGeneric\" || (!has(self.deploymentName) && !has(self.projectID))"
// +kubebuilder:validation:XValidation:message="credentialKey must not be empty string",rule="!has(self.credentialKey) || self.credentialKey != \"\""
```

Suggested change:

```go
// +kubebuilder:validation:XValidation:message="credentialKey must not be empty or whitespace",rule="!has(self.credentialKey) || !self.credentialKey.matches('^\\s*$')"
// +kubebuilder:validation:XValidation:message="type 'llamaStackGeneric' requires 'providerType' and 'config' to be set",rule="self.type != \"llamaStackGeneric\" || (has(self.providerType) && has(self.config))"
```
```go
			"config": map[string]interface{}{},
		},
	}
```

Consider rewriting the above:

```go
// Always include sentence-transformers (required for embeddings)
providers := []map[string]interface{}{
	{
		"provider_id":   "sentence-transformers",
		"provider_type": "inline::sentence-transformers",
		"config":        map[string]interface{}{},
	},
}
// Guard against nil LLMConfig or Providers
if cr == nil || cr.Spec.LLMConfig.Providers == nil {
	return providers, nil
}
```
api/v1alpha1/olsconfig_types.go (outdated):

```go
// Secret key name for provider credentials (default: "apitoken")
// Specifies which key in credentialsSecretRef contains the API token.
// The operator creates an environment variable named {PROVIDER_NAME}_API_KEY.
// +kubebuilder:default:="apitoken"
```

Suggested change:

```go
// +kubebuilder:default="apitoken"
```
internal/controller/utils/utils.go (outdated):

```go
// Llama Stack Generic providers require LCore backend (AppServer does not support llamaStackGeneric providers)
// LCore is the future direction; AppServer is being deprecated
if provider.Type == "llamaStackGeneric" && !r.UseLCore() {
```

Use a const instead of the literal?
```go
	t.Logf("✓ Missing credentialKey correctly returns error: %v", err)
}
```

Add one more test:
```go
func TestGenerateLCoreDeployment_GenericProvider(t *testing.T) {
	// Create OLSConfig with generic provider
	cr := &olsv1alpha1.OLSConfig{
		ObjectMeta: metav1.ObjectMeta{
			Name: "cluster",
		},
		Spec: olsv1alpha1.OLSConfigSpec{
			LLMConfig: olsv1alpha1.LLMSpec{
				Providers: []olsv1alpha1.ProviderSpec{
					{
						Name:         "generic-openai",
						Type:         "llamaStackGeneric",
						ProviderType: "remote::openai",
						Config: &runtime.RawExtension{
							Raw: []byte(`{"url": "https://api.openai.com/v1"}`),
						},
						CredentialKey: "custom-key",
						CredentialsSecretRef: corev1.LocalObjectReference{
							Name: "openai-secret",
						},
					},
				},
			},
			OLSConfig: olsv1alpha1.OLSSpec{
				// ... minimal config
			},
		},
	}
	// Mock reconciler with the secret
	secrets := map[string]*corev1.Secret{
		"openai-secret": {
			ObjectMeta: metav1.ObjectMeta{
				Name:      "openai-secret",
				Namespace: "test-namespace",
			},
			Data: map[string][]byte{
				"custom-key": []byte("sk-test-key"),
			},
		},
	}
	r := &mockReconcilerWithSecrets{
		lcoreServerMode: true,
		secrets:         secrets,
	}
	// Generate deployment
	deployment, err := GenerateLCoreDeployment(r, cr)
	if err != nil {
		t.Fatalf("GenerateLCoreDeployment with generic provider failed: %v", err)
	}
	// Verify deployment has expected structure
	if deployment == nil {
		t.Fatal("deployment is nil")
	}
	// Verify env vars include GENERIC_OPENAI_API_KEY
	envVars := deployment.Spec.Template.Spec.Containers[0].Env
	found := false
	for _, env := range envVars {
		if env.Name == "GENERIC_OPENAI_API_KEY" {
			found = true
			if env.ValueFrom == nil || env.ValueFrom.SecretKeyRef == nil {
				t.Errorf("GENERIC_OPENAI_API_KEY should reference secret")
			} else if env.ValueFrom.SecretKeyRef.Key != "custom-key" {
				t.Errorf("Expected key 'custom-key', got %s", env.ValueFrom.SecretKeyRef.Key)
			}
		}
	}
	if !found {
		t.Error("Expected GENERIC_OPENAI_API_KEY env var not found")
	}
	// Verify ConfigMap volume mount exists
	// Verify container resources
	// etc.
}
```
And another test:

```go
func TestBuildLlamaStackEnvVars_GenericProvider_SecretNotFound(t *testing.T) {
	cr := &olsv1alpha1.OLSConfig{
		Spec: olsv1alpha1.OLSConfigSpec{
			LLMConfig: olsv1alpha1.LLMSpec{
				Providers: []olsv1alpha1.ProviderSpec{
					{
						Name:         "openai",
						Type:         "llamaStackGeneric",
						ProviderType: "remote::openai",
						CredentialsSecretRef: corev1.LocalObjectReference{
							Name: "missing-secret",
						},
						Config: &runtime.RawExtension{
							Raw: []byte(`{"url": "https://api.openai.com/v1"}`),
						},
					},
				},
			},
		},
	}
	// Mock reconciler with NO secrets
	r := &mockReconcilerWithSecrets{
		secrets: map[string]*corev1.Secret{}, // Empty - secret doesn't exist
	}
	ctx := context.Background()
	envVars, err := buildLlamaStackEnvVars(r, ctx, cr)
	// Expect an error
	if err == nil {
		t.Fatal("Expected error for missing secret, got nil")
	}
	// Verify error message mentions the secret
	if !strings.Contains(err.Error(), "missing-secret") {
		t.Errorf("Error should mention secret name 'missing-secret', got: %v", err)
	}
	// envVars should be empty when an error occurs
	if len(envVars) > 0 {
		t.Errorf("Expected no env vars on error, got %d", len(envVars))
	}
}
```
And another one:

```go
func TestBuildLlamaStackEnvVars_GenericProvider_NoSecret(t *testing.T) {
	cr := &olsv1alpha1.OLSConfig{
		Spec: olsv1alpha1.OLSConfigSpec{
			LLMConfig: olsv1alpha1.LLMSpec{
				Providers: []olsv1alpha1.ProviderSpec{
					{
						Name:         "public-llm",
						Type:         "llamaStackGeneric",
						ProviderType: "remote::public-provider",
						Config: &runtime.RawExtension{
							Raw: []byte(`{"url": "https://public.example.com"}`),
						},
						// NO CredentialsSecretRef
					},
				},
			},
		},
	}
	r := &mockReconcilerWithSecrets{
		secrets: map[string]*corev1.Secret{},
	}
	ctx := context.Background()
	envVars, err := buildLlamaStackEnvVars(r, ctx, cr)
	// Should succeed (no error)
	if err != nil {
		t.Fatalf("buildLlamaStackEnvVars failed: %v", err)
	}
	// Should NOT have PUBLIC_LLM_API_KEY env var
	for _, env := range envVars {
		if env.Name == "PUBLIC_LLM_API_KEY" {
			t.Error("Did not expect PUBLIC_LLM_API_KEY env var when no secret configured")
		}
	}
}
```
```go
	if !strings.Contains(yamlOutput, "TEST_PROVIDER_API_KEY") {
		t.Errorf("Expected environment variable reference 'TEST_PROVIDER_API_KEY' in output")
	}
}
```

Add a test:

```go
func TestBuildLlamaStackYAML_GenericProvider_NoCredentials(t *testing.T) {
	cr := &olsv1alpha1.OLSConfig{
		Spec: olsv1alpha1.OLSConfigSpec{
			LLMConfig: olsv1alpha1.LLMSpec{
				Providers: []olsv1alpha1.ProviderSpec{
					{
						Name:         "public-llm",
						Type:         "llamaStackGeneric",
						ProviderType: "remote::public-provider",
						Config: &runtime.RawExtension{
							Raw: []byte(`{"url": "https://public.example.com"}`),
						},
						// NO CredentialsSecretRef
					},
				},
			},
		},
	}
	// ...
```
```go
// deepCopyMap creates a deep copy of a map[string]interface{}, including nested maps
// and slices. This prevents mutations of the copy from affecting the original.
func deepCopyMap(src map[string]interface{}) map[string]interface{} {
```

What happens if src is nil?
Other review comments:
- Missing test coverage: ❌ generic provider with a custom credentialKey.
- The CRD enum includes fake_provider (olsconfig_types.go:475).
- What happens when a generic provider has providerType set but Config is nil/empty?
- Should generic providers work without credentialsSecretRef?
sriroopar force-pushed from 30a3650 to f9cbe43
@sriroopar: The following test failed, say /retest to rerun failed tests.

Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
sriroopar force-pushed from f9cbe43 to f21174c
Description
This PR introduces Generic Provider Configuration support for the LCore (Llama Stack) backend, enabling flexible LLM provider configuration beyond the predefined provider types.
Type of change
Related Tickets & Documents
Checklist before requesting a review
Testing
Please provide detailed steps to perform tests related to this code change.
How were the fix/results from this change verified? Please provide relevant screenshots or results.
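For illustration, a CR using the new generic provider might look like the fragment below. This is illustrative only: the field names are inferred from the ProviderSpec fields exercised in the review's test snippets (Name, Type, ProviderType, Config, CredentialsSecretRef, CredentialKey) and may not match the CRD's actual JSON tags.

```yaml
apiVersion: ols.openshift.io/v1alpha1
kind: OLSConfig
metadata:
  name: cluster
spec:
  llm:
    providers:
      - name: generic-openai
        type: llamaStackGeneric
        providerType: remote::openai
        config:
          url: https://api.openai.com/v1
        credentialsSecretRef:
          name: openai-secret
        credentialKey: custom-key
```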