OLS-2378: Generic Provider Config for LCore supported providers.#1257

Open
sriroopar wants to merge 1 commit into openshift:main from sriroopar:generic_provider_config

Conversation

@sriroopar
Contributor

@sriroopar sriroopar commented Feb 5, 2026

Description

This PR introduces Generic Provider Configuration support for the LCore (Llama Stack) backend, enabling flexible LLM provider configuration beyond the predefined types.
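As a rough sketch of what such a configuration might look like, assuming field names that follow the Go types discussed later in this thread (the exact CRD shape may differ):

```yaml
# Hypothetical OLSConfig excerpt; field names are inferred from this PR's
# discussion, not taken from the merged CRD.
spec:
  llms:
    providers:
      - name: generic-openai
        type: llamaStackGeneric          # selects the generic provider path
        providerType: remote::openai     # passed through to Llama Stack
        credentialsSecretRef:
          name: openai-secret
        credentialKey: apitoken          # key within the secret (default "apitoken")
        config:                          # opaque config forwarded to the provider
          url: https://api.openai.com/v1
```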

Type of change

  • Refactor
  • New feature
  • Bug fix
  • CVE fix
  • Optimization
  • Documentation Update
  • Configuration Update
  • Bump-up dependent library

Related Tickets & Documents

Checklist before requesting a review

  • I have performed a self-review of my code.
  • PR has passed all pre-merge test jobs.
  • If it is a core feature, I have added thorough tests.

Testing

  • Please provide detailed steps to perform tests related to this code change.

    • Configured OLS to use LCore as the backend, since these changes do not work with the appserver backend.
    • Applied a valid config using remote::openai as the provider type.
    • Verified the Llama Stack ConfigMap generation and validation rules manually.
    • Checked liveness of the endpoint with the openai provider, submitted a query, and verified that the response is valid.
    • Also verified behavior with an invalid provider type.
  • How were the fix/results from this change verified? Please provide relevant screenshots or results.

Screenshot From 2026-02-05 15-50-07

@openshift-ci-robot openshift-ci-robot added the jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. label Feb 5, 2026
@openshift-ci-robot

openshift-ci-robot commented Feb 5, 2026

@sriroopar: This pull request references OLS-2378 which is a valid jira issue.

Details

In response to this:

Description

This PR introduces Generic Provider Configuration support for LCore (Llama Stack) backend, enabling flexible LLM provider configuration beyond the predefined types.

Type of change

  • Refactor
  • New feature
  • Bug fix
  • CVE fix
  • Optimization
  • Documentation Update
  • Configuration Update
  • Bump-up dependent library

Related Tickets & Documents

Checklist before requesting a review

  • I have performed a self-review of my code.
  • PR has passed all pre-merge test jobs.
  • If it is a core feature, I have added thorough tests.

Testing

  • Please provide detailed steps to perform tests related to this code change.

    • Configured OLS to use LCore as the backend, since these changes do not work with the appserver backend.
    • Applied a valid config using remote::openai as the provider type.
    • Verified the Llama Stack ConfigMap generation and validation rules manually.
  • How were the fix/results from this change verified? Please provide relevant screenshots or results.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.


@openshift-ci openshift-ci bot requested review from bparees and xrajesh February 5, 2026 18:19
@openshift-ci

openshift-ci bot commented Feb 5, 2026

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign raptorsun for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Details

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci-robot

openshift-ci-robot commented Feb 5, 2026

@sriroopar: This pull request references OLS-2378 which is a valid jira issue.

Details

In response to this:

Description

This PR introduces Generic Provider Configuration support for LCore (Llama Stack) backend, enabling flexible LLM provider configuration beyond the predefined types.

Type of change

  • Refactor
  • New feature
  • Bug fix
  • CVE fix
  • Optimization
  • Documentation Update
  • Configuration Update
  • Bump-up dependent library

Related Tickets & Documents

Checklist before requesting a review

  • I have performed a self-review of my code.
  • PR has passed all pre-merge test jobs.
  • If it is a core feature, I have added thorough tests.

Testing

  • Please provide detailed steps to perform tests related to this code change.

    • Configured OLS to use LCore as the backend, since these changes do not work with the appserver backend.
    • Applied a valid config using remote::openai as the provider type.
    • Verified the Llama Stack ConfigMap generation and validation rules manually.
    • Checked liveness of the endpoint with the openai provider, submitted a query, and verified that the response is valid.
    • Also verified behavior with an invalid provider type.
  • How were the fix/results from this change verified? Please provide relevant screenshots or results.

Screenshot From 2026-02-05 15-50-07

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@sriroopar sriroopar force-pushed the generic_provider_config branch 2 times, most recently from 55a718f to 3453cd5 Compare February 9, 2026 13:49
@sriroopar sriroopar force-pushed the generic_provider_config branch 2 times, most recently from 803c5c8 to f2d3abf Compare February 16, 2026 23:12
@raptorsun
Contributor

The logic looks good :)
I have a doubt about having providers[].providerType beside providers[].type; this could be confusing for users.
If providers[].type is generic, we can put everything, including the llama stack's provider_type, under config. The structure looks like

providers:
  - type: generic
    config: 
      provider_id: openai
      provider_type: remote::openai
      config:
        api_key: ${env.OPENAI_API_KEY}

Moreover, the generic type would be better named llamaStackGeneric.

@sriroopar sriroopar force-pushed the generic_provider_config branch 3 times, most recently from f2dc73c to 8b02fb4 Compare February 25, 2026 13:12
@sriroopar sriroopar force-pushed the generic_provider_config branch 3 times, most recently from e59ab97 to eeefe84 Compare March 2, 2026 22:15
@sriroopar
Contributor Author

/retest

@sriroopar sriroopar force-pushed the generic_provider_config branch from eeefe84 to 5503903 Compare March 4, 2026 01:17
@sriroopar
Contributor Author

/retest


@sriroopar sriroopar force-pushed the generic_provider_config branch from 5503903 to b6d6e93 Compare March 4, 2026 15:41
@sriroopar
Contributor Author

/retest


@sriroopar sriroopar force-pushed the generic_provider_config branch from b6d6e93 to ff50c80 Compare March 10, 2026 13:13
@blublinsky
Contributor

/retest

Contributor

@blublinsky blublinsky left a comment

Test Framework Inconsistency - Standard Go Tests vs Ginkgo
Let me show you the inconsistency:

The Problem:

The lcore package has mixed test frameworks:

Ginkgo/Gomega (BDD):

✅ reconciler_test.go - Uses Ginkgo
✅ suite_test.go - Ginkgo test suite setup
Standard Go testing:

❌ assets_test.go - 562 NEW lines for generic provider (this PR)
❌ deployment_test.go - 201 NEW lines for generic provider (this PR)
❌ config_test.go - Standard Go tests

// +kubebuilder:validation:XValidation:message="'config' requires 'providerType' to be set",rule="!has(self.config) || has(self.providerType)"
// +kubebuilder:validation:XValidation:message="Llama Stack Generic mode (providerType set) requires type='llamaStackGeneric'",rule="!has(self.providerType) || self.type == \"llamaStackGeneric\""
// +kubebuilder:validation:XValidation:message="Llama Stack Generic mode cannot use legacy provider-specific fields",rule="self.type != \"llamaStackGeneric\" || (!has(self.deploymentName) && !has(self.projectID))"
// +kubebuilder:validation:XValidation:message="credentialKey must not be empty string",rule="!has(self.credentialKey) || self.credentialKey != \"\""
Contributor

// +kubebuilder:validation:XValidation:message="credentialKey must not be empty or whitespace",rule="!has(self.credentialKey) || !self.credentialKey.matches('^\\s*$')"
// +kubebuilder:validation:XValidation:message="type 'llamaStackGeneric' requires 'providerType' and 'config' to be set",rule="self.type != \"llamaStackGeneric\" || (has(self.providerType) && has(self.config))"

"config": map[string]interface{}{},
},
}

Contributor

consider rewriting above:

// Always include sentence-transformers (required for embeddings)
providers := []map[string]interface{}{
	map[string]interface{}{
		"provider_id":   "sentence-transformers",
		"provider_type": "inline::sentence-transformers",
		"config":        map[string]interface{}{},
	},
}
// Guard against nil LLMConfig or Providers
if cr == nil || cr.Spec.LLMConfig.Providers == nil {
	return providers, nil
}

// Secret key name for provider credentials (default: "apitoken")
// Specifies which key in credentialsSecretRef contains the API token.
// The operator creates an environment variable named {PROVIDER_NAME}_API_KEY.
// +kubebuilder:default:="apitoken"
Contributor

// +kubebuilder:default="apitoken"
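The env var naming described in the context above ({PROVIDER_NAME}_API_KEY) can be sketched as a small helper; the function name here is illustrative, not the operator's actual code:

```go
package main

import (
	"fmt"
	"strings"
)

// apiKeyEnvName derives the environment variable the operator would inject
// for a provider's credential: the provider name upper-cased, with dashes
// mapped to underscores, plus the _API_KEY suffix. (Hypothetical helper.)
func apiKeyEnvName(providerName string) string {
	name := strings.ToUpper(strings.ReplaceAll(providerName, "-", "_"))
	return name + "_API_KEY"
}

func main() {
	fmt.Println(apiKeyEnvName("generic-openai")) // GENERIC_OPENAI_API_KEY
}
```

This matches the expectation in the suggested tests below, where provider "generic-openai" yields GENERIC_OPENAI_API_KEY.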


// Llama Stack Generic providers require LCore backend (AppServer does not support llamaStackGeneric providers)
// LCore is the future direction; AppServer is being deprecated
if provider.Type == "llamaStackGeneric" && !r.UseLCore() {
Contributor

Use const instead of literal?
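A minimal sketch of that suggestion; the constant and helper names are illustrative, not the operator's actual identifiers:

```go
package main

import "fmt"

// ProviderTypeLlamaStackGeneric defines the provider type string once, so
// comparisons cannot drift from the CRD enum. (Name is illustrative.)
const ProviderTypeLlamaStackGeneric = "llamaStackGeneric"

// requiresLCore reports whether a provider type needs the LCore backend.
func requiresLCore(providerType string) bool {
	return providerType == ProviderTypeLlamaStackGeneric
}

func main() {
	fmt.Println(requiresLCore(ProviderTypeLlamaStackGeneric)) // true
	fmt.Println(requiresLCore("openai"))                      // false
}
```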

}

t.Logf("✓ Missing credentialKey correctly returns error: %v", err)
}
Contributor

Add one more test

func TestGenerateLCoreDeployment_GenericProvider(t *testing.T) {
	// Create OLSConfig with generic provider
	cr := &olsv1alpha1.OLSConfig{
		ObjectMeta: metav1.ObjectMeta{
			Name: "cluster",
		},
		Spec: olsv1alpha1.OLSConfigSpec{
			LLMConfig: olsv1alpha1.LLMSpec{
				Providers: []olsv1alpha1.ProviderSpec{
					{
						Name:         "generic-openai",
						Type:         "llamaStackGeneric",
						ProviderType: "remote::openai",
						Config: &runtime.RawExtension{
							Raw: []byte(`{"url": "https://api.openai.com/v1"}`),
						},
						CredentialKey: "custom-key",
						CredentialsSecretRef: corev1.LocalObjectReference{
							Name: "openai-secret",
						},
					},
				},
			},
			OLSConfig: olsv1alpha1.OLSSpec{
				// ... minimal config
			},
		},
	}
	// Mock reconciler with the secret
	secrets := map[string]*corev1.Secret{
		"openai-secret": {
			ObjectMeta: metav1.ObjectMeta{
				Name:      "openai-secret",
				Namespace: "test-namespace",
			},
			Data: map[string][]byte{
				"custom-key": []byte("sk-test-key"),
			},
		},
	}
	r := &mockReconcilerWithSecrets{
		lcoreServerMode: true,
		secrets:         secrets,
	}
	// Generate deployment
	deployment, err := GenerateLCoreDeployment(r, cr)
	if err != nil {
		t.Fatalf("GenerateLCoreDeployment with generic provider failed: %v", err)
	}
	// Verify deployment has expected structure
	if deployment == nil {
		t.Fatal("deployment is nil")
	}
	// Verify env vars include GENERIC_OPENAI_API_KEY
	envVars := deployment.Spec.Template.Spec.Containers[0].Env
	found := false
	for _, env := range envVars {
		if env.Name == "GENERIC_OPENAI_API_KEY" {
			found = true
			if env.ValueFrom == nil || env.ValueFrom.SecretKeyRef == nil {
				t.Errorf("GENERIC_OPENAI_API_KEY should reference a secret")
			} else if env.ValueFrom.SecretKeyRef.Key != "custom-key" {
				t.Errorf("Expected key 'custom-key', got %s", env.ValueFrom.SecretKeyRef.Key)
			}
		}
	}
	if !found {
		t.Error("Expected GENERIC_OPENAI_API_KEY env var not found")
	}
	// Verify ConfigMap volume mount exists
	// Verify container resources
	// etc.
}

and another test

func TestBuildLlamaStackEnvVars_GenericProvider_SecretNotFound(t *testing.T) {
	cr := &olsv1alpha1.OLSConfig{
		Spec: olsv1alpha1.OLSConfigSpec{
			LLMConfig: olsv1alpha1.LLMSpec{
				Providers: []olsv1alpha1.ProviderSpec{
					{
						Name:         "openai",
						Type:         "llamaStackGeneric",
						ProviderType: "remote::openai",
						CredentialsSecretRef: corev1.LocalObjectReference{
							Name: "missing-secret",
						},
						Config: &runtime.RawExtension{
							Raw: []byte(`{"url": "https://api.openai.com/v1"}`),
						},
					},
				},
			},
		},
	}
	// Mock reconciler with NO secrets
	r := &mockReconcilerWithSecrets{
		secrets: map[string]*corev1.Secret{}, // Empty - secret doesn't exist
	}
	ctx := context.Background()
	envVars, err := buildLlamaStackEnvVars(r, ctx, cr)

	// Expect an error
	if err == nil {
		t.Fatal("Expected error for missing secret, got nil")
	}
	// Verify error message mentions the secret
	if !strings.Contains(err.Error(), "missing-secret") {
		t.Errorf("Error should mention secret name 'missing-secret', got: %v", err)
	}
	// envVars should be empty when an error occurs
	if len(envVars) > 0 {
		t.Errorf("Expected no env vars on error, got %d", len(envVars))
	}
}

And another one

func TestBuildLlamaStackEnvVars_GenericProvider_NoSecret(t *testing.T) {
	cr := &olsv1alpha1.OLSConfig{
		Spec: olsv1alpha1.OLSConfigSpec{
			LLMConfig: olsv1alpha1.LLMSpec{
				Providers: []olsv1alpha1.ProviderSpec{
					{
						Name:         "public-llm",
						Type:         "llamaStackGeneric",
						ProviderType: "remote::public-provider",
						Config: &runtime.RawExtension{
							Raw: []byte(`{"url": "https://public.example.com"}`),
						},
						// NO CredentialsSecretRef
					},
				},
			},
		},
	}
	r := &mockReconcilerWithSecrets{
		secrets: map[string]*corev1.Secret{},
	}
	ctx := context.Background()
	envVars, err := buildLlamaStackEnvVars(r, ctx, cr)

	// Should succeed (no error)
	if err != nil {
		t.Fatalf("buildLlamaStackEnvVars failed: %v", err)
	}
	// Should NOT have PUBLIC_LLM_API_KEY env var
	for _, env := range envVars {
		if env.Name == "PUBLIC_LLM_API_KEY" {
			t.Error("Did not expect PUBLIC_LLM_API_KEY env var when no secret configured")
		}
	}
}

if !strings.Contains(yamlOutput, "TEST_PROVIDER_API_KEY") {
t.Errorf("Expected environment variable reference 'TEST_PROVIDER_API_KEY' in output")
}
}
Contributor

Add a test

func TestBuildLlamaStackYAML_GenericProvider_NoCredentials(t *testing.T) {
	cr := &olsv1alpha1.OLSConfig{
		Spec: olsv1alpha1.OLSConfigSpec{
			LLMConfig: olsv1alpha1.LLMSpec{
				Providers: []olsv1alpha1.ProviderSpec{
					{
						Name:         "public-llm",
						Type:         "llamaStackGeneric",
						ProviderType: "remote::public-provider",
						Config: &runtime.RawExtension{
							Raw: []byte(`{"url": "https://public.example.com"}`),
						},
						// NO CredentialsSecretRef
					},
				},
			},
		},
	}
	// ... (generate the YAML and assert that no API key env var
	// reference is emitted for this provider)
}


// deepCopyMap creates a deep copy of a map[string]interface{}, including nested maps
// and slices. This prevents mutations of the copy from affecting the original.
func deepCopyMap(src map[string]interface{}) map[string]interface{} {
Contributor

What happens if src is nil?
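One common answer is to make nil an explicit no-op. A self-contained sketch of such a deepCopyMap, independent of this PR's actual implementation:

```go
package main

import "fmt"

// deepCopyMap returns a deep copy of src, recursing into nested
// map[string]interface{} and []interface{} values. A nil src yields nil,
// so callers never mutate a shared map through the copy.
func deepCopyMap(src map[string]interface{}) map[string]interface{} {
	if src == nil {
		return nil
	}
	dst := make(map[string]interface{}, len(src))
	for k, v := range src {
		dst[k] = deepCopyValue(v)
	}
	return dst
}

// deepCopyValue copies nested maps and slices; scalars pass through by value.
func deepCopyValue(v interface{}) interface{} {
	switch t := v.(type) {
	case map[string]interface{}:
		return deepCopyMap(t)
	case []interface{}:
		out := make([]interface{}, len(t))
		for i, e := range t {
			out[i] = deepCopyValue(e)
		}
		return out
	default:
		return t
	}
}

func main() {
	orig := map[string]interface{}{"cfg": map[string]interface{}{"url": "a"}}
	cp := deepCopyMap(orig)
	cp["cfg"].(map[string]interface{})["url"] = "b"
	fmt.Println(orig["cfg"].(map[string]interface{})["url"]) // still "a"
	fmt.Println(deepCopyMap(nil) == nil)                     // true
}
```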

@blublinsky
Contributor

Missing Test Coverage:

❌ Generic provider with custom credentialKey
❌ Generic provider with default credentialKey (should use "apitoken")
❌ Generic provider with whitespace-only credentialKey (should fail)
❌ Generic provider with invalid JSON in Config (should fail)
❌ Generic provider secret missing the specified credential key (should fail)
❌ llamaStackGeneric type when LCore is disabled (should fail)
❌ llamaStackGeneric type when LCore is enabled (should pass)

@blublinsky
Contributor

The CRD enum includes fake_provider:

// olsconfig_types.go:475
// +kubebuilder:validation:Enum=azure_openai;bam;openai;watsonx;rhoai_vllm;rhelai_vllm;fake_provider;llamaStackGeneric
But fake_provider is not in the providerTypeMapping:

@blublinsky
Contributor

What happens when a generic provider has providerType set but Config is nil/empty

@blublinsky
Contributor

Should generic providers work without credentialsSecretRef?

@sriroopar sriroopar force-pushed the generic_provider_config branch 8 times, most recently from 30a3650 to f9cbe43 Compare March 12, 2026 13:43
@openshift-ci

openshift-ci bot commented Mar 12, 2026

@sriroopar: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name: ci/prow/bundle-e2e-4-22
Commit: b6d6e93
Required: true
Rerun command: /test bundle-e2e-4-22

Full PR test history. Your PR dashboard.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@sriroopar sriroopar force-pushed the generic_provider_config branch from f9cbe43 to f21174c Compare March 13, 2026 17:07

Labels

jira/valid-reference Indicates that this PR references a valid Jira ticket of any type.

4 participants