feat: add openai inference provider #46
New optional provider. Relates to: RHAIENG-1198

Signed-off-by: Sébastien Han <seb@redhat.com>
Walkthrough

Adds a new inference provider entry `remote::openai` to the distribution configs and documents it. The provider is conditionally enabled at runtime via `OPENAI_API_KEY` and supports a configurable `base_url`. No other configuration sections or code paths are changed.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    actor Env as Environment
    participant RunCfg as run.yaml
    participant Dist as Provider Registry
    participant OpenAI as remote::openai
    Env->>RunCfg: OPENAI_API_KEY present?
    alt Key present
        RunCfg->>Dist: Register provider (id: openai, type: remote::openai)
        note right of Dist: base_url default<br/>https://api.openai.com/v1
        Dist->>OpenAI: Initialize with api_key, base_url
    else No key
        RunCfg-->>Dist: Skip registering remote::openai
    end
```
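Given the walkthrough and diagram above, the gated provider entry in `run.yaml` presumably looks something like the sketch below. The exact field names beyond `provider_id`, `provider_type`, `api_key`, and `base_url` are assumptions, as is the `OPENAI_BASE_URL` variable name:

```yaml
# Hypothetical run.yaml fragment: the provider is registered only when
# OPENAI_API_KEY is set, via the ${env.VAR:+word} conditional expansion.
inference:
  - provider_id: ${env.OPENAI_API_KEY:+openai}
    provider_type: remote::openai
    config:
      api_key: ${env.OPENAI_API_KEY:=}
      base_url: ${env.OPENAI_BASE_URL:=https://api.openai.com/v1}
```

With this shape, leaving `OPENAI_API_KEY` unset yields an empty `provider_id`, which the stack treats as "skip this provider", matching the `else` branch of the diagram.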
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes
Pre-merge checks: ✅ 3 passed
Actionable comments posted: 0
🧹 Nitpick comments (2)
distribution/run.yaml (2)
Lines 57-61: OpenAI provider gating looks good; consider optional org/project and timeouts.

Nice conditional include via `${env.OPENAI_API_KEY:+openai}`. To improve operability (multi-org setups, network slowness), consider adding optional fields if supported by `remote::openai`:
- organization: ${env.OPENAI_ORG_ID:=}
- project: ${env.OPENAI_PROJECT_ID:=}
- connect_timeout/read_timeout (mirroring Bedrock defaults)
If the schema supports these, apply within this config block.
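If the `remote::openai` schema does accept them, the suggested fields might be wired in as below. All env var names, field names, and timeout values here are assumptions, not verified against the provider schema:

```yaml
# Hypothetical additions inside the remote::openai config block.
config:
  api_key: ${env.OPENAI_API_KEY:=}
  base_url: ${env.OPENAI_BASE_URL:=https://api.openai.com/v1}
  organization: ${env.OPENAI_ORG_ID:=}    # optional: pin an OpenAI org
  project: ${env.OPENAI_PROJECT_ID:=}     # optional: pin an OpenAI project
  connect_timeout: 60                     # assumed values, mirroring
  read_timeout: 60                        # the Bedrock defaults mentioned above
```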
Lines 176-185: Optional: add a gated model entry for OpenAI to prevent provider/model mismatch.

Currently the first model binds to provider_id `vllm-inference`. If users only set `OPENAI_API_KEY` and `INFERENCE_MODEL` to an OpenAI model, resolution may still point at vllm. Recommend adding a second model entry gated on `OPENAI_API_KEY` (and `OPENAI_MODEL`), so switching providers is env-only:
Example (place next to existing models):

```yaml
- metadata: {}
  model_id: ${env.OPENAI_MODEL:=gpt-4o-mini}
  provider_id: ${env.OPENAI_API_KEY:+openai}
  model_type: llm
```
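As a side note, the `${env.VAR:+word}` gating used above mirrors POSIX shell parameter expansion, whose behavior can be checked directly in a shell. This is a sketch of the semantics only; llama-stack's env interpolation is assumed to follow the same `:+` rule:

```shell
# POSIX ${VAR:+word}: expands to "word" only when VAR is set and non-empty.
unset OPENAI_API_KEY
echo "provider_id='${OPENAI_API_KEY:+openai}'"    # provider_id=''

OPENAI_API_KEY=sk-placeholder                     # hypothetical key value
echo "provider_id='${OPENAI_API_KEY:+openai}'"    # provider_id='openai'
```

So the same config file serves both deployments: with the key exported the model entry resolves to the `openai` provider, without it the entry's provider id expands to empty.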
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📒 Files selected for processing (3)
- distribution/README.md (1 hunks)
- distribution/build.yaml (1 hunks)
- distribution/run.yaml (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: build-test-push (linux/amd64)
🔇 Additional comments (2)
distribution/README.md (1)

Line 19: Docs entry added correctly.

The new inference provider row for `remote::openai` is consistent with the config changes.
distribution/build.yaml (1)

Line 12: Build spec updated; confirm provider runtime deps.

Ensure `remote::openai` is available in the base stack without extra wheels. If it requires an external SDK (e.g., `openai`), add it under `additional_pip_packages` or vendor it in the provider image.
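If the extra dependency does turn out to be needed, the build spec change might look like the fragment below. This is a sketch only: the surrounding `build.yaml` structure, the presence of `remote::vllm`, and the need for the `openai` package are all assumptions:

```yaml
# Hypothetical build.yaml fragment, applicable only if the SDK is
# not already present in the base stack image.
distribution_spec:
  providers:
    inference:
      - remote::vllm
      - remote::openai      # new provider from this PR
additional_pip_packages:
  - openai                  # external SDK, if required by remote::openai
```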
Merging with a single approval for the sake of moving fast. CI is green ;) - Thanks!
What does this PR do?
New optional provider.
Relates to: RHAIENG-1198
Summary by CodeRabbit
New Features
Documentation