docs(agentic-orchestration): add easy-llm docs #8047
Conversation
👋 🤖 🤔 Hello, @bojtospeter! Did you make your changes in all the right places? These files were changed only in docs/. You might want to duplicate these changes in versioned_docs/version-8.8/.
You may have done this intentionally, but we wanted to point it out in case you didn't. You can read more about the versioning within our docs in our documentation guidelines.
bojtospeter
left a comment
I've challenged some ideas and will work on the Easy LLM page.
I went through the newly added page. I still need to review the Get started guide.
@bojtospeter @urbanisierung I did a review pass and lgtm now. Please check the “Switch away” section to make sure everything is correct and that we’re not missing anything. Thanks!
bojtospeter
left a comment
LGTM. I will add all the models we support through Easy LLM; don't merge yet.
- **API key**: `{{secrets.CAMUNDA_PROVIDED_LLM_API_KEY}}`.
- **Model**: Select a model from the [list of supported models](#supported-models). For example `us.anthropic.claude-3-7-sonnet-20250219-v1:0`.
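As a hedged illustration (not Camunda's actual secret-resolution code), placeholders of the form `{{secrets.NAME}}` are substituted with the stored secret value at runtime. A minimal sketch of that substitution, using a plain dict in place of a real secret store:

```python
import re

# Matches connector secret placeholders like {{secrets.CAMUNDA_PROVIDED_LLM_API_KEY}}.
SECRET_PATTERN = re.compile(r"\{\{secrets\.([A-Z0-9_]+)\}\}")


def resolve_secrets(value: str, store: dict[str, str]) -> str:
    """Replace each {{secrets.NAME}} placeholder with its value from the store."""

    def lookup(match: re.Match) -> str:
        name = match.group(1)
        if name not in store:
            raise KeyError(f"secret not configured: {name}")
        return store[name]

    return SECRET_PATTERN.sub(lookup, value)


# Hypothetical store contents for demonstration only.
store = {"CAMUNDA_PROVIDED_LLM_API_KEY": "sk-example"}
print(resolve_secrets("{{secrets.CAMUNDA_PROVIDED_LLM_API_KEY}}", store))
# -> sk-example
```

The function names and the in-memory store here are assumptions for the sketch; the real connector runtime resolves secrets from its own configured secret provider.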
<div style={{ display: "flex", justifyContent: "center" }}>
@bojtospeter Do we need this screenshot? I feel it is not really necessary, and it is always better to avoid UI screenshots for maintainability.
/>
</div>

## Supported models
I placed it here at the end. Since we’re talking about supported models for this feature, it fits better. I’m linking to the other guide for benchmarking details.
@bojtospeter @urbanisierung Good to go from my side! I left one comment about the screenshot.
| **Cost vs. speed** | Larger models offer higher accuracy, but often with higher latency and cost. Balance performance against Service Level Agreements (SLAs) and budgets. |
| **Accuracy vs. openness** | Proprietary models often lead in benchmark accuracy. Open-source models provide flexibility, fine-tuning, and offline use cases. |
[prettier] reported by reviewdog 🐶
The preview environment relating to the commit f4bb5db has successfully been deployed. You can access it at https://preview.docs.camunda.cloud/pr-8047/
Description
Adds documentation for the Easy-LLM SaaS epic.
When should this change go live?
- (`bug` or `support` label)
- (`available & undocumented` label)
- (`hold` label)
- (`low prio` label)

PR Checklist

- Commit message(s) follow the `{type}(scope): {description}` format.
- Changes made in the `/docs` directory (version 8.9).
- Changes made in the `/versioned_docs` directory.
- Review requested from `@camunda/tech-writers` unless working with an embedded writer.