docs(agentic-orchestration): add easy-llm docs #8047
Draft: afgambin wants to merge 21 commits into main from 2883-easy-llm
Commits (21)
- ed7ac09 docs(agentic-orchestration): add easy-llm docs (afgambin)
- 279002f Merge branch 'main' into 2883-easy-llm (afgambin)
- 770fc4a docs(extend easyllm documentation) (bojtospeter)
- 277088a docs(extend getting started AI guide with easyllm) (bojtospeter)
- baa6278 Review About and Budget sections (afgambin)
- 1bc72e6 Review Camunda-provided LLM page (afgambin)
- 8749051 Merge branch 'main' into 2883-easy-llm (afgambin)
- 1bc9965 Merge branch 'main' into 2883-easy-llm (afgambin)
- 1f0bf16 Review Get started guide (afgambin)
- dbf00fa Merge branch 'main' into 2883-easy-llm (afgambin)
- 122af08 Merge branch 'main' into 2883-easy-llm (afgambin)
- 987b3f6 Merge branch 'main' into 2883-easy-llm (afgambin)
- 8086080 Update landing page (afgambin)
- 1342c49 Tweak react component (afgambin)
- 6316a3a docs(available LLMs in camunda provided llm) (bojtospeter)
- c01ac83 docs(clarification on camunda provided llm purpose) (bojtospeter)
- 8b20bf7 Review from tech writer: Supported models section (afgambin)
- fb1d21a Merge branch 'main' into 2883-easy-llm (afgambin)
- 7860fee Merge branch 'main' into 2883-easy-llm (afgambin)
- 56ce9db Merge branch '2883-easy-llm' of https://github.com/camunda/camunda-do… (bojtospeter)
- d5227c4 docs(remove screenshot as suggested by Angel) (bojtospeter)
docs/components/agentic-orchestration/camunda-provided-llm.md (107 additions, 0 deletions)
---
id: camunda-provided-llm
title: Camunda-provided LLM
sidebar_label: Camunda-provided LLM
description: "Run AI agents quickly in Camunda SaaS with Camunda-provided LLM."
keywords: [agentic orchestration, ai agent]
---

Run AI agents quickly in Camunda SaaS with Camunda-provided LLM.

## About

Camunda-provided LLM is a Camunda-managed LLM provider option that comes with automatically configured credentials. With it, you can run AI agents in your processes right away without additional setup.

:::info
Camunda-provided LLM is free to use within the provided budget, and is intended for testing and experimentation. When you're ready for production or need more control, switch to a customer-managed provider.
:::

Camunda-provided LLM is available in Camunda SaaS for:

- **SaaS trial organizations**: Includes Camunda-managed credentials and a free budget.
- **SaaS enterprise organizations**: Includes a larger budget to support multiple proofs of concept. You must explicitly enable AI features. When you enable them, Camunda-provided LLM is enabled automatically. If Camunda-provided LLM is unavailable, disable AI features and then re-enable them.

:::note
Availability, budgets, and UI may vary by environment and rollout stage.
:::

See [Trial vs. enterprise budgets](#trial-vs-enterprise-budgets) for more details.
## Set up Camunda-provided LLM

Once Camunda-provided LLM is available in your organization, its credentials are populated automatically as cluster secrets.

- If you are using an AI agent blueprint, no additional configuration is needed, since most AI agent blueprints default to using Camunda-provided LLM. Explore selected AI agent blueprints in the [Camunda Marketplace](https://marketplace.camunda.com/en-US/home).
- If you are building your own agent from scratch, enable Camunda-provided LLM by configuring your AI agent connector with the following parameters:
  - **Provider**: `OpenAI Compatible`.
  - **API endpoint**: `{{secrets.CAMUNDA_PROVIDED_LLM_API_ENDPOINT}}`.
  - **API key**: `{{secrets.CAMUNDA_PROVIDED_LLM_API_KEY}}`.
  - **Model**: Select a model from the [list of supported models](#supported-models). For example, `us.anthropic.claude-3-7-sonnet-20250219-v1:0`.
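The `{{secrets.NAME}}` placeholders are resolved by the cluster at runtime. As a rough illustration only (not Camunda's actual implementation), the substitution behaves like the sketch below; the `resolve_secrets` helper, the example endpoint URL, and the API key value are all hypothetical:

```python
import re

def resolve_secrets(value: str, secrets: dict) -> str:
    """Replace {{secrets.NAME}} placeholders with values from a secrets store.

    Illustrative sketch only; Camunda resolves cluster secrets internally.
    """
    return re.sub(
        r"\{\{\s*secrets\.(\w+)\s*\}\}",
        lambda m: secrets[m.group(1)],
        value,
    )

# Hypothetical connector configuration mirroring the parameters above
config = {
    "provider": "OpenAI Compatible",
    "endpoint": "{{secrets.CAMUNDA_PROVIDED_LLM_API_ENDPOINT}}",
    "api_key": "{{secrets.CAMUNDA_PROVIDED_LLM_API_KEY}}",
    "model": "us.anthropic.claude-3-7-sonnet-20250219-v1:0",
}

# In Camunda SaaS these secrets are populated automatically; the values
# below are placeholders for illustration.
secrets = {
    "CAMUNDA_PROVIDED_LLM_API_ENDPOINT": "https://llm.example.camunda.io/v1",
    "CAMUNDA_PROVIDED_LLM_API_KEY": "sk-example",
}

resolved = {k: resolve_secrets(v, secrets) for k, v in config.items()}
```

The key point is that you never paste credentials into the connector directly; you reference the automatically populated secret names.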
## Supported models

Camunda-provided LLM supports LLMs from multiple providers. The available models may change over time, but typically include popular general-purpose models from major providers:

- **Anthropic models**: `us.anthropic.claude-3-7-sonnet-20250219-v1:0` is a versatile model suitable for a wide range of agentic orchestration tasks, with strong reasoning and language capabilities.
- **OpenAI models**: `gpt-5.2` is a powerful model with advanced reasoning, coding, and language skills, ideal for complex workflows requiring high accuracy.
- **Google models**: `gemini-3-pro` is a strong performer in reasoning and language tasks, making it a good choice for customer support and content generation workflows.

When selecting a model, consider your agentic process requirements, such as advanced reasoning, coding capabilities, or language understanding.
You can also benchmark different models to find the best fit. See [Choose the right LLM](./choose-right-model-agentic.md) for more details.
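One simple way to structure such a benchmark comparison is a weighted score per model. The scores below are placeholders, not real benchmark results; in practice you would run your own evaluation tasks and record per-model figures:

```python
# Hypothetical benchmark scores (placeholders, not real measurements).
scores = {
    "us.anthropic.claude-3-7-sonnet-20250219-v1:0": {"reasoning": 0.86, "coding": 0.80},
    "gpt-5.2": {"reasoning": 0.90, "coding": 0.88},
    "gemini-3-pro": {"reasoning": 0.88, "coding": 0.82},
}

def best_model(scores: dict, weights: dict) -> str:
    """Return the model with the highest weighted score for your use case."""
    def weighted(model: str) -> float:
        return sum(weights[k] * v for k, v in scores[model].items())
    return max(scores, key=weighted)

# A coding-heavy workflow weights coding ability higher than reasoning.
choice = best_model(scores, {"reasoning": 0.4, "coding": 0.6})
```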
## Trial vs. enterprise budgets

The budgets, measured in **dollars (USD) spent**, differ depending on your SaaS plan:

- **Trial**: A smaller budget intended for quick evaluation and early experiments by individuals and small teams.
- **Enterprise**: A larger budget intended for broader team experimentation and proofs of concept.

:::important
Budgets are topped up automatically and enforced at the organization level (not per user). This means multiple users in the same organization draw from the same budget.
:::

### What the budget covers

The Camunda-provided LLM budget covers LLM provider calls during AI agent execution:

- **Trial budget**: Allows for a hundred to a few thousand agent runs, depending on the model used and the agent complexity.
- **Enterprise budget**: Is significantly larger to support more extensive experimentation.

Other Camunda AI features, such as Camunda Copilot, do not consume your Camunda-provided LLM budget and can be used independently.

:::note
The total cost of an agent run depends on how many LLM calls it makes, which can vary based on the agent's design and task complexity. Cost also depends on the model used, since different models have different per-token pricing.
:::
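The relationship between budget and agent runs can be made concrete with back-of-the-envelope arithmetic. Every number below (budget size, calls per run, tokens per call, per-token price) is a hypothetical placeholder, not an actual Camunda figure:

```python
def estimate_runs(budget_usd: float, calls_per_run: int,
                  tokens_per_call: int, price_per_1k_tokens: float) -> int:
    """Estimate how many agent runs a budget allows.

    cost per run = calls_per_run * (tokens_per_call / 1000) * price_per_1k_tokens
    """
    cost_per_run = calls_per_run * (tokens_per_call / 1000) * price_per_1k_tokens
    return int(budget_usd / cost_per_run)

# Hypothetical figures: a $50 budget, 5 LLM calls per run,
# 2,000 tokens per call, $0.01 per 1,000 tokens.
runs = estimate_runs(budget_usd=50, calls_per_run=5,
                     tokens_per_call=2000, price_per_1k_tokens=0.01)
```

With these placeholder numbers the budget covers roughly 500 runs, which matches the "a hundred to a few thousand runs" range stated above; a more complex agent (more calls, longer prompts) or a pricier model shrinks that number quickly.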
### When the budget is exhausted

When your organization reaches its Camunda-provided LLM budget cap:

- Additional LLM calls are **blocked**.
- Your process execution may fail with an "out of budget" error, such as `COST_LIMIT_EXCEEDED`, depending on how your process handles errors.

:::tip
If your process model doesn't handle LLM failures, an exhausted budget may result in incidents or failed instances. Consider adding BPMN error handling to provide a user-friendly fallback path.
:::
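The fallback pattern the tip describes can be sketched as follows. This is illustrative worker-side logic only; `LlmBudgetExceeded` and `classify_failure` are hypothetical names, not part of any Camunda client API:

```python
# Sketch: route a budget failure to a BPMN error the process can catch,
# instead of letting it surface as an incident.

class LlmBudgetExceeded(Exception):
    """Raised so the process can catch it with a BPMN error boundary event."""
    bpmn_error_code = "LLM_BUDGET_EXHAUSTED"

def classify_failure(error_code: str) -> None:
    if error_code == "COST_LIMIT_EXCEEDED":
        # Throw a BPMN error the model can catch on a fallback path
        raise LlmBudgetExceeded("Camunda-provided LLM budget is exhausted")
    # Unknown errors propagate and may create an incident
    raise RuntimeError(f"Unhandled LLM error: {error_code}")

try:
    classify_failure("COST_LIMIT_EXCEEDED")
except LlmBudgetExceeded as e:
    fallback = e.bpmn_error_code  # route the instance to the fallback path
```

In the BPMN model, an error boundary event with the matching error code would then steer the instance to a user-friendly fallback, for example a user task or a notification.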
## Switch away from Camunda-provided LLM

As you move from evaluation to production, you may want to switch to your own LLM provider. This gives you:

- Direct control over provider choice.
- Your own billing and quota management.
- The ability to scale beyond the Camunda-provided LLM budget caps.

:::important Before you begin

- Ensure your organization has access to the LLM provider you plan to use.
- Gather credentials and any required configuration.
- Identify where your current AI agent models rely on Camunda-provided LLM defaults.
:::

To switch away, follow these steps:

1. Add your LLM provider credentials in the appropriate Camunda location for managing secrets and credentials.
2. Update your AI Agent connector configuration to use the new LLM provider.
3. Re-deploy your process.
4. Test a process instance end-to-end and verify results.
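Steps 1 and 2 amount to swapping the secret references and model in the connector configuration. A minimal sketch, where the field names mirror the setup section and the new secret names (`MY_LLM_ENDPOINT`, `MY_LLM_API_KEY`) are assumptions you would replace with your own:

```python
# Current configuration pointing at Camunda-provided LLM
camunda_provided = {
    "provider": "OpenAI Compatible",
    "endpoint": "{{secrets.CAMUNDA_PROVIDED_LLM_API_ENDPOINT}}",
    "api_key": "{{secrets.CAMUNDA_PROVIDED_LLM_API_KEY}}",
    "model": "us.anthropic.claude-3-7-sonnet-20250219-v1:0",
}

def switch_provider(config: dict, provider: str, endpoint_secret: str,
                    api_key_secret: str, model: str) -> dict:
    """Return a new config pointing at a customer-managed provider."""
    return {
        **config,
        "provider": provider,
        "endpoint": f"{{{{secrets.{endpoint_secret}}}}}",
        "api_key": f"{{{{secrets.{api_key_secret}}}}}",
        "model": model,
    }

customer_managed = switch_provider(
    camunda_provided,
    provider="OpenAI Compatible",
    endpoint_secret="MY_LLM_ENDPOINT",  # secret you created in step 1
    api_key_secret="MY_LLM_API_KEY",
    model="gpt-5.2",
)
```

After updating the connector, re-deploy and run an end-to-end test instance (steps 3 and 4) to confirm the agent now bills against your own provider account.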
@@ -20,6 +20,23 @@ Consider the following aspects regarding your model requirements and setup const

| **Cost vs. speed** | Larger models offer higher accuracy but often with higher latency and cost. Balance performance against Service Level Agreements (SLAs) and budgets. |
| **Accuracy vs. openness** | Proprietary models often lead in benchmark accuracy. Open-source models provide flexibility, fine-tuning, and offline use cases. |
Contributor review comment: [prettier] reported by reviewdog 🐶 (suggested change)
## Models available in Camunda-provided LLM

Camunda-provided LLM provides access to a set of LLMs from multiple providers. This is provided for experimentation and evaluation purposes, allowing you to test different models without needing to set up your own LLM provider.

When configuring your agent to use Camunda-provided LLM, you can select from the available models provided by Camunda. The specific models available may change over time as new models are added or removed, but typically include popular general-purpose models from major providers, such as:

- **Anthropic models**: `us.anthropic.claude-3-7-sonnet-20250219-v1:0` is a versatile model suitable for a wide range of agentic orchestration tasks, with strong reasoning and language capabilities.
- **OpenAI models**: `gpt-5.2` is a powerful model with advanced reasoning, coding, and language skills, ideal for complex workflows requiring high accuracy.
- **Google models**: `gemini-3-pro` is a strong performer in reasoning and language tasks, making it a good choice for customer support and content generation workflows.

When selecting a model from Camunda-provided LLM, consider the specific requirements of your agentic process, such as the need for advanced reasoning, coding capabilities, or language understanding. You can also benchmark different models using the LiveBench metrics to see which one performs best for your use case.

Learn more about [Camunda-provided LLM](./camunda-provided-llm.md).

## Measure agent performance

The ideal model should handle tools effectively, follow instructions consistently, and complete actions successfully.
I placed it here in the end. Since we’re talking about supported models for this feature, it fits better. I’m linking to the other guide for benchmarking details.