Changes from all commits (21 commits)
@@ -38,7 +38,7 @@ Get started with Camunda agentic orchestration by building and running your firs

Understand the fundamental concepts of Camunda agentic orchestration.

- <AoGrid ao={fundamentalCards} columns={3}/>
+ <AoGrid ao={fundamentalCards} columns={2}/>

## Explore further resources

107 changes: 107 additions & 0 deletions docs/components/agentic-orchestration/camunda-provided-llm.md
@@ -0,0 +1,107 @@
---
id: camunda-provided-llm
title: Camunda-provided LLM
sidebar_label: Camunda-provided LLM
description: "Run AI agents quickly in Camunda SaaS with Camunda-provided LLM."
keywords: [agentic orchestration, ai agent]
---

Run AI agents quickly in Camunda SaaS with Camunda-provided LLM.

## About

Camunda-provided LLM is a Camunda-managed LLM provider option that comes with automatically configured credentials. With it, you can run AI agents in your processes right away without additional setup.

:::info
Camunda-provided LLM is free to use within the provided budget, and is intended for testing and experimentation. When you're ready for production or need more control, switch to a customer-managed provider.
:::

Camunda-provided LLM is available in Camunda SaaS for:

- **SaaS trial organizations**: Includes Camunda-managed credentials and a free budget.
- **SaaS enterprise organizations**: Includes a larger budget to support multiple proofs of concept. You must explicitly enable AI features. When you enable them, Camunda-provided LLM is enabled automatically. If Camunda-provided LLM is unavailable, disable AI features and then re-enable them.

:::note
Availability, budgets, and UI may vary by environment and rollout stage.
:::

See [Trial vs. enterprise budgets](#trial-vs-enterprise-budgets) for more details.

## Set up Camunda-provided LLM

Once Camunda-provided LLM is available in your organization, its credentials are populated automatically as cluster secrets.

- If you are using an AI agent blueprint, no additional configuration is needed, since most AI agent blueprints default to using Camunda-provided LLM. Explore selected AI agent blueprints in the [Camunda Marketplace](https://marketplace.camunda.com/en-US/home).
- If you are building your own agent from scratch, enable Camunda-provided LLM by configuring your AI agent connector with the following parameters:
  - **Provider**: `OpenAI Compatible`.
  - **API endpoint**: `{{secrets.CAMUNDA_PROVIDED_LLM_API_ENDPOINT}}`.
  - **API key**: `{{secrets.CAMUNDA_PROVIDED_LLM_API_KEY}}`.
  - **Model**: Select a model from the [list of supported models](#supported-models). For example, `us.anthropic.claude-3-7-sonnet-20250219-v1:0`.
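If you want to sanity-check the credentials outside Camunda, an OpenAI-compatible endpoint can usually be called directly. The sketch below is hypothetical: it assumes the endpoint follows the standard OpenAI chat-completions request shape, requires live credentials and network access, and uses placeholder environment variables (`LLM_API_ENDPOINT`, `LLM_API_KEY`) standing in for the resolved secret values.

```shell
# Hypothetical smoke test against an OpenAI-compatible endpoint.
# LLM_API_ENDPOINT and LLM_API_KEY are placeholder variables; substitute
# the resolved values of the Camunda-provided cluster secrets.
curl -s "$LLM_API_ENDPOINT/chat/completions" \
  -H "Authorization: Bearer $LLM_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "us.anthropic.claude-3-7-sonnet-20250219-v1:0",
    "messages": [{"role": "user", "content": "Reply with the single word: pong"}]
  }'
```

A successful JSON response confirms the endpoint and key resolve correctly before you wire them into the connector.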

## Supported models
> **Contributor author (review comment):** I placed it here in the end. Since we’re talking about supported models for this feature, it fits better. I’m linking to the other guide for benchmarking details.


Camunda-provided LLM supports LLMs from multiple providers. The available models may change over time, but typically include popular general-purpose models from major providers:

- **Anthropic models**: `us.anthropic.claude-3-7-sonnet-20250219-v1:0` is a versatile model suitable for a wide range of agentic orchestration tasks, with strong reasoning and language capabilities.
- **OpenAI models**: `gpt-5.2` is a powerful model with advanced reasoning, coding, and language skills, ideal for complex workflows requiring high accuracy.
- **Google models**: `gemini-3-pro` is a strong performer in reasoning and language tasks, making it a good choice for customer support and content generation workflows.

When selecting a model, consider your agentic process requirements, such as advanced reasoning, coding capabilities, or language understanding.
You can also benchmark different models to find the best fit. See [Choose the right LLM](./choose-right-model-agentic.md) for more details.

## Trial vs. enterprise budgets

The budgets, measured in **dollars (USD) spent**, differ depending on your SaaS plan:

- **Trial**: A smaller budget intended for quick evaluation and early experiments by individuals and small teams.
- **Enterprise**: A larger budget intended for broader team experimentation and proofs of concept.

:::important
Budgets are topped up automatically and enforced at the organization level (not per user). This means multiple users in the same organization draw from the same budget.
:::

### What the budget covers

The Camunda-provided LLM budget covers LLM provider calls during AI agent execution:

- **Trial budget**: Allows for roughly a hundred to a few thousand agent runs, depending on the model used and the agent complexity.
- **Enterprise budget**: Is significantly larger to support more extensive experimentation.

Other Camunda AI features, such as Camunda Copilot, do not consume your Camunda-provided LLM budget and can be used independently.

:::note
The total cost of an agent run depends on how many LLM calls it makes, which can vary based on the agent’s design and task complexity. Cost also depends on the model used, since different models have different per-token pricing.
:::
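As a rough illustration of how these variables interact, the sketch below estimates the cost of a single agent run from call count and token volume. All numbers are illustrative assumptions, not real Camunda or provider pricing; actual per-token rates vary by model and change over time.

```javascript
// Rough per-run cost estimate for an AI agent, under assumed pricing.
// Prices here are illustrative assumptions only; real per-token rates
// differ by model and provider and change over time.
function estimateRunCostUsd({ llmCalls, inputTokensPerCall, outputTokensPerCall, inputUsdPerMTok, outputUsdPerMTok }) {
  const inputCost = (llmCalls * inputTokensPerCall / 1_000_000) * inputUsdPerMTok;
  const outputCost = (llmCalls * outputTokensPerCall / 1_000_000) * outputUsdPerMTok;
  return inputCost + outputCost;
}

// Example: an agent run with 8 LLM calls, ~2,000 input and ~500 output
// tokens per call, at an assumed $3 / $15 per million input/output tokens.
const cost = estimateRunCostUsd({
  llmCalls: 8,
  inputTokensPerCall: 2000,
  outputTokensPerCall: 500,
  inputUsdPerMTok: 3,
  outputUsdPerMTok: 15,
});
console.log(cost.toFixed(3)); // per-run cost in USD
```

Dividing your remaining budget by a per-run estimate like this gives a ballpark for how many runs an experiment can afford.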

### When the budget is exhausted

When your organization reaches its Camunda-provided LLM budget cap:

- Additional LLM calls are **blocked**.
- Your process execution may fail with an “out of budget” error, such as `COST_LIMIT_EXCEEDED`, depending on how your process handles errors.

:::tip
If your process model doesn’t handle LLM failures, an exhausted budget may result in incidents or failed instances. Consider adding BPMN error handling to provide a user-friendly fallback path.
:::
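One way to provide that fallback is a BPMN error boundary event on the AI agent task. The sketch below is standard BPMN 2.0 XML written by hand, not Modeler output; the element IDs are hypothetical, and the error code mirrors the `COST_LIMIT_EXCEEDED` example above.

```xml
<!-- Hypothetical IDs; attach the boundary event to your AI agent task. -->
<bpmn:error id="Error_CostLimit" name="Cost limit exceeded" errorCode="COST_LIMIT_EXCEEDED" />

<bpmn:boundaryEvent id="BoundaryEvent_Budget" attachedToRef="Task_AiAgent">
  <bpmn:errorEventDefinition errorRef="Error_CostLimit" />
  <bpmn:outgoing>Flow_ToFallback</bpmn:outgoing>
</bpmn:boundaryEvent>
<!-- Flow_ToFallback leads to a user-friendly fallback path,
     for example a human task or a notification. -->
```

With this in place, an exhausted budget raises a handled BPMN error instead of an incident.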

## Switch away from Camunda-provided LLM

As you move from evaluation to production, you may want to switch to your own LLM provider. This gives you:

- Direct control over provider choice.
- Your own billing and quota management.
- The ability to scale beyond the Camunda-provided LLM budget caps.

:::important Before you begin

- Ensure your organization has access to the LLM provider you plan to use.
- Gather credentials and any required configuration.
- Identify where your current AI agent models rely on Camunda-provided LLM defaults.
:::

To switch away, follow these steps:

1. Add your LLM provider credentials as connector secrets in your cluster (for SaaS, manage secrets in Console).
2. Update your AI Agent connector configuration to use the new LLM provider.
3. Re-deploy your process.
4. Test a process instance end-to-end and verify results.
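As a sketch, after switching to a customer-managed, OpenAI-compatible provider, the connector parameters from the setup section above might look like this. The secret names are hypothetical placeholders; use whatever names you create in your cluster:

```text
Provider:     OpenAI Compatible                 (or a native provider template)
API endpoint: {{secrets.MY_LLM_API_ENDPOINT}}   (hypothetical secret you created)
API key:      {{secrets.MY_LLM_API_KEY}}        (hypothetical secret you created)
Model:        a model ID offered by your provider
```

The only change from the Camunda-provided configuration is which secrets and model ID the connector references.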
@@ -20,6 +20,23 @@ Consider the following aspects regarding your model requirements and setup const
| **Cost vs. speed** | Larger models offer higher accuracy but often with higher latency and cost. Balance performance against Service Level Agreements (SLAs) and budgets. |
| **Accuracy vs. openness** | Proprietary models often lead in benchmark accuracy. Open-source models provide flexibility, fine-tuning, and offline use cases. |


## Models available in Camunda-provided LLM

Camunda-provided LLM provides access to a set of LLMs from multiple providers. It is provided for experimentation and evaluation, allowing you to test different models without setting up your own LLM provider.

When configuring your agent to use Camunda-provided LLM, you can select from the models Camunda makes available. The specific models may change over time as models are added or removed, but they typically include popular general-purpose models from major providers, such as:

- **Anthropic models**: `us.anthropic.claude-3-7-sonnet-20250219-v1:0` is a versatile model suitable for a wide range of agentic orchestration tasks, with strong reasoning and language capabilities.

- **OpenAI models**: `gpt-5.2` is a powerful model with advanced reasoning, coding, and language skills, ideal for complex workflows requiring high accuracy.

- **Google models**: `gemini-3-pro` is a strong performer in reasoning and language tasks, making it a good choice for customer support and content generation workflows.

When selecting a model from Camunda-provided LLM, consider the specific requirements of your agentic process, such as the need for advanced reasoning, coding capabilities, or language understanding. You can also benchmark different models using the LiveBench metrics to see which one performs best for your use case.

Learn more about [Camunda-provided LLM](./camunda-provided-llm.md).

## Measure agent performance

The ideal model should handle tools effectively, follow instructions consistently, and complete actions successfully.
9 changes: 8 additions & 1 deletion docs/components/react-components/_ao-card-data.js
@@ -23,6 +23,13 @@ export const fundamentalCards = [
description:
"Build and integrate AI agents into your end-to-end processes.",
},
{
link: "../camunda-provided-llm/",
title: "Camunda-provided LLM",
image: IconAoLlmImg,
description:
"Run AI agents quickly in Camunda SaaS with Camunda-provided LLM.",
},
{
link: "../ao-design/",
title: "Design and architecture",
@@ -33,7 +40,7 @@ export const fundamentalCards = [
link: "../monitor-ai-agents/",
title: "Monitor your AI agents",
image: IconAoAgentImg,
-    description: "Monitor your AI agents with Operate.",
+    description: "Monitor and troubleshoot your AI agents.",
},
];

39 changes: 28 additions & 11 deletions docs/guides/getting-started-agentic-orchestration.md
@@ -51,20 +51,21 @@ To run your agent, you must have Camunda 8 (version 8.8 or newer) running, using
The AI Agent connector makes it easy to integrate LLMs into your process workflows, with out-of-the-box support for popular model providers such as Anthropic and Amazon Bedrock. It can also connect to any additional LLM that exposes an OpenAI-compatible API.
See [supported model providers](/components/connectors/out-of-the-box-connectors/agentic-ai-aiagent-subprocess.md#model-provider) for more details.

- In this guide, you can try two use cases:
+ In this guide, you can try three use cases:

| Setup | Model provider | Model used | Prerequisites |
| :---- | :------------- | :-------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Cloud | AWS Bedrock | Claude Sonnet 4 | <p><ul><li> An AWS account with permissions for the [Bedrock Converse API](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Converse.html).</li><li><p> Anthropic Claude foundation models using the AWS console. See [AWS documentation](https://docs.aws.amazon.com/bedrock/latest/userguide/model-access-modify.html) for details.</p></li></ul></p> |
| Local | Ollama | GPT-OSS:20b | <p><ul><li> [Camunda 8 Run](/self-managed/quickstart/developer-quickstart/c8run.md) running locally.</li><li><p> Ollama and GPT-OSS:20b installed. See [Set up Ollama](#set-up-ollama) for details.</p></li></ul></p> |
| Setup | Model provider | Model used | Prerequisites |
| :----------------------- | :-------------------------------------------------------------------------------- | :---------------------------------------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| SaaS | [Camunda-provided LLM](/components/agentic-orchestration/camunda-provided-llm.md) | Camunda-managed model (for example, Claude 3.7) | <p><ul><li> Camunda 8 SaaS trial or enterprise organization.</li><li><p> Camunda-provided LLM available in your organization. No additional LLM provider credentials are required to run this guide.</p></li></ul></p> |
| Cloud (customer-managed) | AWS Bedrock | Claude Sonnet 4 | <p><ul><li> An AWS account with permissions for the [Bedrock Converse API](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Converse.html).</li><li><p> Anthropic Claude foundation models using the AWS console. See [AWS documentation](https://docs.aws.amazon.com/bedrock/latest/userguide/model-access-modify.html) for details.</p></li></ul></p> |
| Local | Ollama | GPT-OSS:20b | <p><ul><li> [Camunda 8 Run](/self-managed/quickstart/developer-quickstart/c8run.md) running locally.</li><li><p> Ollama and GPT-OSS:20b installed. See [Set up Ollama](#set-up-ollama) for details.</p></li></ul></p> |

:::important
Running LLMs locally requires substantial disk space and memory. GPT-OSS:20b requires more than 20GB of RAM to function and 14GB of free disk space to download.
:::

## Step 1: Install the model blueprint

- To start building your first AI agent, you can use a Camunda model blueprint from [Camunda marketplace](https://marketplace.camunda.com/en-US/home).
+ To start building your first AI agent, you can use a Camunda model blueprint from [Camunda Marketplace](https://marketplace.camunda.com/en-US/home).

In this guide, you will use the [AI Agent Chat Quick Start](https://marketplace.camunda.com/en-US/apps/587865) model blueprint.
Depending on your working environment, follow the corresponding steps below.
@@ -122,12 +123,28 @@ For prompt configuration details, see [AI Agent connector: System prompt, user p

Depending on your model choice, configure the AI Agent connector accordingly.

<Tabs groupId="setup" defaultValue="aws" values={
:::info
With Camunda-provided LLM in SaaS, you do not need additional LLM setup.
:::

<Tabs groupId="setup" defaultValue="easyllm" values={
[
{ label: 'Camunda-provided LLM', value: 'easyllm', },
{ label: 'AWS Bedrock', value: 'aws', },
{ label: 'Ollama', value: 'local', },
]}>

<TabItem value="easyllm">

1. Verify your organization has **AI features enabled**. Camunda-provided LLM becomes available automatically when they are.
1. Keep the AI Agent connector's default settings from the blueprint.
   Most AI agent blueprints in SaaS default to using Camunda-provided LLM.
   You only need to configure a customer-managed provider if you want custom billing, quotas, or provider control.

See [Camunda-provided LLM](/components/agentic-orchestration/camunda-provided-llm.md) for more details.

</TabItem>

<TabItem value="aws">

Configure the connector's authentication and template for AWS Bedrock.
@@ -170,6 +187,10 @@ You can keep the default configuration or adjust it to test other setups. To do

<img src={AiAgentPropertiesPanelImg} alt="AI agent properties panel"/>

:::tip
When configuring connectors, use [FEEL expressions](/components/modeler/feel/language-guide/feel-expressions-introduction.md) (click the `fx` icon) to reference process variables and create dynamic prompts based on runtime data.
:::
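For example, a minimal FEEL expression for a user prompt might concatenate static text with a process variable. Here `ticketDescription` is a hypothetical variable assumed to exist in your process instance:

```feel
= "Summarize the following support ticket and suggest a priority: " + ticketDescription
```

The leading `=` is how the Modeler marks a field as a FEEL expression rather than a static value.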

</TabItem>

<TabItem value="local">
@@ -206,10 +227,6 @@ The example blueprint downloaded in step one is preconfigured to use AWS Bedrock
</TabItem>
</Tabs>

:::tip
When configuring connectors, use [FEEL expressions](/components/modeler/feel/language-guide/feel-expressions-introduction.md), by clicking the `fx` icon, to reference process variables and create dynamic prompts based on runtime data.
:::

## Step 3: Test your AI agent

Deploy and run your AI agent in your Camunda cluster.
1 change: 1 addition & 0 deletions sidebars.js
@@ -212,6 +212,7 @@ module.exports = {
},
items: [
"components/agentic-orchestration/ai-agents",
"components/agentic-orchestration/camunda-provided-llm",
"components/agentic-orchestration/ao-design",
"components/agentic-orchestration/monitor-ai-agents",
"components/agentic-orchestration/choose-right-model-agentic",