diff --git a/docs/best-practices/cost-optimization.mdx b/docs/best-practices/cost-optimization.mdx
index 6d80cc85d2..eec5884af6 100644
--- a/docs/best-practices/cost-optimization.mdx
+++ b/docs/best-practices/cost-optimization.mdx
@@ -62,7 +62,7 @@ See [Spooky Stories: Chilling Temporal Anti-Patterns](https://temporal.io/blog/s
 
 ### Large payloads in Workflow History
 
 Passing multi-megabyte payloads through Workflows when external storage (S3, blob storage) is more appropriate.
-Use [compression](/troubleshooting/blob-size-limit-error#why-does-this-error-occur) or the [claim check pattern](https://dataengineering.wiki/Concepts/Software+Engineering/Claim+Check+Pattern) for large data.
+Use [compression](/troubleshooting/blob-size-limit-error#payload-size-limit) or the [claim check pattern](https://dataengineering.wiki/Concepts/Software+Engineering/Claim+Check+Pattern) for large data.
 
 ### Over-optimization at the expense of observability
diff --git a/docs/cli/activity.mdx b/docs/cli/activity.mdx
index b6a0380711..7dd5b717a4 100644
--- a/docs/cli/activity.mdx
+++ b/docs/cli/activity.mdx
@@ -57,8 +57,6 @@ Use the following options to change the behavior of this command. You can also u
 | `--reason` | No | **string** Reason for cancellation. |
 | `--run-id`, `-r` | No | **string** Activity Run ID. If not set, targets the latest run. |
 
-To use the CLI with Standalone Activities, see the CLI commands in the [Go](/docs/develop/go/activities/standalone-activities.mdx) and [Python](/docs/develop/python/activities/standalone-activities.mdx) Standalone Activity guides.
-
 ## complete
 
 Complete an Activity, marking it as successfully finished. Specify the
@@ -212,9 +210,7 @@ time it fails, completes, or times out, at which point the pause will kick in.
 If the Activity is on its last retry attempt and fails, the failure will be
 returned to the caller, just as if the Activity had not been paused.
 
-Activities should be specified either by their Activity ID or Activity Type.
-
-For example, specify the Activity and Workflow IDs like this:
+Specify the Activity and Workflow IDs:
 
 ```
 temporal activity pause \
@@ -229,9 +225,9 @@ Use the following options to change the behavior of this command. You can also u
 
 | Flag | Required | Description |
 |------|----------|-------------|
-| `--activity-id`, `-a` | No | **string** The Activity ID to pause. Either `activity-id` or `activity-type` must be provided, but not both. |
-| `--activity-type` | No | **string** All activities of the Activity Type will be paused. Either `activity-id` or `activity-type` must be provided, but not both. Note: Pausing Activity by Type is an experimental feature and may change in the future. |
+| `--activity-id`, `-a` | Yes | **string** The Activity ID to pause. |
 | `--identity` | No | **string** The identity of the user or client submitting this request. |
+| `--reason` | No | **string** Reason for pausing the Activity. |
 | `--run-id`, `-r` | No | **string** Run ID. |
 | `--workflow-id`, `-w` | Yes | **string** Workflow ID. |
 
@@ -255,7 +251,7 @@ If the activity is paused and the `keep_paused` flag is not provided, it will
 be unpaused. If the activity is paused and `keep_paused` flag is provided -
 it will stay paused.
 
-Activities can be specified by their Activity ID or Activity Type.
+Either `--activity-id` (with `--workflow-id`) or `--query` must be specified.
 
 ### Resetting activities that heartbeat {#reset-heartbeats}
@@ -267,7 +263,7 @@ reset, handle this error and then re-throw it when you've cleaned up.
 If the `reset_heartbeats` flag is set, the heartbeat details will also be
 cleared.
 
-Specify the Activity Type of ID and Workflow IDs:
+Specify the Activity and Workflow IDs:
 
 ```
 temporal activity reset \
@@ -277,30 +273,25 @@ temporal activity reset \
   --reset-heartbeats
 ```
 
-Either `activity-id`, `activity-type`, or `--match-all` must be specified.
-
-Activities can be reset in bulk with a visibility query list filter.
-For example, if you want to reset activities of type Foo:
+Activities can be reset in bulk with a visibility query list filter:
 
 ```
 temporal activity reset \
-  --query 'TemporalResetInfo="property:activityType=Foo"'
+  --query 'WorkflowType="YourWorkflow"'
 ```
 
 Use the following options to change the behavior of this command. You can also use any of the [global flags](#global-flags) that apply to all subcommands.
 
 | Flag | Required | Description |
 |------|----------|-------------|
-| `--activity-id`, `-a` | No | **string** The Activity ID to reset. Mutually exclusive with `--query`, `--match-all`, and `--activity-type`. Requires `--workflow-id` to be specified. |
-| `--activity-type` | No | **string** Activities of this Type will be reset. Mutually exclusive with `--match-all` and `activity-id`. Note: Resetting Activity by Type is an experimental feature and may change in the future. |
+| `--activity-id`, `-a` | No | **string** The Activity ID to reset. Mutually exclusive with `--query`. Requires `--workflow-id` to be specified. |
 | `--headers` | No | **string[]** Temporal workflow headers in 'KEY=VALUE' format. Keys must be identifiers, and values must be JSON values. May be passed multiple times to set multiple Temporal headers. Note: These are workflow headers, not gRPC headers. |
 | `--jitter` | No | **duration** The activity will reset at a random time within the specified duration. Can only be used with --query. |
 | `--keep-paused` | No | **bool** If the activity was paused, it will stay paused. |
-| `--match-all` | No | **bool** Every activity should be reset. Mutually exclusive with `--activity-id` and `--activity-type`. Note: This is an experimental feature and may change in the future. |
 | `--query`, `-q` | No | **string** Content for an SQL-like `QUERY` List Filter. You must set either --workflow-id or --query. Note: Using --query for batch activity operations is an experimental feature and may change in the future. |
 | `--reason` | No | **string** Reason for batch operation. Only use with --query. Defaults to user name. |
 | `--reset-attempts` | No | **bool** Reset the activity attempts. |
-| `--reset-heartbeats` | No | **bool** Reset the Activity's heartbeats. Only works with --reset-attempts. |
+| `--reset-heartbeats` | No | **bool** Reset the Activity's heartbeats. |
 | `--restore-original-options` | No | **bool** Restore the original options of the activity. |
 | `--rps` | No | **float** Limit batch's requests per second. Only allowed if query is present. |
 | `--run-id`, `-r` | No | **string** Run ID. Only use with --workflow-id. Cannot use with --query. |
@@ -403,10 +394,9 @@ Activity to be retried another N times after unpausing.
 
 Use `--reset-heartbeats` to reset the Activity's heartbeats.
 
-Activities can be specified by their Activity ID or Activity Type.
-One of those parameters must be provided.
+Either `--activity-id` (with `--workflow-id`) or `--query` must be specified.
 
-Specify the Activity ID or Type and Workflow IDs:
+Specify the Activity and Workflow IDs:
 
 ```
 temporal activity unpause \
@@ -416,28 +406,24 @@ temporal activity unpause \
   --reset-heartbeats
 ```
 
-Activities can be unpaused in bulk via a visibility Query list filter.
-For example, if you want to unpause activities of type Foo that you
-previously paused, do:
+Activities can be unpaused in bulk via a visibility Query list filter:
 
 ```
 temporal activity unpause \
-  --query 'TemporalPauseInfo="property:activityType=Foo"'
+  --query 'TemporalPauseInfo IS NOT NULL'
 ```
 
 Use the following options to change the behavior of this command. You can also use any of the [global flags](#global-flags) that apply to all subcommands.
 
 | Flag | Required | Description |
 |------|----------|-------------|
-| `--activity-id`, `-a` | No | **string** The Activity ID to unpause. Mutually exclusive with `--query`, `--match-all`, and `--activity-type`. Requires `--workflow-id` to be specified. |
-| `--activity-type` | No | **string** Activities of this Type will unpause. Can only be used without --match-all. Either `activity-id` or `activity-type` must be provided, but not both. Note: Unpausing Activity by Type is an experimental feature and may change in the future. |
+| `--activity-id`, `-a` | No | **string** The Activity ID to unpause. Mutually exclusive with `--query`. Requires `--workflow-id` to be specified. |
 | `--headers` | No | **string[]** Temporal workflow headers in 'KEY=VALUE' format. Keys must be identifiers, and values must be JSON values. May be passed multiple times to set multiple Temporal headers. Note: These are workflow headers, not gRPC headers. |
 | `--jitter` | No | **duration** The activity will start at a random time within the specified duration. Can only be used with --query. |
-| `--match-all` | No | **bool** Every paused activity should be unpaused. This flag is ignored if activity-type is provided. Note: This is an experimental feature and may change in the future. |
 | `--query`, `-q` | No | **string** Content for an SQL-like `QUERY` List Filter. You must set either --workflow-id or --query. Note: Using --query for batch activity operations is an experimental feature and may change in the future. |
 | `--reason` | No | **string** Reason for batch operation. Only use with --query. Defaults to user name. |
 | `--reset-attempts` | No | **bool** Reset the activity attempts. |
-| `--reset-heartbeats` | No | **bool** Reset the Activity's heartbeats. Only works with --reset-attempts. |
+| `--reset-heartbeats` | No | **bool** Reset the Activity's heartbeats. |
 | `--rps` | No | **float** Limit batch's requests per second. Only allowed if query is present. |
 | `--run-id`, `-r` | No | **string** Run ID. Only use with --workflow-id. Cannot use with --query. |
 | `--workflow-id`, `-w` | No | **string** Workflow ID. You must set either --workflow-id or --query. |
@@ -468,26 +454,23 @@ temporal activity update-options \
 You may follow this command with `temporal activity reset`, and the new values
 will apply after the reset.
 
-Either `activity-id`, `activity-type`, or `--match-all` must be specified.
+Either `--activity-id` or `--query` must be specified.
 
-Activity options can be updated in bulk with a visibility query list filter.
-For example, if you want to reset for activities of type Foo, do:
+Activity options can be updated in bulk with a visibility query list filter:
 
 ```
 temporal activity update-options \
-  --query 'TemporalPauseInfo="property:activityType=Foo"'
-  ...
+  --query 'WorkflowType="YourWorkflow"' \
+  --task-queue NewTaskQueueName
 ```
 
 Use the following options to change the behavior of this command. You can also use any of the [global flags](#global-flags) that apply to all subcommands.
 
 | Flag | Required | Description |
 |------|----------|-------------|
-| `--activity-id`, `-a` | No | **string** The Activity ID to update options. Mutually exclusive with `--query`, `--match-all`, and `--activity-type`. Requires `--workflow-id` to be specified. |
-| `--activity-type` | No | **string** Activities of this Type will be updated. Mutually exclusive with `--match-all` and `activity-id`. Note: Updating Activity options by Type is an experimental feature and may change in the future. |
+| `--activity-id`, `-a` | No | **string** The Activity ID to update options. Mutually exclusive with `--query`. Requires `--workflow-id` to be specified. |
 | `--headers` | No | **string[]** Temporal workflow headers in 'KEY=VALUE' format. Keys must be identifiers, and values must be JSON values. May be passed multiple times to set multiple Temporal headers. Note: These are workflow headers, not gRPC headers. |
 | `--heartbeat-timeout` | No | **duration** Maximum permitted time between successful worker heartbeats. |
-| `--match-all` | No | **bool** Every activity should be updated. Mutually exclusive with `--activity-id` and `--activity-type`. Note: This is an experimental feature and may change in the future. |
 | `--query`, `-q` | No | **string** Content for an SQL-like `QUERY` List Filter. You must set either --workflow-id or --query. Note: Using --query for batch activity operations is an experimental feature and may change in the future. |
 | `--reason` | No | **string** Reason for batch operation. Only use with --query. Defaults to user name. |
 | `--restore-original-options` | No | **bool** Restore the original options of the activity. |
@@ -519,19 +502,19 @@ The following options can be used with any command.
 | `--codec-header` | No | **string[]** HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. | |
 | `--color` | No | **string-enum** Output coloring. Accepted values: always, never, auto. | `auto` |
 | `--command-timeout` | No | **duration** The command execution timeout. 0s means no timeout. | |
-| `--config-file` | No | **string** File path to read TOML config from, defaults to `$CONFIG_PATH/temporalio/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, `$HOME/Library/Application Support` on macOS, and `%AppData%` on Windows. _(Experimental)_ | |
-| `--disable-config-env` | No | **bool** If set, disables loading environment config from environment variables. _(Experimental)_ | |
-| `--disable-config-file` | No | **bool** If set, disables loading environment config from config file. _(Experimental)_ | |
+| `--config-file` | No | **string** File path to read TOML config from, defaults to `$CONFIG_PATH/temporalio/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, `$HOME/Library/Application Support` on macOS, and `%AppData%` on Windows. | |
+| `--disable-config-env` | No | **bool** If set, disables loading environment config from environment variables. | |
+| `--disable-config-file` | No | **bool** If set, disables loading environment config from config file. | |
 | `--env` | No | **string** Active environment name (`ENV`). | `default` |
 | `--env-file` | No | **string** Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. | |
 | `--grpc-meta` | No | **string[]** HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. | |
 | `--identity` | No | **string** The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". | |
 | `--log-format` | No | **string-enum** Log format. Accepted values: text, json. | `text` |
-| `--log-level` | No | **string-enum** Log level. Default is "info" for most commands and "warn" for "server start-dev". Accepted values: debug, info, warn, error, never. | `info` |
+| `--log-level` | No | **string-enum** Log level. Default is "never" for most commands and "warn" for "server start-dev". Accepted values: debug, info, warn, error, never. | `never` |
 | `--namespace`, `-n` | No | **string** Temporal Service Namespace. | `default` |
 | `--no-json-shorthand-payloads` | No | **bool** Raw payload output, even if the JSON option was used. | |
 | `--output`, `-o` | No | **string-enum** Non-logging data output format. Accepted values: text, json, jsonl, none. | `text` |
-| `--profile` | No | **string** Profile to use for config file. _(Experimental)_ | |
+| `--profile` | No | **string** Profile to use for config file. | |
 | `--time-format` | No | **string-enum** Time format. Accepted values: relative, iso, raw. | `relative` |
 | `--tls` | No | **bool** Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. | |
 | `--tls-ca-data` | No | **string** Data for server CA certificate. Can't be used with --tls-ca-path. | |
diff --git a/docs/cli/batch.mdx b/docs/cli/batch.mdx
index c79731441a..2be1afcdd4 100644
--- a/docs/cli/batch.mdx
+++ b/docs/cli/batch.mdx
@@ -88,19 +88,19 @@ The following options can be used with any command.
 | `--codec-header` | No | **string[]** HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. | |
 | `--color` | No | **string-enum** Output coloring. Accepted values: always, never, auto. | `auto` |
 | `--command-timeout` | No | **duration** The command execution timeout. 0s means no timeout. | |
-| `--config-file` | No | **string** File path to read TOML config from, defaults to `$CONFIG_PATH/temporalio/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, `$HOME/Library/Application Support` on macOS, and `%AppData%` on Windows. _(Experimental)_ | |
-| `--disable-config-env` | No | **bool** If set, disables loading environment config from environment variables. _(Experimental)_ | |
-| `--disable-config-file` | No | **bool** If set, disables loading environment config from config file. _(Experimental)_ | |
+| `--config-file` | No | **string** File path to read TOML config from, defaults to `$CONFIG_PATH/temporalio/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, `$HOME/Library/Application Support` on macOS, and `%AppData%` on Windows. | |
+| `--disable-config-env` | No | **bool** If set, disables loading environment config from environment variables. | |
+| `--disable-config-file` | No | **bool** If set, disables loading environment config from config file. | |
 | `--env` | No | **string** Active environment name (`ENV`). | `default` |
 | `--env-file` | No | **string** Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. | |
 | `--grpc-meta` | No | **string[]** HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. | |
 | `--identity` | No | **string** The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". | |
 | `--log-format` | No | **string-enum** Log format. Accepted values: text, json. | `text` |
-| `--log-level` | No | **string-enum** Log level. Default is "info" for most commands and "warn" for "server start-dev". Accepted values: debug, info, warn, error, never. | `info` |
+| `--log-level` | No | **string-enum** Log level. Default is "never" for most commands and "warn" for "server start-dev". Accepted values: debug, info, warn, error, never. | `never` |
 | `--namespace`, `-n` | No | **string** Temporal Service Namespace. | `default` |
 | `--no-json-shorthand-payloads` | No | **bool** Raw payload output, even if the JSON option was used. | |
 | `--output`, `-o` | No | **string-enum** Non-logging data output format. Accepted values: text, json, jsonl, none. | `text` |
-| `--profile` | No | **string** Profile to use for config file. _(Experimental)_ | |
+| `--profile` | No | **string** Profile to use for config file. | |
 | `--time-format` | No | **string-enum** Time format. Accepted values: relative, iso, raw. | `relative` |
 | `--tls` | No | **bool** Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. | |
 | `--tls-ca-data` | No | **string** Data for server CA certificate. Can't be used with --tls-ca-path. | |
diff --git a/docs/cli/config.mdx b/docs/cli/config.mdx
index eaeff2dd98..646f7312d5 100644
--- a/docs/cli/config.mdx
+++ b/docs/cli/config.mdx
@@ -113,19 +113,19 @@ The following options can be used with any command.
 | `--codec-header` | No | **string[]** HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. | |
 | `--color` | No | **string-enum** Output coloring. Accepted values: always, never, auto. | `auto` |
 | `--command-timeout` | No | **duration** The command execution timeout. 0s means no timeout. | |
-| `--config-file` | No | **string** File path to read TOML config from, defaults to `$CONFIG_PATH/temporalio/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, `$HOME/Library/Application Support` on macOS, and `%AppData%` on Windows. _(Experimental)_ | |
-| `--disable-config-env` | No | **bool** If set, disables loading environment config from environment variables. _(Experimental)_ | |
-| `--disable-config-file` | No | **bool** If set, disables loading environment config from config file. _(Experimental)_ | |
+| `--config-file` | No | **string** File path to read TOML config from, defaults to `$CONFIG_PATH/temporalio/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, `$HOME/Library/Application Support` on macOS, and `%AppData%` on Windows. | |
+| `--disable-config-env` | No | **bool** If set, disables loading environment config from environment variables. | |
+| `--disable-config-file` | No | **bool** If set, disables loading environment config from config file. | |
 | `--env` | No | **string** Active environment name (`ENV`). | `default` |
 | `--env-file` | No | **string** Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. | |
 | `--grpc-meta` | No | **string[]** HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. | |
 | `--identity` | No | **string** The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". | |
 | `--log-format` | No | **string-enum** Log format. Accepted values: text, json. | `text` |
-| `--log-level` | No | **string-enum** Log level. Default is "info" for most commands and "warn" for "server start-dev". Accepted values: debug, info, warn, error, never. | `info` |
+| `--log-level` | No | **string-enum** Log level. Default is "never" for most commands and "warn" for "server start-dev". Accepted values: debug, info, warn, error, never. | `never` |
 | `--namespace`, `-n` | No | **string** Temporal Service Namespace. | `default` |
 | `--no-json-shorthand-payloads` | No | **bool** Raw payload output, even if the JSON option was used. | |
 | `--output`, `-o` | No | **string-enum** Non-logging data output format. Accepted values: text, json, jsonl, none. | `text` |
-| `--profile` | No | **string** Profile to use for config file. _(Experimental)_ | |
+| `--profile` | No | **string** Profile to use for config file. | |
 | `--time-format` | No | **string-enum** Time format. Accepted values: relative, iso, raw. | `relative` |
 | `--tls` | No | **bool** Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. | |
 | `--tls-ca-data` | No | **string** Data for server CA certificate. Can't be used with --tls-ca-path. | |
diff --git a/docs/cli/env.mdx b/docs/cli/env.mdx
index a019b6fb23..7f7b5814b3 100644
--- a/docs/cli/env.mdx
+++ b/docs/cli/env.mdx
@@ -124,19 +124,19 @@ The following options can be used with any command.
 | `--codec-header` | No | **string[]** HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. | |
 | `--color` | No | **string-enum** Output coloring. Accepted values: always, never, auto. | `auto` |
 | `--command-timeout` | No | **duration** The command execution timeout. 0s means no timeout. | |
-| `--config-file` | No | **string** File path to read TOML config from, defaults to `$CONFIG_PATH/temporalio/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, `$HOME/Library/Application Support` on macOS, and `%AppData%` on Windows. _(Experimental)_ | |
-| `--disable-config-env` | No | **bool** If set, disables loading environment config from environment variables. _(Experimental)_ | |
-| `--disable-config-file` | No | **bool** If set, disables loading environment config from config file. _(Experimental)_ | |
+| `--config-file` | No | **string** File path to read TOML config from, defaults to `$CONFIG_PATH/temporalio/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, `$HOME/Library/Application Support` on macOS, and `%AppData%` on Windows. | |
+| `--disable-config-env` | No | **bool** If set, disables loading environment config from environment variables. | |
+| `--disable-config-file` | No | **bool** If set, disables loading environment config from config file. | |
 | `--env` | No | **string** Active environment name (`ENV`). | `default` |
 | `--env-file` | No | **string** Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. | |
 | `--grpc-meta` | No | **string[]** HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. | |
 | `--identity` | No | **string** The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". | |
 | `--log-format` | No | **string-enum** Log format. Accepted values: text, json. | `text` |
-| `--log-level` | No | **string-enum** Log level. Default is "info" for most commands and "warn" for "server start-dev". Accepted values: debug, info, warn, error, never. | `info` |
+| `--log-level` | No | **string-enum** Log level. Default is "never" for most commands and "warn" for "server start-dev". Accepted values: debug, info, warn, error, never. | `never` |
 | `--namespace`, `-n` | No | **string** Temporal Service Namespace. | `default` |
 | `--no-json-shorthand-payloads` | No | **bool** Raw payload output, even if the JSON option was used. | |
 | `--output`, `-o` | No | **string-enum** Non-logging data output format. Accepted values: text, json, jsonl, none. | `text` |
-| `--profile` | No | **string** Profile to use for config file. _(Experimental)_ | |
+| `--profile` | No | **string** Profile to use for config file. | |
 | `--time-format` | No | **string-enum** Time format. Accepted values: relative, iso, raw. | `relative` |
 | `--tls` | No | **bool** Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. | |
 | `--tls-ca-data` | No | **string** Data for server CA certificate. Can't be used with --tls-ca-path. | |
diff --git a/docs/cli/operator.mdx b/docs/cli/operator.mdx
index 74727387bf..1590183c36 100644
--- a/docs/cli/operator.mdx
+++ b/docs/cli/operator.mdx
@@ -532,19 +532,19 @@ The following options can be used with any command.
 | `--codec-header` | No | **string[]** HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. | |
 | `--color` | No | **string-enum** Output coloring. Accepted values: always, never, auto. | `auto` |
 | `--command-timeout` | No | **duration** The command execution timeout. 0s means no timeout. | |
-| `--config-file` | No | **string** File path to read TOML config from, defaults to `$CONFIG_PATH/temporalio/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, `$HOME/Library/Application Support` on macOS, and `%AppData%` on Windows. _(Experimental)_ | |
-| `--disable-config-env` | No | **bool** If set, disables loading environment config from environment variables. _(Experimental)_ | |
-| `--disable-config-file` | No | **bool** If set, disables loading environment config from config file. _(Experimental)_ | |
+| `--config-file` | No | **string** File path to read TOML config from, defaults to `$CONFIG_PATH/temporalio/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, `$HOME/Library/Application Support` on macOS, and `%AppData%` on Windows. | |
+| `--disable-config-env` | No | **bool** If set, disables loading environment config from environment variables. | |
+| `--disable-config-file` | No | **bool** If set, disables loading environment config from config file. | |
 | `--env` | No | **string** Active environment name (`ENV`). | `default` |
 | `--env-file` | No | **string** Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. | |
 | `--grpc-meta` | No | **string[]** HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. | |
 | `--identity` | No | **string** The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". | |
 | `--log-format` | No | **string-enum** Log format. Accepted values: text, json. | `text` |
-| `--log-level` | No | **string-enum** Log level. Default is "info" for most commands and "warn" for "server start-dev". Accepted values: debug, info, warn, error, never. | `info` |
+| `--log-level` | No | **string-enum** Log level. Default is "never" for most commands and "warn" for "server start-dev". Accepted values: debug, info, warn, error, never. | `never` |
 | `--namespace`, `-n` | No | **string** Temporal Service Namespace. | `default` |
 | `--no-json-shorthand-payloads` | No | **bool** Raw payload output, even if the JSON option was used. | |
 | `--output`, `-o` | No | **string-enum** Non-logging data output format. Accepted values: text, json, jsonl, none. | `text` |
-| `--profile` | No | **string** Profile to use for config file. _(Experimental)_ | |
+| `--profile` | No | **string** Profile to use for config file. | |
 | `--time-format` | No | **string-enum** Time format. Accepted values: relative, iso, raw. | `relative` |
 | `--tls` | No | **bool** Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. | |
 | `--tls-ca-data` | No | **string** Data for server CA certificate. Can't be used with --tls-ca-path. | |
diff --git a/docs/cli/schedule.mdx b/docs/cli/schedule.mdx
index d1b1a1f9c2..277c97848e 100644
--- a/docs/cli/schedule.mdx
+++ b/docs/cli/schedule.mdx
@@ -316,19 +316,19 @@ The following options can be used with any command.
 | `--codec-header` | No | **string[]** HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. | |
 | `--color` | No | **string-enum** Output coloring. Accepted values: always, never, auto. | `auto` |
 | `--command-timeout` | No | **duration** The command execution timeout. 0s means no timeout. | |
-| `--config-file` | No | **string** File path to read TOML config from, defaults to `$CONFIG_PATH/temporalio/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, `$HOME/Library/Application Support` on macOS, and `%AppData%` on Windows. _(Experimental)_ | |
-| `--disable-config-env` | No | **bool** If set, disables loading environment config from environment variables. _(Experimental)_ | |
-| `--disable-config-file` | No | **bool** If set, disables loading environment config from config file. _(Experimental)_ | |
+| `--config-file` | No | **string** File path to read TOML config from, defaults to `$CONFIG_PATH/temporalio/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, `$HOME/Library/Application Support` on macOS, and `%AppData%` on Windows. | |
+| `--disable-config-env` | No | **bool** If set, disables loading environment config from environment variables. | |
+| `--disable-config-file` | No | **bool** If set, disables loading environment config from config file. | |
 | `--env` | No | **string** Active environment name (`ENV`). | `default` |
 | `--env-file` | No | **string** Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. | |
 | `--grpc-meta` | No | **string[]** HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. | |
 | `--identity` | No | **string** The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". | |
 | `--log-format` | No | **string-enum** Log format. Accepted values: text, json. | `text` |
-| `--log-level` | No | **string-enum** Log level. Default is "info" for most commands and "warn" for "server start-dev". Accepted values: debug, info, warn, error, never. | `info` |
+| `--log-level` | No | **string-enum** Log level. Default is "never" for most commands and "warn" for "server start-dev". Accepted values: debug, info, warn, error, never. | `never` |
 | `--namespace`, `-n` | No | **string** Temporal Service Namespace. | `default` |
 | `--no-json-shorthand-payloads` | No | **bool** Raw payload output, even if the JSON option was used. | |
 | `--output`, `-o` | No | **string-enum** Non-logging data output format. Accepted values: text, json, jsonl, none. | `text` |
-| `--profile` | No | **string** Profile to use for config file. _(Experimental)_ | |
+| `--profile` | No | **string** Profile to use for config file. | |
 | `--time-format` | No | **string-enum** Time format. Accepted values: relative, iso, raw. | `relative` |
 | `--tls` | No | **bool** Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. | |
 | `--tls-ca-data` | No | **string** Data for server CA certificate. Can't be used with --tls-ca-path. | |
diff --git a/docs/cli/server.mdx b/docs/cli/server.mdx
index 2524e7c052..1a07b5ba47 100644
--- a/docs/cli/server.mdx
+++ b/docs/cli/server.mdx
@@ -71,7 +71,7 @@ Use the following options to change the behavior of this command. You can also u
 | `--headless` | No | **bool** Disable the Web UI. |
 | `--http-port` | No | **int** Port for the HTTP API service. Defaults to a random free port. |
 | `--ip` | No | **string** IP address bound to the front-end Service. |
-| `--log-config` | No | **bool** Log the server config to stderr. |
+| `--log-config` | No | **bool** Print the server config to stderr. |
 | `--metrics-port` | No | **int** Port for the '/metrics' HTTP endpoint. Defaults to a random free port. |
 | `--namespace`, `-n` | No | **string[]** Namespaces to be created at launch. The "default" Namespace is always created automatically. |
 | `--port`, `-p` | No | **int** Port for the front-end gRPC Service. |
@@ -98,19 +98,19 @@ The following options can be used with any command.
 | `--codec-header` | No | **string[]** HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. | |
 | `--color` | No | **string-enum** Output coloring. Accepted values: always, never, auto. | `auto` |
 | `--command-timeout` | No | **duration** The command execution timeout. 0s means no timeout. | |
-| `--config-file` | No | **string** File path to read TOML config from, defaults to `$CONFIG_PATH/temporalio/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, `$HOME/Library/Application Support` on macOS, and `%AppData%` on Windows. _(Experimental)_ | |
-| `--disable-config-env` | No | **bool** If set, disables loading environment config from environment variables. _(Experimental)_ | |
-| `--disable-config-file` | No | **bool** If set, disables loading environment config from config file. _(Experimental)_ | |
+| `--config-file` | No | **string** File path to read TOML config from, defaults to `$CONFIG_PATH/temporalio/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, `$HOME/Library/Application Support` on macOS, and `%AppData%` on Windows. | |
+| `--disable-config-env` | No | **bool** If set, disables loading environment config from environment variables. | |
+| `--disable-config-file` | No | **bool** If set, disables loading environment config from config file. | |
 | `--env` | No | **string** Active environment name (`ENV`). | `default` |
 | `--env-file` | No | **string** Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. | |
 | `--grpc-meta` | No | **string[]** HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. | |
 | `--identity` | No | **string** The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". | |
 | `--log-format` | No | **string-enum** Log format. Accepted values: text, json. | `text` |
-| `--log-level` | No | **string-enum** Log level. Default is "info" for most commands and "warn" for "server start-dev". Accepted values: debug, info, warn, error, never. | `info` |
+| `--log-level` | No | **string-enum** Log level.
Default is "never" for most commands and "warn" for "server start-dev". Accepted values: debug, info, warn, error, never. | `never` |
| `--namespace`, `-n` | No | **string** Temporal Service Namespace. | `default` |
| `--no-json-shorthand-payloads` | No | **bool** Raw payload output, even if the JSON option was used. | |
| `--output`, `-o` | No | **string-enum** Non-logging data output format. Accepted values: text, json, jsonl, none. | `text` |
-| `--profile` | No | **string** Profile to use for config file. _(Experimental)_ | |
+| `--profile` | No | **string** Profile to use for config file. | |
| `--time-format` | No | **string-enum** Time format. Accepted values: relative, iso, raw. | `relative` |
| `--tls` | No | **bool** Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. | |
| `--tls-ca-data` | No | **string** Data for server CA certificate. Can't be used with --tls-ca-path. | |
diff --git a/docs/cli/task-queue.mdx b/docs/cli/task-queue.mdx
index dfb542a040..e03e1ee25a 100644
--- a/docs/cli/task-queue.mdx
+++ b/docs/cli/task-queue.mdx
@@ -69,7 +69,9 @@ temporal task-queue config set \
 --queue-rps-limit \
 --queue-rps-limit-reason \
 --fairness-key-rps-limit-default \
- --fairness-key-rps-limit-reason
+ --fairness-key-rps-limit-reason \
+ --fairness-key-weight HighPriority=2.0 \
+ --fairness-key-weight LowPriority=0.5
```

This command supports updating:

@@ -79,8 +81,12 @@ This command supports updating:

- Fairness key rate limit defaults: Sets default rate limits for fairness keys.
 If set, each individual fairness key will be limited to this rate, scaled by
 the weight of the fairness key.
+- Fairness key weight overrides: Set custom weights for specific fairness keys.
+ Weights control the relative share of capacity each key receives.
To unset a rate limit, pass in 'default', for example: --queue-rps-limit default
+To unset a specific fairness weight, pass key=default, for example: --fairness-key-weight HighPriority=default
+To unset all fairness weight overrides, use --fairness-key-weight-clear-all

Use the following options to change the behavior of this command. You can also use any of the [global flags](#global-flags) that apply to all subcommands.

@@ -88,6 +94,8 @@ Use the following options to change the behavior of this command. You can also u
|------|----------|-------------|
| `--fairness-key-rps-limit-default` | No | **float\|default** Fairness key rate limit default in requests per second. Accepts a float; or 'default' to unset. |
| `--fairness-key-rps-limit-reason` | No | **string** Reason for fairness key rate limit update. |
+| `--fairness-key-weight` | No | **string[]** Set or unset fairness key weight overrides in format key=weight or key=default. Use key=weight to set a positive weight value; use key=default to unset. Can be specified multiple times. Example: --fairness-key-weight HighPriority=2.0 --fairness-key-weight LowPriority=default. |
+| `--fairness-key-weight-clear-all` | No | **bool** Unset all fairness key weight overrides. Cannot be used with --fairness-key-weight. |
| `--queue-rps-limit` | No | **float\|default** Queue rate limit in requests per second. Accepts a float; or 'default' to unset. |
| `--queue-rps-limit-reason` | No | **string** Reason for queue rate limit update. |
| `--task-queue`, `-t` | Yes | **string** Task Queue name. |
@@ -685,19 +693,19 @@ The following options can be used with any command.
| `--codec-header` | No | **string[]** HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. | |
| `--color` | No | **string-enum** Output coloring. Accepted values: always, never, auto. | `auto` |
| `--command-timeout` | No | **duration** The command execution timeout. 0s means no timeout. 
| | -| `--config-file` | No | **string** File path to read TOML config from, defaults to `$CONFIG_PATH/temporalio/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, `$HOME/Library/Application Support` on macOS, and `%AppData%` on Windows. _(Experimental)_ | | -| `--disable-config-env` | No | **bool** If set, disables loading environment config from environment variables. _(Experimental)_ | | -| `--disable-config-file` | No | **bool** If set, disables loading environment config from config file. _(Experimental)_ | | +| `--config-file` | No | **string** File path to read TOML config from, defaults to `$CONFIG_PATH/temporalio/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, `$HOME/Library/Application Support` on macOS, and `%AppData%` on Windows. | | +| `--disable-config-env` | No | **bool** If set, disables loading environment config from environment variables. | | +| `--disable-config-file` | No | **bool** If set, disables loading environment config from config file. | | | `--env` | No | **string** Active environment name (`ENV`). | `default` | | `--env-file` | No | **string** Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. | | | `--grpc-meta` | No | **string[]** HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. | | | `--identity` | No | **string** The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". | | | `--log-format` | No | **string-enum** Log format. Accepted values: text, json. | `text` | -| `--log-level` | No | **string-enum** Log level. Default is "info" for most commands and "warn" for "server start-dev". Accepted values: debug, info, warn, error, never. | `info` | +| `--log-level` | No | **string-enum** Log level. 
Default is "never" for most commands and "warn" for "server start-dev". Accepted values: debug, info, warn, error, never. | `never` |
| `--namespace`, `-n` | No | **string** Temporal Service Namespace. | `default` |
| `--no-json-shorthand-payloads` | No | **bool** Raw payload output, even if the JSON option was used. | |
| `--output`, `-o` | No | **string-enum** Non-logging data output format. Accepted values: text, json, jsonl, none. | `text` |
-| `--profile` | No | **string** Profile to use for config file. _(Experimental)_ | |
+| `--profile` | No | **string** Profile to use for config file. | |
| `--time-format` | No | **string-enum** Time format. Accepted values: relative, iso, raw. | `relative` |
| `--tls` | No | **bool** Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. | |
| `--tls-ca-data` | No | **string** Data for server CA certificate. Can't be used with --tls-ca-path. | |
diff --git a/docs/cli/worker.mdx b/docs/cli/worker.mdx
index 57534bf374..849b7423ef 100644
--- a/docs/cli/worker.mdx
+++ b/docs/cli/worker.mdx
@@ -46,6 +46,72 @@ temporal worker deployment set-current-version \

Sets the current Deployment Version for a given Deployment.

+### create
+
+Create a new Worker Deployment:
+
+```
+temporal worker deployment create [options]
+```
+
+Worker Deployments are normally created lazily, the first time a Worker
+polls the Temporal Server and specifies a VersionOverride. However,
+`temporal worker deployment create-version` requires the named Worker
+Deployment to already exist. So, if you need to pre-define a compute
+configuration (for instance, to set up a serverless Worker), run
+`temporal worker deployment create` first; subsequent calls to
+`temporal worker deployment create-version` will then succeed.
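For example, to pre-define a Worker Deployment ahead of time so that a later `create-version` call can target it (the Deployment name below is a placeholder):

```
temporal worker deployment create \
  --name YourDeploymentName
```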
+ +If a Worker Deployment with the supplied name already exists, this +command will return an error. + +Note: This is an experimental feature and may change in the future. + +Use the following options to change the behavior of this command. You can also use any of the [global flags](#global-flags) that apply to all subcommands. + +| Flag | Required | Description | +|------|----------|-------------| +| `--name`, `-d` | Yes | **string** Name for a Worker Deployment. | + +### create-version + + +Create a new Worker Deployment Version: + +``` +temporal worker deployment create-version [options] +``` + +Configure a Worker Deployment Version's compute configuration as needed. +For example, pass compute provider information for an AWS Lambda function +that spawns a Worker in the Worker Deployment: + +``` +temporal worker deployment create-version \ + --namespace YourNamespaceName \ + --deployment-name YourDeploymentName \ + --build-id YourBuildID \ + --aws-lambda-function-arn LambdaFunctionARN \ + --aws-lambda-assume-role-arn LambdaAssumeRoleARN \ + --aws-lambda-assume-role-external-id LambdaAssumeRoleExternalID +``` + +If a Worker Deployment Version with the supplied BuildID already exists, +this command will return an error. + +Note: This is an experimental feature and may change in the future. + +Use the following options to change the behavior of this command. You can also use any of the [global flags](#global-flags) that apply to all subcommands. + +| Flag | Required | Description | +|------|----------|-------------| +| `--aws-lambda-assume-role-arn` | No | **string** AWS IAM role ARN that the Temporal server will assume when invoking the Lambda function that spawns a new Worker in this Worker Deployment Version. Required when --aws-lambda-function-arn is specified. 
| +| `--aws-lambda-assume-role-external-id` | No | **string** Temporal server will enforce that the AWS IAM trust policy associated with the AWS IAM role specified in --aws-lambda-assume-role-arn has an aws:ExternalId condition that matches the supplied value. Required when --aws-lambda-function-arn is specified. | +| `--aws-lambda-function-arn` | No | **string** Qualified (contains version suffix) or unqualified AWS Lambda function ARN to invoke when there are no active pollers for task queue targets in the Worker Deployment. | +| `--build-id` | Yes | **string** Build ID of the Worker Deployment Version. | +| `--deployment-name` | Yes | **string** Name of the Worker Deployment. | + ### delete Remove a Worker Deployment given its Deployment Name. @@ -448,19 +514,19 @@ The following options can be used with any command. | `--codec-header` | No | **string[]** HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. | | | `--color` | No | **string-enum** Output coloring. Accepted values: always, never, auto. | `auto` | | `--command-timeout` | No | **duration** The command execution timeout. 0s means no timeout. | | -| `--config-file` | No | **string** File path to read TOML config from, defaults to `$CONFIG_PATH/temporalio/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, `$HOME/Library/Application Support` on macOS, and `%AppData%` on Windows. _(Experimental)_ | | -| `--disable-config-env` | No | **bool** If set, disables loading environment config from environment variables. _(Experimental)_ | | -| `--disable-config-file` | No | **bool** If set, disables loading environment config from config file. _(Experimental)_ | | +| `--config-file` | No | **string** File path to read TOML config from, defaults to `$CONFIG_PATH/temporalio/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, `$HOME/Library/Application Support` on macOS, and `%AppData%` on Windows. 
| | +| `--disable-config-env` | No | **bool** If set, disables loading environment config from environment variables. | | +| `--disable-config-file` | No | **bool** If set, disables loading environment config from config file. | | | `--env` | No | **string** Active environment name (`ENV`). | `default` | | `--env-file` | No | **string** Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. | | | `--grpc-meta` | No | **string[]** HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. | | | `--identity` | No | **string** The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". | | | `--log-format` | No | **string-enum** Log format. Accepted values: text, json. | `text` | -| `--log-level` | No | **string-enum** Log level. Default is "info" for most commands and "warn" for "server start-dev". Accepted values: debug, info, warn, error, never. | `info` | +| `--log-level` | No | **string-enum** Log level. Default is "never" for most commands and "warn" for "server start-dev". Accepted values: debug, info, warn, error, never. | `never` | | `--namespace`, `-n` | No | **string** Temporal Service Namespace. | `default` | | `--no-json-shorthand-payloads` | No | **bool** Raw payload output, even if the JSON option was used. | | | `--output`, `-o` | No | **string-enum** Non-logging data output format. Accepted values: text, json, jsonl, none. | `text` | -| `--profile` | No | **string** Profile to use for config file. _(Experimental)_ | | +| `--profile` | No | **string** Profile to use for config file. | | | `--time-format` | No | **string-enum** Time format. Accepted values: relative, iso, raw. | `relative` | | `--tls` | No | **bool** Enable base TLS encryption. Does not have additional options like mTLS or client certs. 
This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. | | | `--tls-ca-data` | No | **string** Data for server CA certificate. Can't be used with --tls-ca-path. | | diff --git a/docs/cli/workflow.mdx b/docs/cli/workflow.mdx index f722ca6286..c6970a7472 100644 --- a/docs/cli/workflow.mdx +++ b/docs/cli/workflow.mdx @@ -436,6 +436,7 @@ Use the following options to change the behavior of this command. You can also u |------|----------|-------------| | `--detailed` | No | **bool** Display events as detailed sections instead of table. Does not apply to JSON output. | | `--follow`, `-f` | No | **bool** Follow the Workflow Execution progress in real time. Does not apply to JSON output. | +| `--reverse` | No | **bool** Fetch Event History newest-event-first. Cannot be combined with --follow. | | `--run-id`, `-r` | No | **string** Run ID. | | `--workflow-id`, `-w` | Yes | **string** Workflow ID. | @@ -883,19 +884,19 @@ The following options can be used with any command. | `--codec-header` | No | **string[]** HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. | | | `--color` | No | **string-enum** Output coloring. Accepted values: always, never, auto. | `auto` | | `--command-timeout` | No | **duration** The command execution timeout. 0s means no timeout. | | -| `--config-file` | No | **string** File path to read TOML config from, defaults to `$CONFIG_PATH/temporalio/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, `$HOME/Library/Application Support` on macOS, and `%AppData%` on Windows. _(Experimental)_ | | -| `--disable-config-env` | No | **bool** If set, disables loading environment config from environment variables. _(Experimental)_ | | -| `--disable-config-file` | No | **bool** If set, disables loading environment config from config file. 
_(Experimental)_ | | +| `--config-file` | No | **string** File path to read TOML config from, defaults to `$CONFIG_PATH/temporalio/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, `$HOME/Library/Application Support` on macOS, and `%AppData%` on Windows. | | +| `--disable-config-env` | No | **bool** If set, disables loading environment config from environment variables. | | +| `--disable-config-file` | No | **bool** If set, disables loading environment config from config file. | | | `--env` | No | **string** Active environment name (`ENV`). | `default` | | `--env-file` | No | **string** Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. | | | `--grpc-meta` | No | **string[]** HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. | | | `--identity` | No | **string** The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". | | | `--log-format` | No | **string-enum** Log format. Accepted values: text, json. | `text` | -| `--log-level` | No | **string-enum** Log level. Default is "info" for most commands and "warn" for "server start-dev". Accepted values: debug, info, warn, error, never. | `info` | +| `--log-level` | No | **string-enum** Log level. Default is "never" for most commands and "warn" for "server start-dev". Accepted values: debug, info, warn, error, never. | `never` | | `--namespace`, `-n` | No | **string** Temporal Service Namespace. | `default` | | `--no-json-shorthand-payloads` | No | **bool** Raw payload output, even if the JSON option was used. | | | `--output`, `-o` | No | **string-enum** Non-logging data output format. Accepted values: text, json, jsonl, none. | `text` | -| `--profile` | No | **string** Profile to use for config file. 
_(Experimental)_ | | +| `--profile` | No | **string** Profile to use for config file. | | | `--time-format` | No | **string-enum** Time format. Accepted values: relative, iso, raw. | `relative` | | `--tls` | No | **bool** Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. | | | `--tls-ca-data` | No | **string** Data for server CA certificate. Can't be used with --tls-ca-path. | | diff --git a/docs/cloud/audit-logs.mdx b/docs/cloud/audit-logs.mdx index 21c7ff3668..71cb029b08 100644 --- a/docs/cloud/audit-logs.mdx +++ b/docs/cloud/audit-logs.mdx @@ -83,13 +83,7 @@ Instead, explore the [Export](/cloud/export) feature, which does let you send cl :::info DEPRECATION NOTICE -The following fields are deprecated and are planned for removal on or after April 1 2026. - -- `user_email`. This field is duplicated by `principal.name` for principals of type `user`. Other principal types do not have associated emails. -- `level`. This field is duplicated by `status`. -- `caller_ip_address`. This field is replaced by `x_forwarded_for`. -- `details`. This field is replaced by `raw_details` that includes request details. -- `category`. This field is no longer used. +The `request_id` field is deprecated and is planned for removal on or after November 1 2026. Use `async_operation_id` instead. 
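If you consume Audit Logs programmatically, a minimal migration sketch is to prefer `async_operation_id` and fall back to the deprecated `request_id` while both are still emitted. The sample entry below is invented for illustration; field names follow the JSON format documented on this page.

```bash
# Hypothetical audit log entry (invented values).
entry='{"operation":"UpdateNamespace","status":"OK","async_operation_id":"op-123","request_id":"op-123"}'

# Prefer async_operation_id; fall back to the deprecated request_id if absent.
id=$(printf '%s' "$entry" | sed -n 's/.*"async_operation_id": *"\([^"]*\)".*/\1/p')
[ -n "$id" ] || id=$(printf '%s' "$entry" | sed -n 's/.*"request_id": *"\([^"]*\)".*/\1/p')
echo "$id"
```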
::: @@ -97,20 +91,16 @@ Audit Logs use the following JSON format: ```json { - "operation": // Operation that was performed - "principal": // Information about who initiated the operation - "details": // DEPRECATED, see raw_details - "raw_details": // details about the request - "user_email": // DEPRECATED, use principal.user where applicable - "x_forwarded_for": // the IP address making the call - "caller_ip_address": // DEPRECATED, use x_forwarded_for - "category": // DEPRECATED, no longer used - "emit_time": // Time the operation was recorded - "level": // DEPRECATED, use status - "log_id": // Unique ID of the log entry - "request_id": // Optional async request id set by the user when sending a request - "status": // Status, such as OK or ERROR - "version": // Version of the log entry + "operation": // Operation that was performed + "principal": // Information about who initiated the operation + "raw_details": // Details about the request + "x_forwarded_for": // The IP address(es) making the call + "emit_time": // Time the operation was recorded + "log_id": // Unique ID of the log entry + "async_operation_id": // Optional async operation id set by the user when sending a request + "request_id": // DEPRECATED, use async_operation_id + "status": // Status, such as OK or ERROR + "version": // Version of the log entry } ``` diff --git a/docs/cloud/connectivity/aws-connectivity.mdx b/docs/cloud/connectivity/aws-connectivity.mdx index 6f58e98cb6..380b0c1b03 100644 --- a/docs/cloud/connectivity/aws-connectivity.mdx +++ b/docs/cloud/connectivity/aws-connectivity.mdx @@ -108,14 +108,19 @@ This approach is **optional**; Temporal Cloud works without it. 
It simply stream
### Choose the override domain and endpoint

-| Temporal Cloud setup | Use this PHZ domain | Example |
-| ----------------------------------------- | ---------------------------------- | ----------------------------------------------- |
-| Single-region namespace with mTLS auth | `.tmprl.cloud` | `payments.abcde.tmprl.cloud` ↔ `vpce-...` |
-| Single-region namespace with API-key auth | `.api.temporal.io` | `us-east-1.aws.api.temporal.io` ↔ `vpce-...` |
-| Multi-region namespace | `region.tmprl.cloud` | `aws-us-east-1.region.tmprl.cloud` ↔ `vpce-...` |
+| Endpoint type | PHZ domain format | Example |
+| ------------------ | ---------------------------------- | -------------------------------------- |
+| Namespace endpoint | `<namespace>.<account>.tmprl.cloud` | `payments.abcde.tmprl.cloud` |
+| Regional endpoint | `<cloud>-<region>.region.tmprl.cloud` | `aws-ap-northeast-2.region.tmprl.cloud` |

### Step-by-step instructions

+:::warning Order matters
+
+A Route 53 private hosted zone with no records causes DNS resolution to fail (NXDOMAIN) inside any associated VPC. If you create an empty PHZ for `<namespace>.<account>.tmprl.cloud` and associate it with a VPC where Workers are running, **all Worker traffic to Temporal Cloud in that VPC stops** until you add the CNAME record. Follow the steps below in order to avoid this.
+
+:::
+
#### 1. Collect your PrivateLink endpoint DNS name

```bash
aws ec2 describe-vpc-endpoints \
@@ -128,15 +133,15 @@ aws ec2 describe-vpc-endpoints \
# vpce-0123456789abcdef-abc.us-east-1.vpce.amazonaws.com
```

-Save the **`vpce-*.amazonaws.com`** value -- you will target it in the CNAME record.
+Save the **`vpce-*.amazonaws.com`** value — you will target it in the CNAME record.

-#### 2. Create a Route 53 Private Hosted Zone
+#### 2. Create a Route 53 Private Hosted Zone (do not yet attach Worker VPCs)

-1. Open _Route 53 → Hosted zones → Create hosted zone_.
-2. Enter the domain chosen from the table above, e.g., `payments.abcde.tmprl.cloud`.
-3. Type: _Private hosted zone for Temporal Cloud_.
-4. 
Associate the hosted zone with every VPC that contains Temporal Workers and/or SDK clients. -5. Create hosted zone. +a. Open _Route 53 → Hosted zones → Create hosted zone_. +b. Enter the domain chosen from the table above, e.g., `payments.abcde.tmprl.cloud`. +c. Type: _Private hosted zone for Temporal Cloud_. +d. Leave VPC associations empty for now (you'll add them in step 4). +e. Create the hosted zone. #### 3. Add a CNAME record @@ -144,12 +149,22 @@ Inside the new PHZ: | Field | Value | | --------------- | ------------------------------------------------------------------------------------- | -| **Record name** | the namespace endpoint (e.g., `payments.abcde.tmprl.cloud`). | +| **Record name** | the Namespace Endpoint (e.g., `payments.abcde.tmprl.cloud`). | | **Record type** | `CNAME` | | **Value** | Your VPC Endpoint DNS name (`vpce-0123456789abcdef-abc.us-east-1.vpce.amazonaws.com`) | -| **TTL** | 60s is typical; 15s for MRN namespaces; adjust as needed. | +| **TTL** | 60s is typical; 15s for Namespaces with High Availability (to minimize recovery time after failover). | + +#### 4. Associate the PHZ with your Worker VPCs and verify -#### 4. Verify DNS resolution from inside the VPC +Now that the record exists, associate the PHZ with every VPC that contains Temporal Workers or SDK clients (Route 53 → your zone → _Edit settings_ → _Add VPC_). + +:::tip Test with a non-production VPC first + +We strongly recommend that you test with a non-production VPC first. Attach the PHZ to a non-production VPC, validate end-to-end resolution and connectivity from a host in that VPC, and only then attach production Worker VPCs. This catches misconfigured records before they affect production traffic. 
+ +::: + +Verify DNS resolution from inside one of the associated VPCs: ```bash dig payments.abcde.tmprl.cloud @@ -171,62 +186,11 @@ clientOptions := client.Options{ The DNS resolver inside your VPC returns the private endpoint, while TLS still validates the original hostname—simplifying both code and certificate management. -## Configure Private DNS for Multi-Region Namespaces - -:::tip Namespaces with High Availability features and AWS PrivateLink - -Proper networking configuration is required for failover to be transparent to clients and workers when using PrivateLink. -This page describes how to configure routing for Namespaces with High Availability features on AWS PrivateLink. - -::: - -To use AWS PrivateLink with High Availability features, you may need to: - -- Override the regional DNS zone. -- Ensure network connectivity between the two regions. - -This page provides the details you need to set this up. +## Configure private DNS for Namespaces with High Availability -### Customer side solutions +For Namespaces with [High Availability features](/cloud/high-availability), you need to override DNS for `region.tmprl.cloud` so each region resolves to the local VPC Endpoint, and you need to ensure Workers can reach whichever region is active. Failover is transparent to clients only when this is set up correctly. -When using PrivateLink, you connect to Temporal Cloud through a VPC Endpoint, which uses addresses local to your network. -Temporal treats each `region.` as a separate zone. -This setup allows you to override the default zone, ensuring that traffic is routed internally for the regions you’re using. - -A Namespace's active region is reflected in the target of a CNAME record. 
-For example, if the active region of a Namespace is AWS us-west-2, the DNS configuration would look like this: - -| Record name | Record type | Value | -| ----------------------------------- | ----------- | -------------------------------- | -| ha-namespace.account-id.tmprl.cloud | CNAME | aws-us-west-2.region.tmprl.cloud | - -After a failover, the CNAME record will be updated to point to the failover region, for example: - -| Record name | Record type | Value | -| ----------------------------------- | ----------- | -------------------------------- | -| ha-namespace.account-id.tmprl.cloud | CNAME | aws-us-east-1.region.tmprl.cloud | - -The Temporal domain did not change, but the CNAME updated from us-west-2 to us-east-1. - - - -### Setting up the DNS override - -To set up the DNS override, configure specific regions to target the internal VPC Endpoint IP addresses. -For example, you might set aws-us-west-1.region.tmprl.cloud to target 192.168.1.2. -In AWS, this can be done using a Route 53 private hosted zone for `region.tmprl.cloud`. -Link that private zone to the VPCs you use for Workers. - -When your Workers connect to the Namespace, they first resolve the `..` record. -This points to `.region.tmprl.cloud`, which then resolves to your internal IP addresses. - -Consider how you’ll configure Workers for this setup. -You can either have Workers run in both regions continuously or establish connectivity between regions using Transit Gateway or VPC Peering. -This way, Workers can access the newly activated region once failover occurs. +The complete guidance — including single-cloud (AWS-only) HA, multi-cloud HA (AWS PrivateLink + GCP Private Service Connect), and a recommended failover-testing plan — lives on a single page: [Connectivity for High Availability](/cloud/high-availability/ha-connectivity). 
## Direct VPCE targeting without per-Namespace DNS {#direct-vpce} @@ -248,6 +212,22 @@ For HA Namespaces, use [private DNS](#configuring-private-dns-for-aws-privatelin ::: +## Adding PrivateLink from additional AWS accounts + +A common pattern is to have separate AWS accounts for different lines of business, environments (staging, production), or compliance scopes (PCI vs non-PCI), each with its own VPC and Workers connecting to the same Temporal Cloud account. + +You can create as many AWS PrivateLink VPC endpoints as you need to the same Temporal Cloud regional service — there is nothing to register, approve, or open a ticket for on the Temporal side. + +For each additional AWS account or VPC: + +1. In that account, create the AWS PrivateLink VPC endpoint targeting the regional service name from the [regions table](#available-aws-regions-privatelink-endpoints-and-dns-record-overrides) — same as in the [creation steps](#creating-an-aws-privatelink-connection) above. +2. Configure DNS in that VPC. You have two options: + - Create a Route 53 Private Hosted Zone in that account scoped to the appropriate VPC(s), following the [private DNS steps](#configuring-private-dns-for-aws-privatelink) above. Each VPC's PHZ should point at the VPC Endpoint local to that VPC. + - Or, use [direct VPCE targeting](#direct-vpce) (single-region Namespaces only). +3. **Optional:** if you want to enforce private-only access for a Namespace, add a Connectivity Rule for each VPC endpoint and attach all of them (plus a public rule, if needed) to the Namespace. See [Connectivity Rules](/cloud/connectivity#connectivity-rules). + +There is no upper limit on the number of VPC endpoints you can connect from your side to a regional PrivateLink service. The default per-account limit on private Connectivity Rules is 50 — [contact support](/cloud/support#support-ticket) if you need to raise it. 
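The per-account steps above can be sketched with the AWS CLI. This is an illustrative sketch only: every resource ID below is a placeholder, and the real `--service-name` value comes from the regions table below.

```bash
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --vpc-endpoint-type Interface \
  --service-name <regional-privatelink-service-name> \
  --subnet-ids subnet-aaaa1111 subnet-bbbb2222 \
  --security-group-ids sg-0123456789abcdef0
```

Run this once per additional account or VPC, then configure DNS in that VPC as described above.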
+ ## Available AWS regions, PrivateLink endpoints, and DNS record overrides The following table lists the available Temporal regions, PrivateLink endpoints, and regional endpoints used for DNS record overrides: diff --git a/docs/cloud/connectivity/gcp-connectivity.mdx b/docs/cloud/connectivity/gcp-connectivity.mdx index fb6f4f9d01..fae379994c 100644 --- a/docs/cloud/connectivity/gcp-connectivity.mdx +++ b/docs/cloud/connectivity/gcp-connectivity.mdx @@ -73,21 +73,25 @@ Individual Namespaces do not use separate services. - For **IP address**, click the dropdown and select **Create IP address** to create an internal IP from your subnet dedicated to the endpoint. Select this IP. - Check **Enable global access** if you intend to connect the endpoint to virtual machines outside of the selected region. We recommend regional connectivity instead of global access, as it can be better in terms of latency for your workers. _**Note:** this requires the network routing mode to be set to **GLOBAL**._ -5. Click the **Add endpoint** button at the bottom of the screen. +5. Click the **Add endpoint** button at the bottom of the screen. The endpoint will appear with status **Pending**. This is expected — the next step is what flips it to **Accepted**. -6. [Create a Temporal Cloud Connectivity Rule](/cloud/connectivity#creating-a-connectivity-rule) using the Connection ID of the newly created endpoint and the corresponding GCP Project. +6. [Create a Temporal Cloud Connectivity Rule](/cloud/connectivity#creating-a-connectivity-rule) using the Connection ID of the newly created endpoint and the corresponding GCP project. Use the **Connection ID** from the endpoint's detail page in the Google Cloud console (a numeric string such as `1234567890123456789`). -7. Once the status is "Accepted", the GCP Private Service Connect endpoint is ready for use. +7. Once the status changes from "Pending" to "Accepted", the GCP Private Service Connect endpoint is ready for use. 
-:::tip Connectivity Rule required +:::warning PSC stays "Pending" until you create a Connectivity Rule -If your Private Service Connect connection status is not becoming "Active", verify that you have [created a Connectivity Rule](/cloud/connectivity#creating-a-connectivity-rule). -Connectivity Rules are mandatory for GCP Private Service Connect connections. -The connection will not become active without one. +For GCP Private Service Connect, the Connectivity Rule is what tells Temporal Cloud to accept your PSC connection. Until you [create a Connectivity Rule](/cloud/connectivity#creating-a-connectivity-rule) for the connection, the endpoint will remain in **Pending**. There is no separate producer-side approval step — creating the Connectivity Rule is the approval. + +If your endpoint is stuck Pending, the most common causes are: + +- No Connectivity Rule exists for the connection ID. (Most common.) +- The Connectivity Rule was created with the wrong `connection-id`, `region`, or `gcp-project-id`. +- The endpoint is in a region that is not a [supported Temporal Cloud region](/cloud/regions). ::: -- Take note of the **IP address** that has been assigned to your endpoint, as it will be used to connect to Temporal Cloud. +- Take note of the **IP address** assigned to your endpoint — you will use it to connect to Temporal Cloud. :::caution You still need to set up private DNS or override client configuration for your clients to actually use the new Private Service Connect connection to connect to Temporal Cloud. 
diff --git a/docs/cloud/connectivity/index.mdx b/docs/cloud/connectivity/index.mdx index 78ae0fd3e5..df17e245c8 100644 --- a/docs/cloud/connectivity/index.mdx +++ b/docs/cloud/connectivity/index.mdx @@ -23,7 +23,7 @@ import { LANGUAGE_TAB_GROUP, getLanguageLabel } from '@site/src/constants/langua ## Private network connectivity for namespaces -Temporal Cloud supports private connectivity to namespaces via AWS PrivateLink or GCP Private Services Connect in addition to the default internet endpoints. +Temporal Cloud supports private connectivity to Namespaces via AWS PrivateLink or GCP Private Service Connect, in addition to the default public internet endpoints. Namespace access is always securely authenticated via [API keys](/cloud/api-keys#overview) or [mTLS](/cloud/certificates), regardless of how you choose to connect. @@ -31,13 +31,13 @@ For information about IP address stability and allowlisting, see [IP addresses]( ### Required steps -To use private connectivity with Temporal Cloud: +Setting up private connectivity is a three-step process — and it's important to understand that **private connectivity** (the network path) and **Connectivity Rules** (Temporal's enforcement layer) are related but separate concepts: -1. Set up the private connection from your VPC to the region where your Temporal namespace is located. -1. Update your private DNS and/or worker configuration to use the private connection. -1. (Required to complete Google PSC setup, optional if using AWS PrivateLink): create a connectivity rule for the private connection and attach it to the target namespace(s). This will block all access to the namespace that is not over the private connection, but you can also add a public rule to also allow internet connectivity. +1. **Set up the private connection** from your VPC to the region where your Temporal Namespace is located. +1. **Update your private DNS and/or client configuration** to actually use the private connection. 
Activating private connectivity does not change your Namespace Endpoint or Regional Endpoint automatically — clients keep resolving the public addresses until you do this step. +1. **(GCP PSC: required. AWS PrivateLink: optional.) Create a Connectivity Rule** for the private connection and attach it to the target Namespace(s). This blocks all access to the Namespace that does not arrive over a configured connection. You can mix private and public rules to also allow internet connectivity. -For steps 1 and 2, follow our guides for the target namespace's cloud provider: +For steps 1 and 2, follow the guide for your Namespace's cloud provider: - [AWS PrivateLink](/cloud/connectivity/aws-connectivity) creation and private DNS setup - [Google Cloud Private Service Connect](/cloud/connectivity/gcp-connectivity) creation and private DNS setup @@ -47,22 +47,16 @@ After creating a private connection, you must set up private DNS or update the c We recommend using private DNS. -Without this step, your clients may connect to the namespace over the internet if they were previously using public connectivity, or they will not be able to connect at all. +Without this step, your clients may connect to the Namespace over the internet if they were previously using public connectivity, or they will not be able to connect at all. If that's not an option for you, refer to [our guide for updating the server and TLS settings on your clients](/cloud/connectivity#update-dns-or-clients-to-use-private-connectivity). ::: -For step 3, keep reading for details on [connectivity rules](/cloud/connectivity#connectivity-rules). +For step 3, keep reading for details on [Connectivity Rules](/cloud/connectivity#connectivity-rules). ## Connectivity rules -:::tip Support, stability, and dependency info - -Connectivity rules are currently in [public preview](/evaluate/development-production-features/release-stages#public-preview). 
- -::: - :::info Web UI Connectivity The Temporal Cloud Web UI is not currently subject to connectivity rule enforcement. @@ -72,28 +66,41 @@ Even if a namespace is configured with private connectivity rules, the Web UI fo ### Definition -Connectivity rules are Temporal Cloud's mechanism for limiting the network access paths that can be used to access a namespace. +Connectivity Rules are Temporal Cloud's mechanism for restricting the network paths that can reach a Namespace. They are enforced by Temporal Cloud — they do not create or modify the underlying network connection. + +By default, a Namespace has zero Connectivity Rules and is reachable over (1) the public internet and (2) any private connections you've already configured to the region containing the Namespace. Namespace access is always securely authenticated via [API keys](/cloud/api-keys#overview) or [mTLS](/cloud/certificates), regardless of Connectivity Rules. + +When you attach one or more Connectivity Rules to a Namespace, Temporal Cloud immediately blocks any traffic that does not match a rule on that Namespace. A Namespace can have multiple Connectivity Rules, and you can mix public and private rules. + +Each Connectivity Rule specifies either generic public (internet) access or a specific private connection. -By default, a namespace has zero connectivity rules, and is accessible from 1. the public internet and 2. all private connections you've configured to the region containing the namespace. Namespace access is always securely authenticated via [API keys](/cloud/api-keys#overview) or [mTLS](/cloud/certificates), regardless of connectivity rules. +#### When you need a Connectivity Rule -When you attach one or more connectivity rules to a namespace, Temporal Cloud will immediately block all traffic that does not have a corresponding connectivity rule from accessing the namespace. One namespace can have multiple connectivity rules, and may mix both public and private rules. 
+| Provider | Connectivity Rule for private access | Why | +| -------- | ------------------------------------ | --- | +| AWS PrivateLink | **Optional.** Add one only if you want to enforce private-only access (block internet traffic to that Namespace). | AWS PrivateLink connections become usable as soon as the VPC endpoint is `Available`. Adding a Connectivity Rule restricts access; it does not establish it. | +| GCP Private Service Connect | **Required.** The PSC endpoint stays in `Pending` until a matching Connectivity Rule is created. | The Connectivity Rule is what tells Temporal Cloud to accept the PSC connection. | -Each connectivity rule specifies either generic public (i.e. internet) access or a specific private connection. +A public Connectivity Rule takes no parameters. -A public connectivity rule takes no parameters. +An AWS PrivateLink (PL) private Connectivity Rule requires: -An AWS PrivateLink (PL) private connectivity rule requires the following parameters: +- `connection-id`: The **VPC endpoint identifier** of the PL connection — the `vpce-…` value from your AWS account, *not* the endpoint service or DNS name (ex: `vpce-00939a7ed9EXAMPLE`). +- `region`: The region of the PL connection, prefixed with `aws-` (ex: `aws-us-east-1`). Must be the same region as the Namespace. Refer to the [Temporal Cloud region list](/cloud/regions) for supported regions. -- `connection-id`: The VPC endpoint ID of the PL connection (ex: `vpce-00939a7ed9EXAMPLE`) -- `region`: The region of the PL connection, prefixed with aws (ex: `aws-us-east-1`). Must be the same region as the namespace. Refer to the [Temporal Cloud region list](/cloud/regions) for supported regions. +A GCP Private Service Connect (PSC) private Connectivity Rule requires: -A GCP Private Service Connect (PSC) private connectivity rule requires the following parameters: +- `connection-id`: The **PSC connection identifier** of the endpoint (ex: `1234567890123456789`). 
Find it on the endpoint's detail page in the Google Cloud console. +- `region`: The region of the PSC connection, prefixed with `gcp-` (ex: `gcp-us-east1`). Must be the same region as the Namespace. Refer to the [Temporal Cloud region list](/cloud/regions) for supported regions. +- `gcp-project-id`: The identifier of the GCP project where you created the PSC connection (ex: `my-example-project-123`). -- `connection-id`: The ID of the PSC connection (ex: `1234567890123456789`) -- `region`: The region of the PSC connection, prefixed with gcp (ex: `gcp-us-east1`). Must be the same region as the namespace. Refer to the [Temporal Cloud region list](/cloud/regions) for supported regions. -- `gcp-project-id`: The ID of the GCP project where you created the PSC connection (ex: `my-example-project-123`) +Connectivity Rules can be created and managed with [tcld](https://docs.temporal.io/cloud/tcld/), [Terraform](https://github.com/temporalio/terraform-provider-temporalcloud/), the Web UI (under **Connectivity** in your account settings), or the [Cloud Ops API](/ops). -Connectivity rules can be created and managed with [tcld](https://docs.temporal.io/cloud/tcld/), [Terraform](https://github.com/temporalio/terraform-provider-temporalcloud/), or the [Cloud Ops API](/ops) +:::tip Connectivity Rules give Temporal visibility into your private connections + +Without a Connectivity Rule, Temporal Cloud has no record that your PrivateLink or PSC endpoint exists. If you open a support ticket about a private-connectivity issue, having a Connectivity Rule attached to the affected Namespace lets us correlate the connection on our side and is the fastest path to debugging. 
+ +::: ### Permissions and limits @@ -200,15 +207,25 @@ tcld connectivity-rule list -n "my-namespace.abc123" ## Update DNS or clients to use private connectivity -We strongly recommend using private DNS instead of updating client server and TLS settings: +We strongly recommend using private DNS instead of updating client server and TLS settings: -- [How to set up private DNS in AWS](/cloud/connectivity/aws-connectivity#configuring-private-dns-for-aws-privatelink) +- [How to set up private DNS in AWS](/cloud/connectivity/aws-connectivity#configuring-private-dns-for-aws-privatelink) - [How to set up private DNS in GCP](/cloud/connectivity/gcp-connectivity#configuring-private-dns-for-gcp-private-service-connect) If you are unable to configure private DNS, you must update two settings in your Temporal clients: -1. Set the endpoint server address to the PrivateLink or Private Services Connect endpoint (e.g. `vpce-0123456789abcdef-abc.us-east-1.vpce.amazonaws.com:7233` or `:7233`) -2. Set TLS configuration to override the TLS server name (e.g., my-namespace.my-account.tmprl.cloud) +1. Set the endpoint server address to the PrivateLink or Private Service Connect endpoint (e.g. `vpce-0123456789abcdef-abc.us-east-1.vpce.amazonaws.com:7233` or `:7233`). +2. Set TLS configuration to override the TLS server name (the Namespace Endpoint, e.g., `my-namespace.my-account.tmprl.cloud`). + +The TLS server name override depends on your authentication method: + +| Authentication | TLS server name to use | +| -------------- | ---------------------- | +| mTLS (single-region Namespace) | The Namespace Endpoint, e.g. `my-namespace.my-account.tmprl.cloud` | +| API key (single-region Namespace) | The regional API endpoint, e.g. `us-east-1.aws.api.temporal.io` or `us-central1.gcp.api.temporal.io` | +| Multi-region Namespace (mTLS or API key) | The active region endpoint, e.g. 
`aws-us-east-1.region.tmprl.cloud` | + +If you authenticate with an API key over PrivateLink/PSC and use the wrong server name, the TLS handshake will fail with errors such as `connection reset by peer` even though `nc` reports the port as open. Updating these settings depends on the client you're using. diff --git a/docs/cloud/high-availability/ha-connectivity.mdx b/docs/cloud/high-availability/ha-connectivity.mdx index 1646a86ea2..3d12931827 100644 --- a/docs/cloud/high-availability/ha-connectivity.mdx +++ b/docs/cloud/high-availability/ha-connectivity.mdx @@ -8,38 +8,53 @@ description: How to use private network connectivity with Temporal Cloud HA feat import { CaptionedImage, JsonTable } from '@site/src/components'; -:::tip Namespaces with High Availability features and AWS PrivateLink +:::tip Namespaces with High Availability features and private connectivity -Proper networking configuration is required for failover to be transparent to clients and workers when using PrivateLink. -This page describes how to configure routing for Namespaces with High Availability features on AWS PrivateLink. +Proper networking configuration is required for failover to be transparent to clients and Workers when using AWS PrivateLink or GCP Private Service Connect. + +This page covers single-cloud HA (both replicas on AWS, or both on GCP) and multi-cloud HA (one replica on AWS, one on GCP). ::: -To use AWS PrivateLink with High Availability features, you may need to: +These instructions assume you already have the private connections in place. If not, follow the [AWS PrivateLink](/cloud/connectivity/aws-connectivity) or [GCP Private Service Connect](/cloud/connectivity/gcp-connectivity) creation guides first. + +## How HA + private connectivity works + +A Namespace with High Availability features has two replicas — a primary and a secondary, in different regions or different cloud providers. At any moment, one is **active** and one is **passive**. 
On failover, Temporal Cloud changes the active replica. + +Temporal Cloud expresses the active replica through DNS: + +- The Namespace DNS record (`..tmprl.cloud`) is a CNAME. +- It points to the active region's regional record (`-.region.tmprl.cloud`). +- On failover, Temporal Cloud rewrites the CNAME target. + +Namespace DNS records have a 15-second TTL. Clients should converge to the new region within roughly 30 seconds (about twice the TTL) once their resolver cache expires. + +For private connectivity, your job is to make sure that: -- Override the regional DNS zone. -- Ensure network connectivity between the two regions. +1. Both regions resolve to the correct private endpoint inside your network — not the public internet. +2. Your Workers have a network path to whichever region becomes active. -These instructions assume you already have the PrivateLink connections in place. If not, follow our [guide for creating AWS PrivateLink connections and configuring private DNS](/cloud/connectivity/aws-connectivity). +## Single-cloud HA on AWS PrivateLink -## Customer side solutions +This is the most common setup: both replicas live in AWS regions, and Workers connect via AWS PrivateLink. When using PrivateLink, you connect to Temporal Cloud through a VPC Endpoint, which uses addresses local to your network. -Temporal treats each `region.` as a separate zone. -This setup allows you to override the default zone, ensuring that traffic is routed internally for the regions you’re using. +Temporal treats each `region.tmprl.cloud` zone as a separate zone, so you override resolution per region. -A Namespace's active region is reflected in the target of a CNAME record. 
-For example, if the active region of a Namespace is AWS us-west-2, the DNS configuration would look like this: +Before failover, with the active region being `aws-us-west-2`: -| ha-namespace.account-id.tmprl.cloud | CNAME | aws-us-west-2.region.tmprl.cloud | -| ----------------------------------- | ----- | -------------------------------- | +| Record name | Record type | Value | +| ----------------------------------- | ----------- | -------------------------------- | +| ha-namespace.account-id.tmprl.cloud | CNAME | aws-us-west-2.region.tmprl.cloud | -After a failover, the CNAME record will be updated to point to the failover region, for example: +After a failover to `aws-us-east-1`, Temporal Cloud rewrites the CNAME: -| ha-namespace.account-id.tmprl.cloud | CNAME | aws-us-east-1.region.tmprl.cloud | -| ----------------------------------- | ----- | -------------------------------- | +| Record name | Record type | Value | +| ----------------------------------- | ----------- | -------------------------------- | +| ha-namespace.account-id.tmprl.cloud | CNAME | aws-us-east-1.region.tmprl.cloud | -The Temporal domain did not change, but the CNAME updated from us-west-2 to us-east-1. +The Temporal-managed CNAME changed from us-west-2 to us-east-1 — your private DNS does not need to change. -## Setting up the DNS override +### Setting up the DNS override (AWS) -:::caution +In AWS, use a Route 53 private hosted zone for `region.tmprl.cloud` to override resolution per region: + +| Record name | Record type | Value (your VPC Endpoint DNS) | +| ------------------------------------ | ----------- | ------------------------------------------------------------ | +| `aws-us-west-2.region.tmprl.cloud` | CNAME | `vpce-...-us-west-2.vpce.amazonaws.com` | +| `aws-us-east-1.region.tmprl.cloud` | CNAME | `vpce-...-us-east-1.vpce.amazonaws.com` | + +Link the private zone to every VPC where Workers run. 
+ +When your Workers connect to the Namespace, they first resolve `..tmprl.cloud`, which CNAMEs to `.region.tmprl.cloud`, which then resolves to your local VPC Endpoint. + +You also need to decide how Workers reach whichever region becomes active. Either: + +- Run Workers in **both** regions continuously (recommended), or +- Establish cross-region connectivity (Transit Gateway, VPC Peering) so Workers in one region can reach the VPC Endpoint in the other. + +## Single-cloud HA on GCP Private Service Connect -Private connectivity is not yet offered for GCP Multi-region Namespaces. +For GCP-only HA, the same model applies, but use a Cloud DNS private zone for `region.tmprl.cloud` and point each `gcp-.region.tmprl.cloud` record at the local PSC endpoint IP address. + +| Record name | Record type | Value (your PSC endpoint IP) | +| ---------------------------------------- | ----------- | ----------------------------------- | +| `gcp-us-central1.region.tmprl.cloud` | A | `10.x.x.x` (PSC endpoint IP) | +| `gcp-us-east1.region.tmprl.cloud` | A | `10.x.x.x` (PSC endpoint IP) | + +A Connectivity Rule is required for each PSC connection — see [GCP PSC setup](/cloud/connectivity/gcp-connectivity) and [Connectivity Rules](/cloud/connectivity#connectivity-rules). + +## Multi-cloud HA (AWS PrivateLink + GCP Private Service Connect) + +If your replicas span clouds — for example, AWS `us-east-1` (active) and GCP `us-east4` (passive) — your Workers need a way to reach the active replica regardless of which cloud it's in. The Temporal-managed CNAME rewrites still work the same way; the harder problems are on the client side. + +Plan for these three things: + +1. **DNS overrides for both clouds.** Your private DNS for `region.tmprl.cloud` needs entries for both the AWS region (CNAME → AWS VPCE) and the GCP region (A → PSC IP). 
This typically means a Route 53 private hosted zone in your AWS Worker VPCs *and* a Cloud DNS private zone in your GCP Worker network — both for the same `region.tmprl.cloud` parent — each with the records relevant to the cloud the Workers run in.
+2. **Worker reachability across clouds.** Your AWS-resident Workers must be able to reach the GCP PSC endpoint when GCP is active, and vice versa. Options include:
+   - Run Workers in both clouds (preferred — simplest, lowest latency, matches the failover model).
+   - Establish cross-cloud connectivity (e.g., AWS Transit Gateway + GCP Cloud Interconnect, or a third-party transit) so Workers in one cloud can resolve and reach the other cloud's private endpoint.
+3. **Connectivity Rules in both regions.** GCP PSC requires a Connectivity Rule. AWS PrivateLink does not, but if you want to enforce private-only access, add one for the AWS side as well so the Namespace is private-only in both regions.
+
+:::caution Alpine/musl + GCP PSC: missing AAAA records can break Workers
+
+GCP Private Service Connect endpoints return only A (IPv4) records — there is no AAAA (IPv6) record. Most Linux distributions tolerate this, but when the AAAA query fails, **Alpine Linux's musl resolver fails the entire lookup** rather than falling back to the A record, which can cause Temporal SDK clients to fail name resolution after a failover from AWS to GCP.
+
+If you run Workers on Alpine and use multi-cloud HA, either:
+
+- Switch the Worker base image to a glibc-based distribution (Debian, Ubuntu, distroless), or
+- Configure your application/runtime to avoid AAAA lookups (e.g., force Go's built-in resolver with `GODEBUG=netdns=go`, or prefer IPv4 in the Java/Node/Python runtimes you use).
-Link that private zone to the VPCs you use for Workers. +## Test failover before you depend on it + +Failover is the only thing High Availability features exist to do — and DNS, cross-region or cross-cloud reachability, and Connectivity Rule coverage are exactly the kinds of configuration that look correct on paper and break under failover. Test it in a non-production Namespace first. -When your Workers connect to the Namespace, they first resolve the `..` record. -This points to `.region.tmprl.cloud`, which then resolves to your internal IP addresses. +A reasonable validation plan: -Consider how you’ll configure Workers for this setup. -You can either have Workers run in both regions continuously or establish connectivity between regions using Transit Gateway or VPC Peering. -This way, Workers can access the newly activated region once failover occurs. +1. Set up the HA Namespace and the private connectivity for both regions, including all DNS overrides. +2. Run Workers continuously in **both** regions (or arrange cross-region connectivity). +3. Trigger a manual failover from the Web UI or `tcld` and verify: + - DNS for `..tmprl.cloud` resolves to the new region within ~30 seconds. + - Workers in both regions are picking up tasks. + - SDK clients connect successfully (no `Name resolution failed`, `connection reset by peer`, or `context deadline exceeded` errors). +4. Trigger a failback to the original region and verify the same. +5. For multi-cloud HA, repeat with each cloud as the active replica, including from base images (Alpine, distroless) you actually use in production. + +If a real failover finds a configuration gap that wasn't tested, recovery typically requires changes on the client side that are hard to make under pressure. 
## Available regions, PrivateLink endpoints, and DNS record overrides @@ -75,16 +139,16 @@ The `sa-east-1` region is not yet available for use with Multi-region Namespaces ::: -The following table lists the available Temporal regions, PrivateLink endpoints, and DNS record overrides: +The following tables list the available Temporal regions and the DNS record overrides used for HA + private connectivity: + +### AWS regions and PrivateLink endpoints +### GCP regions and Private Service Connect endpoints + + -When using a Namespace with High Availability features, the Namespace's DNS record `..` points to a regional DNS record in the format `.region.`. -Here, `` is the currently active region for your Namespace. +When using a Namespace with High Availability features, the Namespace's DNS record `..tmprl.cloud` points to a regional DNS record in the format `-.region.tmprl.cloud`, where `-` is the currently active region for your Namespace. -During failover, Temporal Cloud changes the target of the Namespace DNS record from one region to another. -Namespace DNS records are configured with a 15 second TTL. -Any DNS cache should re-resolve the record within this time. -As a rule of thumb, receiving an updated DNS record takes about twice (2x) the TTL. -Clients should converge to the newly targeted region within, at most, a 30-second delay. +During failover, Temporal Cloud changes the target of the Namespace DNS record from one region to another. Namespace DNS records are configured with a 15-second TTL. Any DNS cache should re-resolve the record within this time. As a rule of thumb, receiving an updated DNS record takes about twice (2x) the TTL — clients should converge to the newly targeted region within, at most, a 30-second delay, assuming their resolver and language runtime honor the TTL. 
diff --git a/docs/cloud/worker-health.mdx b/docs/cloud/worker-health.mdx index b20e41d59f..46203a1a34 100644 --- a/docs/cloud/worker-health.mdx +++ b/docs/cloud/worker-health.mdx @@ -400,33 +400,17 @@ Set it to a negative value to disable heartbeating. #### Enable host resource reporting By default, the Go SDK reports `0` for CPU and memory usage in Worker heartbeats. -To enable host resource reporting, provide a `SysInfoProvider` when creating your Worker. -You must use a resource based tuner to enable host resource reporting. - -The SDK includes a [gopsutil](https://github.com/shirou/gopsutil)-based implementation via the [sysinfo](https://pkg.go.dev/go.temporal.io/sdk/contrib/sysinfo) library that supports cgroup metrics in containerized Linux environments: +Set `SysInfoProvider` on [`worker.Options`](https://pkg.go.dev/go.temporal.io/sdk/worker#Options) to enable host resource reporting. +Host resource reporting is not included in the core SDK module. Add the [sysinfo](https://pkg.go.dev/go.temporal.io/sdk/contrib/sysinfo) contrib package to your imports - it provides a [gopsutil](https://github.com/shirou/gopsutil)-based implementation that supports cgroup metrics in containerized Linux environments: ```go import ( - "log" - - "go.temporal.io/sdk/client" - "go.temporal.io/sdk/contrib/envconfig" - "go.temporal.io/sdk/contrib/sysinfo" - "go.temporal.io/sdk/worker" + "go.temporal.io/sdk/contrib/sysinfo" + "go.temporal.io/sdk/worker" ) -c, err := client.Dial(envconfig.MustLoadDefaultClientOptions()) -if err != nil { - log.Fatalln("Unable to create client", err) -} -defer c.Close() -tuner, err := worker.NewResourceBasedTuner(worker.ResourceBasedTunerOptions{ - TargetMem: 0.8, - TargetCpu: 0.9, - InfoSupplier: sysinfo.SysInfoProvider(), -}) w := worker.New(c, "my-task-queue", worker.Options{ - Tuner: tuner, + SysInfoProvider: sysinfo.SysInfoProvider(), }) ``` @@ -456,6 +440,14 @@ Set the `WorkerHeartbeatInterval` property on [`TemporalRuntimeOptions`](https:/ Set it 
to `null` to disable heartbeating. + + +_Available since Java SDK v1.35.0_ + +Set the heartbeat interval on [`WorkflowClientOptions.Builder`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/client/WorkflowClientOptions.Builder.html) with `setWorkerHeartbeatInterval(Duration)`. +Set it to a negative `Duration` to disable heartbeating. + + _Available since Ruby SDK v1.1.0_ diff --git a/docs/develop/dotnet/activities/standalone-activities.mdx b/docs/develop/dotnet/activities/standalone-activities.mdx index a2ba735056..76521c6684 100644 --- a/docs/develop/dotnet/activities/standalone-activities.mdx +++ b/docs/develop/dotnet/activities/standalone-activities.mdx @@ -228,7 +228,7 @@ You can pass the Activity as either a lambda expression or a string Activity typ // Using a lambda expression (type-safe) var result = await client.ExecuteActivityAsync( () => MyActivities.ComposeGreetingAsync(new ComposeGreetingInput("Hello", "World")), - new("my-activity-id", "my-task-queue") + new("standalone-activity-id", "standalone-activity-sample") { ScheduleToCloseTimeout = TimeSpan.FromSeconds(10), }); @@ -237,7 +237,7 @@ var result = await client.ExecuteActivityAsync( var result = await client.ExecuteActivityAsync( "ComposeGreeting", new object?[] { new ComposeGreetingInput("Hello", "World") }, - new("my-activity-id", "my-task-queue") + new("standalone-activity-id", "standalone-activity-sample") { ScheduleToCloseTimeout = TimeSpan.FromSeconds(10), }); diff --git a/docs/develop/dotnet/nexus/feature-guide.mdx b/docs/develop/dotnet/nexus/feature-guide.mdx index 1ad628afca..6e887c0e0a 100644 --- a/docs/develop/dotnet/nexus/feature-guide.mdx +++ b/docs/develop/dotnet/nexus/feature-guide.mdx @@ -172,6 +172,32 @@ A common pattern is to use the Temporal Client from within a sync handler to Sig You can also use Signal-With-Start or Update-With-Start to ensure the Workflow is started and send it a Signal or Update. 
All calls must complete within the [Nexus request timeout](/cloud/limits#nexus-operation-request-timeout). Updates should be short-lived to stay within this deadline.
+The [nexus_messaging](https://github.com/temporalio/samples-dotnet/tree/main/src/NexusMessaging) sample shows how to create a Nexus Service that uses synchronous operations to send Updates and Queries:
+
+Use `NexusOperationExecutionContext`, as shown below, to access the Client that the Worker was initialized with. In this example, the `WorkflowIdForUser` method derives the Workflow Id from the user Id the client passes in, so the client only needs the identifier it cares about.
+
+[NexusMessaging/CallerPattern/Handler/NexusGreetingService.cs](https://github.com/temporalio/samples-dotnet/tree/main/src/NexusMessaging/CallerPattern/Handler/NexusGreetingService.cs)
+
+```csharp
+private static string WorkflowIdForUser(string userId) => $"GreetingWorkflow_for_{userId}";
+
+[NexusOperationHandler]
+public IOperationHandler GetLanguages() =>
+    OperationHandler.Sync(
+        async (ctx, input) =>
+        {
+            // Access the Temporal client from the Nexus operation context
+            var client = NexusOperationExecutionContext.Current.TemporalClient;
+            var handle = client.GetWorkflowHandle(WorkflowIdForUser(input.UserId));
+            return await handle.QueryAsync(wf => wf.QueryLanguages(input.IncludeUnsupported));
+        });
+    ...
+```
+
+There are two examples of messaging through Nexus in the sample code: the [caller pattern](https://github.com/temporalio/samples-dotnet/tree/main/src/NexusMessaging/CallerPattern) and the [on-demand pattern](https://github.com/temporalio/samples-dotnet/tree/main/src/NexusMessaging/OnDemandPattern).
+The caller pattern shows how to send messages to an existing Workflow, while the on-demand pattern shows how to start a Workflow through Nexus and then send Signals to it.
+
 ### Develop an Asynchronous Nexus Operation handler to start a Workflow
 
 Use the `WorkflowRunOperationHandler.FromHandleFactory` method, which is the easiest way to expose a Workflow as an operation.
diff --git a/docs/develop/dotnet/workflows/basics.mdx b/docs/develop/dotnet/workflows/basics.mdx
index 2045fc9d03..9c77541090 100644
--- a/docs/develop/dotnet/workflows/basics.mdx
+++ b/docs/develop/dotnet/workflows/basics.mdx
@@ -44,11 +44,9 @@ All Workflow Definition parameters must be serializable.
 
 ## Workflow logic requirements {#workflow-logic-requirements}
 
-Workflow logic is constrained by [deterministic execution requirements](/workflow-definition#deterministic-constraints).
-Therefore, each language is limited to the use of certain idiomatic techniques.
-However, each Temporal SDK provides a set of APIs that can be used inside your Workflow to interact with external (to the Workflow) application code.
+Workflow logic is constrained by [deterministic execution requirements](/workflow-definition#deterministic-constraints). Each Temporal SDK provides a set of APIs that can be used inside your Workflow to interact with application code outside the Workflow.
 
-This means there are several things Workflows cannot do such as:
+This means there are several things Workflows shouldn't do, such as:
 
 - Perform IO (network, disk, stdio, etc)
 - Access/alter external mutable state
-- Do not use `Task.ConfigureAwait(false)` - this will not use the current context. - - If you must use `Task.ConfigureAwait`, use `Task.ConfigureAwait(true)`. - - There is no significant performance benefit to `Task.ConfigureAwait` in workflows anyways due to how the scheduler works. -- Do not use anything that defaults to the default task scheduler. -- Do not use `Task.Delay`, `Task.Wait`, timeout-based `CancellationTokenSource`, or anything that uses .NET built-in timers. - - `Workflow.DelayAsync`, `Workflow.WaitConditionAsync`, or non-timeout-based cancellation token source is suggested. -- Do not use `Task.WhenAny`. - - Use `Workflow.WhenAnyAsync` instead. +- Use `Workflow.RunTaskAsync` instead of `Task.Run`. `Task.Run` uses the default scheduler and puts work on the thread pool. + - You can also use `Task.Factory.StartNew` with current scheduler or instantiate the `Task` and run `Task.Start` on it. +- If you need to use `Task.ConfigureAwait`, use `Task.ConfigureAwait(true)`. `Task.ConfigureAwait(false)` won't use the current context. + - There is no significant performance benefit to `Task.ConfigureAwait` in workflows because of how the scheduler works. +- Avoid anything that defaults to the default task scheduler. +- Use `Workflow.DelayAsync`, `Workflow.WaitConditionAsync`, or non-timeout-based cancellation token sources instead of `Task.Delay`, `Task.Wait`, timeout-based `CancellationTokenSource`, or anything that uses .NET built-in timers. +- Use `Workflow.WhenAnyAsync` instead of `Task.WhenAny`. - Technically this only applies to an enumerable set of tasks with results or more than 2 tasks with results. Other uses are safe. See [this issue](https://github.com/dotnet/runtime/issues/87481). -- Do not use `Task.WhenAll` - - Use `Workflow.WhenAllAsync` instead. +- Use `Workflow.WhenAllAsync` instead of `Task.WhenAll`. - Technically `Task.WhenAll` is currently deterministic in .NET and safe, but it is better to use the wrapper to be sure. 
-- Do not use `CancellationTokenSource.CancelAsync`.
-  - Use `CancellationTokenSource.Cancel` instead.
-- Do not use `System.Threading.Semaphore` or `System.Threading.SemaphoreSlim` or `System.Threading.Mutex`.
-  - Use `Temporalio.Workflows.Semaphore` or `Temporalio.Workflows.Mutex` instead.
+- Use `CancellationTokenSource.Cancel` instead of `CancellationTokenSource.CancelAsync`.
+- Use `Temporalio.Workflows.Semaphore` or `Temporalio.Workflows.Mutex` instead of `System.Threading.Semaphore`, `System.Threading.SemaphoreSlim`, or `System.Threading.Mutex`.
   - _Technically_ `SemaphoreSlim` does work if only the async form of `WaitAsync` is used without timeouts and `Release` is used. But anything else can deadlock the workflow, and its use is cumbersome since it must be disposed.
 - Be wary of additional libraries' implicit use of the default scheduler.

@@ -93,6 +84,7 @@ Here are some known gotchas to avoid with .NET tasks inside of Workflows:

 In order to help catch wrong scheduler use, by default the Temporal .NET SDK adds an event source listener for info-level task events.
 While this technically receives events from all uses of tasks in the process, we make sure to ignore anything that is not running in a Workflow in a highly performant way (essentially a single thread-local check).
+
 For code that does run in a Workflow and accidentally starts a task in another scheduler, an `InvalidWorkflowOperationException` will be thrown which "pauses" the Workflow (fails the Workflow Task which continually retries until the code is fixed).
 This is unfortunately a runtime-only check, but can help catch mistakes early.
 If this needs to be turned off for any reason, set `DisableWorkflowTracingEventListener` to `true` in Worker options.
diff --git a/docs/develop/dotnet/workflows/versioning.mdx b/docs/develop/dotnet/workflows/versioning.mdx
index b154c7ab23..cf13f90481 100644
--- a/docs/develop/dotnet/workflows/versioning.mdx
+++ b/docs/develop/dotnet/workflows/versioning.mdx
@@ -31,8 +31,8 @@ import { CaptionedImage } from '@site/src/components';
 
 Since Workflow Executions in Temporal can run for long periods — sometimes months or even years — it's common to need to make changes to a Workflow Definition, even while a particular Workflow Execution is in progress.
 
-The Temporal Platform requires that Workflow code is [deterministic](/workflow-definition#deterministic-constraints).
-If you make a change to your Workflow code that would cause non-deterministic behavior on Replay, you'll need to use one of our Versioning methods to gracefully update your running Workflows.
+The Temporal Platform requires that Workflow code is [deterministic](/workflow-definition#deterministic-constraints). If you make a change to your Workflow code that would cause non-deterministic behavior on Replay, you'll need to use one of our Versioning methods to gracefully update your running Workflows. This only applies to Workflow orchestration logic. Non-deterministic work such as API calls and database queries should be placed in Activities, which Temporal retries reliably.
+
 With Versioning, you can modify your Workflow Definition so that new executions use the updated code, while existing ones continue running the original version.
There are two primary Versioning methods that you can use:
diff --git a/docs/develop/go/nexus/feature-guide.mdx b/docs/develop/go/nexus/feature-guide.mdx
index 6543342254..4388e9a6c9 100644
--- a/docs/develop/go/nexus/feature-guide.mdx
+++ b/docs/develop/go/nexus/feature-guide.mdx
@@ -174,6 +174,27 @@ All calls must complete within the [Nexus request timeout](/cloud/limits#nexus-o
 The ctx provided to the handler is automatically set with this deadline, so passing it directly to Temporal Client calls will correctly propagate the timeout.
 Updates should be short-lived to stay within this deadline.
 
+The [nexus_messaging](https://github.com/temporalio/samples-go/tree/main/nexus-messaging) sample shows how to create a Nexus Service that uses synchronous operations to send Updates and Queries.
+
+Use the Nexus library, as shown below, to get the Client that the Worker was initialized with. In this example, the Workflow Id is derived from the client Id with the `GetWorkflowID` method. This method takes the identifier the client already knows (here, a user Id) and derives the Workflow Id from it, so the client never needs to know the Workflow Id itself.
+
+[nexus-messaging/callerpattern/handler/app.go](https://github.com/temporalio/samples-go/blob/main/nexus-messaging/callerpattern/handler/app.go)
+
+```go
+func GetWorkflowID(userID string) string {
+	return WorkflowIDPrefix + userID
+}
+
+var GetLanguagesOperation = nexus.NewSyncOperation(service.GetLanguagesOperationName, func(ctx context.Context, input service.GetLanguagesInput, options nexus.StartOperationOptions) (service.GetLanguagesOutput, error) {
+	c := temporalnexus.GetClient(ctx)
+	workflowID := GetWorkflowID(input.UserID)
+	...
+```
+
+There are two examples of messaging through Nexus in the sample code: the [caller pattern](https://github.com/temporalio/samples-go/tree/main/nexus-messaging/callerpattern/) and the [on-demand pattern](https://github.com/temporalio/samples-go/tree/main/nexus-messaging/ondemandpattern/).
+The caller pattern shows how to send messages to an existing Workflow, while the on-demand pattern shows how to start a Workflow through Nexus and then send Signals to it.
+
 ### Develop an Asynchronous Nexus Operation handler to start a Workflow
 
 Use the `NewWorkflowRunOperation` constructor, which is the easiest way to expose a Workflow as an operation.
diff --git a/docs/develop/go/workflows/basics.mdx b/docs/develop/go/workflows/basics.mdx
index 46202dabed..f266e529da 100644
--- a/docs/develop/go/workflows/basics.mdx
+++ b/docs/develop/go/workflows/basics.mdx
@@ -183,9 +183,7 @@ func main() {
 
 ### How to develop Workflow logic {#workflow-logic-requirements}
 
-Workflow logic is constrained by [deterministic execution requirements](/workflow-definition#deterministic-constraints).
-Therefore, each language is limited to the use of certain idiomatic techniques.
-However, each Temporal SDK provides a set of APIs that can be used inside your Workflow to interact with external (to the Workflow) application code.
+Workflow logic is constrained by [deterministic execution requirements](/workflow-definition#deterministic-constraints). Each Temporal SDK provides a set of APIs that can be used inside your Workflow to interact with application code outside the Workflow.
In Go, Workflow Definition code cannot directly do the following:
diff --git a/docs/develop/go/workflows/versioning.mdx b/docs/develop/go/workflows/versioning.mdx
index f6ed674944..f90e12a20a 100644
--- a/docs/develop/go/workflows/versioning.mdx
+++ b/docs/develop/go/workflows/versioning.mdx
@@ -20,8 +20,8 @@ tags:
 
 Since Workflow Executions in Temporal can run for long periods — sometimes months or even years — it's common to need to make changes to a Workflow Definition, even while a particular Workflow Execution is in progress.
 
-The Temporal Platform requires that Workflow code is [deterministic](/workflow-definition#deterministic-constraints).
-If you make a change to your Workflow code that would cause non-deterministic behavior on Replay, you'll need to use one of our Versioning methods to gracefully update your running Workflows.
+The Temporal Platform requires that Workflow code is [deterministic](/workflow-definition#deterministic-constraints). If you make a change to your Workflow code that would cause non-deterministic behavior on Replay, you'll need to use one of our Versioning methods to gracefully update your running Workflows. This only applies to Workflow orchestration logic. Non-deterministic work such as API calls and database queries should be placed in Activities, which Temporal retries reliably.
+
 With Versioning, you can modify your Workflow Definition so that new executions use the updated code, while existing ones continue running the original version.
There are two primary Versioning methods that you can use: diff --git a/docs/develop/index.mdx b/docs/develop/index.mdx index 37ebc77f56..4196222a1c 100644 --- a/docs/develop/index.mdx +++ b/docs/develop/index.mdx @@ -18,3 +18,4 @@ The Temporal SDK developer guides provide a comprehensive overview of the struct - TypeScript SDK [developer guide](/develop/typescript) and [API reference](https://typescript.temporal.io) - .NET SDK [developer guide](/develop/dotnet) and [API reference](https://dotnet.temporal.io/) - Ruby SDK [developer guide](/develop/ruby) and [API reference](https://ruby.temporal.io/) +- Rust SDK [developer guide](/develop/rust) and [API reference](https://docs.rs/temporalio-sdk/latest/temporalio_sdk/) diff --git a/docs/develop/java/activities/index.mdx b/docs/develop/java/activities/index.mdx index ab1ff54e6c..fb971e14d4 100644 --- a/docs/develop/java/activities/index.mdx +++ b/docs/develop/java/activities/index.mdx @@ -19,6 +19,7 @@ import * as Components from '@site/src/components'; - [Activity basics](/develop/java/activities/basics) - [Activity execution](/develop/java/activities/execution) +- [Standalone Activities](/develop/java/activities/standalone-activities) - [Timeouts](/develop/java/activities/timeouts) - [Asynchronous Activity Completion](/develop/java/activities/asynchronous-activity) - [Benign exceptions](/develop/java/activities/benign-exceptions) diff --git a/docs/develop/java/activities/standalone-activities.mdx b/docs/develop/java/activities/standalone-activities.mdx new file mode 100644 index 0000000000..6f8646eefe --- /dev/null +++ b/docs/develop/java/activities/standalone-activities.mdx @@ -0,0 +1,442 @@ +--- +id: standalone-activities +title: Standalone Activities - Java SDK +sidebar_label: Standalone Activities +toc_max_heading_level: 4 +keywords: + - standalone activity + - activity execution + - execute activity + - activity handle + - list activities + - count activities + - java sdk +tags: + - Activities + - Temporal 
Client + - Java SDK + - Temporal SDKs +description: Execute Activities independently without a Workflow using the Temporal Java SDK. +--- + +:::tip SUPPORT, STABILITY, and DEPENDENCY INFO + +Temporal Java SDK support for [Standalone Activities](/standalone-activity) is at +[Pre-release](/evaluate/development-production-features/release-stages#pre-release). + +::: + +Standalone Activities are Activities that run independently, without being orchestrated by a +Workflow. Instead of starting an Activity from within a Workflow Definition, you start a Standalone +Activity directly from a Temporal Client using `ActivityClient`. + +The way you write the Activity and register it with a Worker is identical to [Workflow +Activities](/develop/java/activities/basics). The only difference is that you execute a Standalone +Activity directly from your Temporal Client. + +This page covers the following: + +- [Get Started with Standalone Activities](#get-started) +- [Define your Activity](#define-activity) +- [Run a Worker with the Activity registered](#run-worker) +- [Execute a Standalone Activity](#execute-activity) +- [Start a Standalone Activity without waiting for the result](#start-activity) +- [Get a handle to an existing Standalone Activity](#get-activity-handle) +- [Wait for the result of a Standalone Activity](#get-activity-result) +- [List Standalone Activities](#list-activities) +- [Count Standalone Activities](#count-activities) +- [Run Standalone Activities with Temporal Cloud](#run-standalone-activities-temporal-cloud) + +:::note + +This documentation uses source code from the +[standaloneactivities](https://github.com/temporalio/samples-java/blob/main/core/src/main/java/io/temporal/samples/standaloneactivities) +sample. + +::: + +## Get Started with Standalone Activities {#get-started} + +Prerequisites: + +- **Java** 8+ + +- **Temporal Java SDK** (v1.35.0 or higher). See the [Java Quickstart](/develop/java/set-up-your-local-java) for + install instructions. 
+
+- **Temporal CLI** v1.7.0 or higher
+
+  Install with Homebrew:
+
+  ```bash
+  brew install temporal
+  ```
+
+  Or see the [Temporal CLI install guide](/cli#install) for other platforms.
+
+  Verify the installation:
+
+  ```bash
+  temporal --version
+  ```
+
+Start the Temporal development server:
+
+```
+temporal server start-dev
+```
+
+This command automatically starts the Temporal development server with the Web UI, and creates the `default` Namespace.
+It uses an in-memory database, so it is not suitable for production use.
+
+:::info Temporal Cloud
+
+All code samples on this page use
+[`ClientConfigProfile.load()`](https://www.javadoc.io/doc/io.temporal/temporal-envconfig/latest/io/temporal/envconfig/ClientConfigProfile.html)
+to configure the Temporal Client connection. It reads [environment
+variables](/references/client-environment-configuration) and [TOML configuration
+files](/references/client-environment-configuration), so the same code works against a local dev
+server and Temporal Cloud without changes. See [Run Standalone Activities with Temporal
+Cloud](#run-standalone-activities-temporal-cloud) below.
+
+:::
+
+The Temporal Server will now be available for client connections on `localhost:7233`, and the
+Temporal Web UI will now be accessible at [http://localhost:8233](http://localhost:8233).
Standalone +Activities are available from the nav bar item located towards the top left of the page: + +Standalone Activities Web UI nav bar item + +Clone the [samples-java](https://github.com/temporalio/samples-java) repository to follow along: + +```bash +git clone https://github.com/temporalio/samples-java.git +cd samples-java +``` + +The sample consists of separate programs in the `standaloneactivities` package: + +``` +core/src/main/java/io/temporal/samples/standaloneactivities/ +├── GreetingActivities.java # Activity interface +├── GreetingActivitiesImpl.java # Activity implementation +├── StandaloneActivityWorker.java # Worker that processes activity tasks +├── ExecuteActivity.java # Starts an activity and waits for the result +├── StartActivity.java # Starts an activity without blocking +├── ListActivities.java # Lists activity executions +└── CountActivities.java # Counts activity executions +``` + +## Define your Activity {#define-activity} + +An Activity in the Temporal Java SDK is an interface annotated with `@ActivityInterface`, with +methods annotated with `@ActivityMethod`. The way you define a Standalone Activity is identical to +how you define an Activity orchestrated by a Workflow. In fact, the same Activity can be executed +both as a Standalone Activity and as a Workflow Activity. 
+ +[GreetingActivities.java](https://github.com/temporalio/samples-java/blob/main/core/src/main/java/io/temporal/samples/standaloneactivities/GreetingActivities.java) + +```java +@ActivityInterface +public interface GreetingActivities { + + @ActivityMethod + String composeGreeting(String greeting, String name); +} +``` + +[GreetingActivitiesImpl.java](https://github.com/temporalio/samples-java/blob/main/core/src/main/java/io/temporal/samples/standaloneactivities/GreetingActivitiesImpl.java) + +```java +public class GreetingActivitiesImpl implements GreetingActivities { + + private static final Logger log = LoggerFactory.getLogger(GreetingActivitiesImpl.class); + + @Override + public String composeGreeting(String greeting, String name) { + log.info("Composing greeting..."); + return greeting + ", " + name + "!"; + } +} +``` + +## Run a Worker with the Activity registered {#run-worker} + +Running a Worker for Standalone Activities is the same as running a Worker for Workflow Activities — +you create a `WorkerFactory`, register the Activity implementation, and call `factory.start()`. The +Worker doesn't need to know whether the Activity will be invoked from a Workflow or as a Standalone +Activity. See [How to run a Worker](/develop/java/workers/run-worker-process) for more details on +Worker setup and configuration options. 
+ +[StandaloneActivityWorker.java](https://github.com/temporalio/samples-java/blob/main/core/src/main/java/io/temporal/samples/standaloneactivities/StandaloneActivityWorker.java) + +```java +ClientConfigProfile profile = ClientConfigProfile.load(); +WorkflowServiceStubs service = + WorkflowServiceStubs.newServiceStubs(profile.toWorkflowServiceStubsOptions()); + +WorkflowClient client = WorkflowClient.newInstance(service, profile.toWorkflowClientOptions()); +WorkerFactory factory = WorkerFactory.newInstance(client); +Worker worker = factory.newWorker(TASK_QUEUE); +worker.registerActivitiesImplementations(new GreetingActivitiesImpl()); +factory.start(); +System.out.println("Worker running on task queue: " + TASK_QUEUE); +``` + +Open a new terminal, navigate to the `samples-java` directory, and run the Worker: + +```bash +./gradlew -q execute -PmainClass=io.temporal.samples.standaloneactivities.StandaloneActivityWorker +``` + +Leave this terminal running — the Worker needs to stay up to process activities. + +## Execute a Standalone Activity {#execute-activity} + +Use +[`ActivityClient.execute()`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/client/ActivityClient.html) +to execute a Standalone Activity and block until it completes. Call this from your application code, +not from inside a Workflow Definition. This durably enqueues your Standalone Activity in the Temporal +Server, waits for it to be executed on your Worker, and then returns the typed result. 
+ +[ExecuteActivity.java](https://github.com/temporalio/samples-java/blob/main/core/src/main/java/io/temporal/samples/standaloneactivities/ExecuteActivity.java) + +```java +ActivityClient client = + ActivityClient.newInstance( + service, + ActivityClientOptions.newBuilder().setNamespace(profile.getNamespace()).build()); + +StartActivityOptions options = + StartActivityOptions.newBuilder() + .setId(ACTIVITY_ID) + .setTaskQueue(TASK_QUEUE) + .setStartToCloseTimeout(Duration.ofSeconds(10)) + .build(); + +String result = + client.execute( + GreetingActivities.class, + GreetingActivities::composeGreeting, + options, + "Hello", + "World"); +System.out.println("Activity result: " + result); +``` + +The typed `execute()` API takes the Activity interface class and an unbound method reference. The SDK +uses the method reference to infer the Activity type name and result type at runtime. You can also +call Activities by string type name: + +```java +// Using a string type name +String result = client.execute("ComposeGreeting", String.class, options, "Hello", "World"); +``` + +`StartActivityOptions` requires `id`, `taskQueue`, and at least one of `startToCloseTimeout` or +`scheduleToCloseTimeout`. + +To run it: + +1. Make sure the Temporal Server is running (from the [Get Started](#get-started) step above). +2. Make sure the Worker is running (from the [Run a Worker](#run-worker) step above). +3. 
Open a new terminal, navigate to the `samples-java` directory, and run:
+
+```bash
+./gradlew -q execute -PmainClass=io.temporal.samples.standaloneactivities.ExecuteActivity
+```
+
+Or use the Temporal CLI:
+
+```bash
+temporal activity execute \
+  --type ComposeGreeting \
+  --activity-id standalone-activity-id \
+  --task-queue standalone-activity-task-queue \
+  --start-to-close-timeout 10s \
+  --input '"Hello"' \
+  --input '"World"'
+```
+
+## Start a Standalone Activity without waiting for the result {#start-activity}
+
+Starting a Standalone Activity means sending a request to the Temporal Server to durably enqueue
+your Activity job, without waiting for it to be executed by your Worker.
+
+Use
+[`ActivityClient.start()`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/client/ActivityClient.html)
+to start a Standalone Activity and get a handle without waiting for the result:
+
+[StartActivity.java](https://github.com/temporalio/samples-java/blob/main/core/src/main/java/io/temporal/samples/standaloneactivities/StartActivity.java)
+
+```java
+ActivityHandle<String> handle =
+    client.start(
+        GreetingActivities.class,
+        GreetingActivities::composeGreeting,
+        options,
+        "Hello",
+        "World");
+System.out.println("Started activity ID: " + ACTIVITY_ID);
+
+// Wait for the result later
+String result = handle.getResult();
+System.out.println("Activity result: " + result);
+```
+
+With the Temporal Server and Worker running, open a new terminal in the `samples-java` directory and
+run:
+
+```bash
+./gradlew -q execute -PmainClass=io.temporal.samples.standaloneactivities.StartActivity
+```
+
+Or use the Temporal CLI:
+
+```bash
+temporal activity start \
+  --type ComposeGreeting \
+  --activity-id standalone-activity-id \
+  --task-queue standalone-activity-task-queue \
+  --start-to-close-timeout 10s \
+  --input '"Hello"' \
+  --input '"World"'
+```
+
+## Get a handle to an existing Standalone Activity {#get-activity-handle}
+
+Use `client.getHandle()` to
create a typed handle to a previously started Standalone Activity:
+
+```java
+ActivityHandle<String> handle =
+    client.getHandle("standalone-activity-id", null, String.class);
+```
+
+Pass `null` as the run ID to target the latest run of the given activity ID. You can then use the
+handle to wait for the result, describe, cancel, or terminate the Activity.
+
+## Wait for the result of a Standalone Activity {#get-activity-result}
+
+Under the hood, calling `client.execute()` is the same as calling `client.start()` to durably
+enqueue the Standalone Activity, and then calling `handle.getResult()` to block until the Activity
+completes and return the result:
+
+```java
+String result = handle.getResult();
+```
+
+To wait asynchronously without blocking the calling thread, use `handle.getResultAsync()`, which
+returns a `CompletableFuture`:
+
+```java
+CompletableFuture<String> future = handle.getResultAsync();
+```
+
+Or use the Temporal CLI to wait for a result by Activity ID:
+
+```bash
+temporal activity result --activity-id standalone-activity-id
+```
+
+## List Standalone Activities {#list-activities}
+
+Use
+[`client.listExecutions()`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/client/ActivityClient.html)
+to list Standalone Activity Executions that match a [List Filter](/list-filter) query. The result is
+a `Stream` that fetches pages from the server on demand as the stream is
+consumed.
+
+These APIs return only Standalone Activity Executions. Activities running inside Workflows are not
+included.
+
+[ListActivities.java](https://github.com/temporalio/samples-java/blob/main/core/src/main/java/io/temporal/samples/standaloneactivities/ListActivities.java)
+
+```java
+client
+    .listExecutions("TaskQueue = '" + TASK_QUEUE + "'")
+    .forEach(
+        info ->
+            System.out.printf(
+                "ActivityID: %s, Type: %s, Status: %s%n",
+                info.getActivityId(), info.getActivityType(), info.getStatus()));
+```
+
+Run it:
+
+```bash
+./gradlew -q execute -PmainClass=io.temporal.samples.standaloneactivities.ListActivities
+```
+
+Or use the Temporal CLI:
+
+```bash
+temporal activity list
+```
+
+The query parameter accepts the same [List Filter](/list-filter) syntax used for [Workflow
+Visibility](/visibility). For example, `ActivityType = 'ComposeGreeting' AND Status = 'Running'`.
+
+## Count Standalone Activities {#count-activities}
+
+Use
+[`client.countExecutions()`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/client/ActivityClient.html)
+to count Standalone Activity Executions that match a [List Filter](/list-filter) query. This returns
+the total count of executions (running, completed, failed, etc.) — not the number of queued tasks.
+It works the same way as counting Workflow Executions.
+
+[CountActivities.java](https://github.com/temporalio/samples-java/blob/main/core/src/main/java/io/temporal/samples/standaloneactivities/CountActivities.java)
+
+```java
+ActivityExecutionCount resp = client.countExecutions("TaskQueue = '" + TASK_QUEUE + "'");
+System.out.println("Total activities: " + resp.getCount());
+resp.getGroups()
+    .forEach(
+        group ->
+            System.out.println("Group " + group.getGroupValues() + ": " + group.getCount()));
+```
+
+Run it:
+
+```bash
+./gradlew -q execute -PmainClass=io.temporal.samples.standaloneactivities.CountActivities
+```
+
+Or use the Temporal CLI:
+
+```bash
+temporal activity count
+```
+
+## Run Standalone Activities with Temporal Cloud {#run-standalone-activities-temporal-cloud}
+
+The code samples on this page use `ClientConfigProfile.load()`, so the same code works against
+Temporal Cloud — just configure the connection via environment variables or a TOML profile. No code
+changes are needed.
+
+For a step-by-step guide on connecting to Temporal Cloud, including Namespace creation, certificate
+generation, and authentication setup in the Cloud UI, see
+[Connect to Temporal Cloud](/develop/java/client/temporal-client#connect-to-temporal-cloud).
+
+### Connect with mTLS
+
+Set these environment variables with values from your Temporal Cloud Namespace settings:
+
+```
+export TEMPORAL_ADDRESS=<namespace>.<account-id>.tmprl.cloud:7233
+export TEMPORAL_NAMESPACE=<namespace>.<account-id>
+export TEMPORAL_TLS_CLIENT_CERT_PATH='path/to/your/client.pem'
+export TEMPORAL_TLS_CLIENT_KEY_PATH='path/to/your/client.key'
+```
+
+### Connect with an API key
+
+Set these environment variables with values from your Temporal Cloud API key settings:
+
+```
+export TEMPORAL_ADDRESS=<region>.<cloud-provider>.api.temporal.io:7233
+export TEMPORAL_NAMESPACE=<namespace>.<account-id>
+export TEMPORAL_API_KEY=<api-key>
+```
+
+Then run the Worker and starter code as shown in the earlier sections.
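The List Filter strings passed to `listExecutions()` and `countExecutions()` above are built by plain string concatenation, which makes it easy to drop a quote. As a minimal sketch, a small helper (hypothetical, not part of the Temporal SDK) can keep the quoting consistent; it assumes attribute values contain no single quotes:

```java
// Hypothetical helper for composing List Filter strings; not part of the Temporal SDK.
public final class ListFilters {

    // Builds a single equality clause, e.g. TaskQueue = 'my-queue'.
    // Assumes the value contains no single quotes.
    public static String eq(String attribute, String value) {
        return attribute + " = '" + value + "'";
    }

    // Joins clauses with AND, matching the List Filter syntax shown above.
    public static String and(String... clauses) {
        return String.join(" AND ", clauses);
    }

    public static void main(String[] args) {
        String query = and(
                eq("TaskQueue", "standalone-activity-task-queue"),
                eq("ActivityType", "ComposeGreeting"));
        System.out.println(query);
        // TaskQueue = 'standalone-activity-task-queue' AND ActivityType = 'ComposeGreeting'
    }
}
```

The resulting string can then be passed to `client.listExecutions(query)` or `client.countExecutions(query)` exactly like the inline literals in the samples above.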
diff --git a/docs/develop/java/index.mdx b/docs/develop/java/index.mdx
index 119c8b397f..46f73981be 100644
--- a/docs/develop/java/index.mdx
+++ b/docs/develop/java/index.mdx
@@ -47,6 +47,7 @@ From there, you can dive deeper into any of the Temporal primitives to start bui
 
 - [Activity basics](/develop/java/activities/basics)
 - [Activity execution](/develop/java/activities/execution)
+- [Standalone Activities](/develop/java/activities/standalone-activities)
 - [Timeouts](/develop/java/activities/timeouts)
 - [Asynchronous Activity Completion](/develop/java/activities/asynchronous-activity)
 - [Benign exceptions](/develop/java/activities/benign-exceptions)
diff --git a/docs/develop/java/nexus/feature-guide.mdx b/docs/develop/java/nexus/feature-guide.mdx
index 031506e919..2f9bc01a3f 100644
--- a/docs/develop/java/nexus/feature-guide.mdx
+++ b/docs/develop/java/nexus/feature-guide.mdx
@@ -250,6 +250,30 @@ can also use Signal-With-Start or Update-With-Start to ensure the Workflow is st
 
 All calls must complete within the [Nexus request timeout](/cloud/limits#nexus-operation-request-timeout). Updates should be short-lived to stay within this deadline.
 
+The [nexus_messaging](https://github.com/temporalio/samples-java/tree/main/core/src/main/java/io/temporal/samples/nexusmessaging) sample shows how to create a Nexus Service that uses synchronous operations to send Updates and Queries.
+
+Use the Nexus library, as shown below, to get the Client that the Worker was initialized with. In this example, the Workflow Id is derived from the client Id with the `getWorkflowId` method. This method takes the identifier the client already knows (here, a user Id) and derives the Workflow Id from it, so the client never needs to know the Workflow Id itself.
+
+[nexusmessaging/callerpattern/handler/NexusGreetingServiceImpl.java](https://github.com/temporalio/samples-java/blob/main/core/src/main/java/io/temporal/samples/nexusmessaging/callerpattern/handler/NexusGreetingServiceImpl.java)
+
+```java
+static final String WORKFLOW_ID_PREFIX = "GreetingWorkflow_for_";
+
+public static String getWorkflowId(String userId) {
+  return WORKFLOW_ID_PREFIX + userId;
+}
+
+private GreetingWorkflow getWorkflowStub(String userId) {
+  return Nexus.getOperationContext()
+      .getWorkflowClient()
+      .newWorkflowStub(GreetingWorkflow.class, getWorkflowId(userId));
+}
+...
+```
+
+There are two examples of messaging through Nexus in the sample code: the [caller pattern](https://github.com/temporalio/samples-java/tree/main/core/src/main/java/io/temporal/samples/nexusmessaging/callerpattern/) and the [on-demand pattern](https://github.com/temporalio/samples-java/tree/main/core/src/main/java/io/temporal/samples/nexusmessaging/ondemandpattern/).
+The caller pattern shows how to send messages to an existing Workflow, while the on-demand pattern shows how to start a Workflow through Nexus and then send Signals to it.
+
 ### Develop an Asynchronous Nexus Operation handler to start a Workflow
 
 Use the `WorkflowRunOperation.fromWorkflowMethod` method, which is the easiest way to expose a Workflow as an operation.
diff --git a/docs/develop/java/workflows/basics.mdx b/docs/develop/java/workflows/basics.mdx
index 2aacfaf6ca..bc5b23b705 100644
--- a/docs/develop/java/workflows/basics.mdx
+++ b/docs/develop/java/workflows/basics.mdx
@@ -211,34 +211,23 @@ When you set the Workflow Type this way, the value of the `name` parameter does
 
 ## Workflow logic requirements {#workflow-logic-requirements}
 
-Workflow logic is constrained by [deterministic execution requirements](/workflow-definition#deterministic-constraints).
-Therefore, each language is limited to the use of certain idiomatic techniques.
-However, each Temporal SDK provides a set of APIs that can be used inside your Workflow to interact with external (to the Workflow) application code. +Workflow logic is constrained by [deterministic execution requirements](/workflow-definition#deterministic-constraints). Each Temporal SDK provides a set of APIs that can be used inside your Workflow to interact with external (to the Workflow) application code. When defining Workflows using the Temporal Java SDK, the Workflow code must be written to execute effectively once and to completion. The following constraints apply when writing Workflow Definitions: -- Do not use mutable global variables in your Workflow implementations. - This will ensure that multiple Workflow instances are fully isolated. -- Your Workflow code must be deterministic. - Do not call non-deterministic functions (such as non-seeded random or `UUID.randomUUID()`) directly from the Workflow code. - The Temporal SDK provides specific API for calling non-deterministic code in your Workflows. -- Do not use programming language constructs that rely on system time. - For example, only use `Workflow.currentTimeMillis()` to get the current time inside a Workflow. -- Do not use native Java `Thread` or any other multi-threaded classes like `ThreadPoolExecutor`. - Use `Async.function` or `Async.procedure`, provided by the Temporal SDK, to execute code asynchronously. -- Do not use synchronization, locks, or other standard Java blocking concurrency-related classes besides those provided by the Workflow class. - There is no need for explicit synchronization because multi-threaded code inside a Workflow is executed one thread at a time and under a global lock. +- Do not use mutable global variables in your Workflow implementations. This will ensure that multiple Workflow instances are fully isolated. +- Workflow code must be deterministic. 
If you need to call non-deterministic functions (such as non-seeded random or `UUID.randomUUID()`) directly from the Workflow code, the Temporal SDK provides specific APIs for calling non-deterministic code in your Workflows. + - For operations like calling external APIs, invoking LLMs, querying databases, or performing I/O, use Activities. Activities run outside Workflow replay and are retried reliably. +- Use Temporal-provided functions instead of language constructs that rely on system time. For example, use only `Workflow.currentTimeMillis()` to get the current time inside a Workflow. +- Use `Async.function` or `Async.procedure`, provided by the Temporal SDK, to execute code asynchronously instead of native Java `Thread` or any other multi-threaded classes like `ThreadPoolExecutor`. +- Use only the concurrency features provided by the Workflow class. Multi-threaded code inside a Workflow is executed one thread at a time and under a global lock, so there is no need for explicit synchronization. - Call `Workflow.sleep` instead of `Thread.sleep`. - Use `Promise` and `CompletablePromise` instead of `Future` and `CompletableFuture`. - Use `WorkflowQueue` instead of `BlockingQueue`. -- Use `Workflow.getVersion` when making any changes to the Workflow code. - Without this, any deployment of updated Workflow code might break already running Workflows. -- Do not access configuration APIs directly from a Workflow because changes in the configuration might affect a Workflow Execution path. - Pass it as an argument to a Workflow function or use an Activity to load it. -- Use `DynamicWorkflow` when you need a default Workflow that can handle all Workflow Types that are not registered with a Worker. - A single implementation can implement a Workflow Type which by definition is dynamically loaded from some external source. - All standard `WorkflowOptions` and determinism rules apply to Dynamic Workflow implementations.
+- Use `Workflow.getVersion` when making any changes to the Workflow code to avoid deployment of updated Workflow code interfering with already-running Workflows. +- Do not access configuration APIs directly from a Workflow because changes in the configuration might affect a Workflow Execution path. Instead, pass it as an argument to a Workflow function or use an Activity to load it. +- Use `DynamicWorkflow` when you need a default Workflow that can handle all Workflow Types that are not registered with a Worker. A single implementation can implement a Workflow Type which by definition is dynamically loaded from some external source. All standard `WorkflowOptions` and determinism rules apply to Dynamic Workflow implementations. Java Workflow reference: [https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/workflow/package-summary.html](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/workflow/package-summary.html) diff --git a/docs/develop/java/workflows/versioning.mdx b/docs/develop/java/workflows/versioning.mdx index 3832d6a461..f20b717363 100644 --- a/docs/develop/java/workflows/versioning.mdx +++ b/docs/develop/java/workflows/versioning.mdx @@ -21,8 +21,8 @@ tags: Since Workflow Executions in Temporal can run for long periods — sometimes months or even years — it's common to need to make changes to a Workflow Definition, even while a particular Workflow Execution is in progress. -The Temporal Platform requires that Workflow code is [deterministic](/workflow-definition#deterministic-constraints). -If you make a change to your Workflow code that would cause non-deterministic behavior on Replay, you'll need to use one of our Versioning methods to gracefully update your running Workflows. +The Temporal Platform requires that Workflow code is [deterministic](/workflow-definition#deterministic-constraints). 
If you make a change to your Workflow code that would cause non-deterministic behavior on Replay, you'll need to use one of our Versioning methods to gracefully update your running Workflows. This only applies to Workflow orchestration logic. Non-deterministic work such as API calls and database queries should be placed in Activities, which Temporal retries reliably. + + With Versioning, you can modify your Workflow Definition so that new executions use the updated code, while existing ones continue running the original version. There are two primary Versioning methods that you can use: diff --git a/docs/develop/php/workflows/basics.mdx b/docs/develop/php/workflows/basics.mdx index 6477252172..f11c171b2d 100644 --- a/docs/develop/php/workflows/basics.mdx +++ b/docs/develop/php/workflows/basics.mdx @@ -96,9 +96,7 @@ interface YourWorkflowDefinitionInterface ### How to develop Workflow logic {#workflow-logic-requirements} -Workflow logic is constrained by [deterministic execution requirements](/workflow-definition#deterministic-constraints). -Therefore, each language is limited to the use of certain idiomatic techniques. -However, each Temporal SDK provides a set of APIs that can be used inside your Workflow to interact with external (to the Workflow) application code. +Workflow logic is constrained by [deterministic execution requirements](/workflow-definition#deterministic-constraints). Each Temporal SDK provides a set of APIs that can be used inside your Workflow to interact with application code outside the Workflow. \*\*Temporal uses the [Microsoft Azure Event Sourcing pattern](https://docs.microsoft.com/en-us/azure/architecture/patterns/event-sourcing) to recover the state of a Workflow object including its local variable values.
diff --git a/docs/develop/php/workflows/versioning.mdx b/docs/develop/php/workflows/versioning.mdx index 85bbcde310..965542e30d 100644 --- a/docs/develop/php/workflows/versioning.mdx +++ b/docs/develop/php/workflows/versioning.mdx @@ -29,8 +29,8 @@ tags: Since Workflow Executions in Temporal can run for long periods — sometimes months or even years — it's common to need to make changes to a Workflow Definition, even while a particular Workflow Execution is in progress. -The Temporal Platform requires that Workflow code is [deterministic](/workflow-definition#deterministic-constraints). -If you make a change to your Workflow code that would cause non-deterministic behavior on Replay, you'll need to use one of our Versioning methods to gracefully update your running Workflows. +The Temporal Platform requires that Workflow code is [deterministic](/workflow-definition#deterministic-constraints). If you make a change to your Workflow code that would cause non-deterministic behavior on Replay, you'll need to use one of our Versioning methods to gracefully update your running Workflows. This only applies to Workflow orchestration logic. Non-deterministic work such as API calls and database queries should be placed in Activities, which Temporal retries reliably. + With Versioning, you can modify your Workflow Definition so that new executions use the updated code, while existing ones continue running the original version.
There are two primary Versioning methods that you can use: diff --git a/docs/develop/python/integrations/index.mdx b/docs/develop/python/integrations/index.mdx index 75e04eae88..93272662fe 100644 --- a/docs/develop/python/integrations/index.mdx +++ b/docs/develop/python/integrations/index.mdx @@ -1,7 +1,7 @@ --- id: index -title: AI integrations -sidebar_label: AI integrations +title: Integrations +sidebar_label: Integrations toc_max_heading_level: 2 keywords: - integrations @@ -18,13 +18,15 @@ tags: description: Integrations with other tools and services. --- -The following AI framework integrations are available for the Temporal Python SDK: +The following integrations are available between the Temporal Python SDK and third-party AI frameworks and tooling: -| Framework | SDK docs | Integration guide | -| --- | --- | --- | -| Braintrust | [braintrust.dev](https://braintrust.dev/docs) | [Guide](./braintrust.mdx) | -| Google ADK | [adk.dev](https://adk.dev/) | [Guide](https://adk.dev/integrations/temporal/) | -| OpenAI Agents SDK | [openai.github.io](https://openai.github.io/openai-agents-python/) | [Guide](https://github.com/temporalio/sdk-python/blob/main/temporalio/contrib/openai_agents/README.md) | -| Pydantic AI | [ai.pydantic.dev](https://ai.pydantic.dev/) | [Guide](https://ai.pydantic.dev/durable_execution/temporal/) | +| Framework | Tags | SDK docs | Integration guide | +| ----------------- | --------------- | ------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------ | +| Braintrust | Observability | [braintrust.dev](https://braintrust.dev/docs) | [Guide](./braintrust.mdx) | +| Google ADK | Agent framework | [adk.dev](https://adk.dev/) | [Guide](https://adk.dev/integrations/temporal/) | +| OpenAI Agents SDK | Agent framework | [openai.github.io](https://openai.github.io/openai-agents-python/) | 
[Guide](https://github.com/temporalio/sdk-python/blob/main/temporalio/contrib/openai_agents/README.md) | +| Pydantic AI | Agent framework | [ai.pydantic.dev](https://ai.pydantic.dev/) | [Guide](https://ai.pydantic.dev/durable_execution/temporal/) | +| Tenuo | Governance | [tenuo.ai](https://tenuo.ai/docs) | [Guide](https://tenuo.ai/temporal) | -These integrations are built on the Temporal Python SDK's [Plugin system](/develop/plugins-guide), which you can also use to build your own integrations. +These integrations are built on the Temporal Python SDK's [Plugin system](/develop/plugins-guide), which you can also +use to build your own integrations. diff --git a/docs/develop/python/nexus/feature-guide.mdx b/docs/develop/python/nexus/feature-guide.mdx index afad0f58d9..a1ef798861 100644 --- a/docs/develop/python/nexus/feature-guide.mdx +++ b/docs/develop/python/nexus/feature-guide.mdx @@ -162,26 +162,35 @@ A common pattern is to use the Temporal Client from within a sync handler to Sig You can also use Signal-With-Start or Update-With-Start to ensure the Workflow is started and send it a Signal or Update. All calls must complete within the [Nexus request timeout](/cloud/limits#nexus-operation-request-timeout). Updates should be short-lived to stay within this deadline. -Use `nexus.client()` to get the Client that the Worker was initialized with. -The [nexus_sync_operations](https://github.com/temporalio/samples-python/blob/main/nexus_sync_operations) sample shows how to create a Nexus Service that uses synchronous operations to send Updates and Queries: +The [nexus_messaging](https://github.com/temporalio/samples-python/tree/main/nexus_messaging) sample shows how to create a Nexus Service that uses synchronous operations to send Updates and Queries. 
-[nexus_sync_operations/handler/service_handler.py](https://github.com/temporalio/samples-python/blob/main/nexus_sync_operations/handler/service_handler.py) +Use `nexus.client()` to get the Client that the Worker was initialized with. In this example, the Workflow Id is derived from the client Id using the `get_workflow_id` method, which takes a client-supplied identifier (here, a user Id) and generates a Workflow Id from it. +This way the client only needs the identifier it cares about. + +[nexus_messaging/callerpattern/handler/service_handler.py](https://github.com/temporalio/samples-python/blob/main/nexus_messaging/callerpattern/handler/service_handler.py) ```python from temporalio import nexus -@nexusrpc.handler.service_handler(service=GreetingService) -class GreetingServiceHandler: +def get_workflow_id(user_id: str) -> str: + return f"{WORKFLOW_ID_PREFIX}{user_id}" + +@nexusrpc.handler.service_handler(service=NexusGreetingService) +class NexusGreetingServiceHandler: - @property - def greeting_workflow_handle(self) -> WorkflowHandle[GreetingWorkflow, str]: + def _get_workflow_handle( + self, user_id: str + ) -> WorkflowHandle[GreetingWorkflow, str]: return nexus.client().get_workflow_handle_for( - GreetingWorkflow.run, self.workflow_id + GreetingWorkflow.run, get_workflow_id(user_id) ) ... ``` +There are two examples of messaging through Nexus in the sample code, [caller pattern](https://github.com/temporalio/samples-python/blob/main/nexus_messaging/callerpattern/) and [on-demand pattern](https://github.com/temporalio/samples-python/blob/main/nexus_messaging/ondemandpattern/). +The caller pattern shows how to send messages to an existing Workflow, while the on-demand pattern shows how to start a Workflow through Nexus and then send Signals to it. + In addition to `nexus.client()`, you can use `nexus.info()` to access information about the currently-executing Nexus Operation including its Task Queue.
diff --git a/docs/develop/python/workflows/basics.mdx b/docs/develop/python/workflows/basics.mdx index af0b9ee2d8..d0f1eeefc5 100644 --- a/docs/develop/python/workflows/basics.mdx +++ b/docs/develop/python/workflows/basics.mdx @@ -163,7 +163,7 @@ class YourWorkflow: **How to develop Workflow logic using the Temporal Python SDK.** Workflow logic is constrained by [deterministic execution requirements](/workflow-definition#deterministic-constraints). -Therefore, each language is limited to the use of certain idiomatic techniques. However, each Temporal SDK provides a +Each Temporal SDK provides a set of APIs that can be used inside your Workflow to interact with external (to the Workflow) application code. Workflow code must be deterministic because the Temporal Server may [replay](/develop/python/best-practices/testing-suite#replay) your Workflow to reconstruct its state. This means: diff --git a/docs/develop/python/workflows/versioning.mdx b/docs/develop/python/workflows/versioning.mdx index a35cb8e0f0..44fa9e4974 100644 --- a/docs/develop/python/workflows/versioning.mdx +++ b/docs/develop/python/workflows/versioning.mdx @@ -32,8 +32,8 @@ import { CaptionedImage } from '@site/src/components'; Since Workflow Executions in Temporal can run for long periods — sometimes months or even years — it's common to need to make changes to a Workflow Definition, even while a particular Workflow Execution is in progress. -The Temporal Platform requires that Workflow code is [deterministic](/workflow-definition#deterministic-constraints). -If you make a change to your Workflow code that would cause non-deterministic behavior on Replay, you'll need to use one of our Versioning methods to gracefully update your running Workflows. +The Temporal Platform requires that Workflow code is [deterministic](/workflow-definition#deterministic-constraints). 
If you make a change to your Workflow code that would cause non-deterministic behavior on Replay, you'll need to use one of our Versioning methods to gracefully update your running Workflows. This only applies to Workflow orchestration logic. Non-deterministic work such as API calls and database queries should be placed in Activities, which Temporal retries reliably. + + With Versioning, you can modify your Workflow Definition so that new executions use the updated code, while existing ones continue running the original version. There are two primary Versioning methods that you can use: diff --git a/docs/develop/ruby/workflows/versioning.mdx b/docs/develop/ruby/workflows/versioning.mdx index daead38d1d..ee96830709 100644 --- a/docs/develop/ruby/workflows/versioning.mdx +++ b/docs/develop/ruby/workflows/versioning.mdx @@ -26,8 +26,8 @@ tags: Since Workflow Executions in Temporal can run for long periods — sometimes months or even years — it's common to need to make changes to a Workflow Definition, even while a particular Workflow Execution is in progress. -The Temporal Platform requires that Workflow code is [deterministic](/workflow-definition#deterministic-constraints). -If you make a change to your Workflow code that would cause non-deterministic behavior on Replay, you'll need to use one of our Versioning methods to gracefully update your running Workflows. +The Temporal Platform requires that Workflow code is [deterministic](/workflow-definition#deterministic-constraints). If you make a change to your Workflow code that would cause non-deterministic behavior on Replay, you'll need to use one of our Versioning methods to gracefully update your running Workflows. This only applies to Workflow orchestration logic. Non-deterministic work such as API calls and database queries should be placed in Activities, which Temporal retries reliably.
+ With Versioning, you can modify your Workflow Definition so that new executions use the updated code, while existing ones continue running the original version. There are two primary Versioning methods that you can use: diff --git a/docs/develop/rust/workflows/basics.mdx b/docs/develop/rust/workflows/basics.mdx index a6d8091138..c1357ef412 100644 --- a/docs/develop/rust/workflows/basics.mdx +++ b/docs/develop/rust/workflows/basics.mdx @@ -176,7 +176,7 @@ pub struct GreetingWorkflow { ## Workflow logic requirements {#workflow-logic-requirements} -Workflow logic is constrained by [deterministic execution requirements](/workflow-definition#deterministic-constraints). For non-deterministic operations like API calls, LLM invocations, and database queries, use [Activities](/develop/rust/activities/basics). +Workflow logic is constrained by [deterministic execution requirements](/workflow-definition#deterministic-constraints). For non-deterministic operations like API calls and database queries, use [Activities](/develop/rust/activities/basics). Workflow code must be deterministic because the Temporal Server may replay your Workflow to reconstruct its state. This means: diff --git a/docs/develop/task-queue-priority-fairness.mdx b/docs/develop/task-queue-priority-fairness.mdx index 15adeadc40..c84d4d835b 100644 --- a/docs/develop/task-queue-priority-fairness.mdx +++ b/docs/develop/task-queue-priority-fairness.mdx @@ -29,7 +29,7 @@ You can use Priority and Fairness individually or combine them to express Fairne ### When to use Priority @@ -250,7 +250,7 @@ To use Fairness, you need to set fairness keys and optionally fairness weights a You can set a Workflow's fairness key and weight via the CLI like so: @@ -521,13 +521,18 @@ When you use Priority and Fairness together, Priority determines which "sub-queu ### Inheritance Each field of Priority (`priority_key`, `fairness_key`, `fairness_weight`) is resolved independently.
+ + **Activity inheritance order** (highest precedence first): 1. [Fairness weight overrides](#fairness-weight-overrides) on the Task Queue (`fairness_weight` only) diff --git a/docs/develop/typescript/best-practices/debugging.mdx b/docs/develop/typescript/best-practices/debugging.mdx index 03a435a22c..3570cdef3c 100644 --- a/docs/develop/typescript/best-practices/debugging.mdx +++ b/docs/develop/typescript/best-practices/debugging.mdx @@ -51,16 +51,12 @@ If something isn't behaving the way you expect, make sure to check both location If you are developing Workflows and finding that code isn't executing as expected, the first place to look is whether old Workflows are still running. -If those old Workflows have the same name and are on the same task queue, Temporal will try to continue executing them on your new code by design. -You may get errors that make no sense to you because +If those old Workflows have the same name and are on the same Task Queue, Temporal will try to continue executing them on your new code by design. You may get errors that make no sense to you because: -- Temporal is trying to execute old Workflow code that no longer exists in your codebase, or -- your new Client code is expecting Temporal to execute old Workflow/Activity code it doesn't yet know about. +- Temporal is trying to execute old Workflow code that no longer exists in your codebase. +- Your new Client code is expecting Temporal to execute old Workflow/Activity code it doesn't yet know about. -The biggest sign that this is happening is if you notice Temporal is acting non-deterministically: running the same Workflow twice gets different results. - -Stale workflows are usually a non-issue because the errors generated are just noise from code you no longer want to run. -If you need to terminate old stale Workflows, you can do so with Temporal Web or the Temporal CLI. 
+Stale Workflows are usually a non-issue because the errors generated are just noise from code you no longer want to run. If you need to terminate old stale Workflows, you can do so with Temporal Web or the Temporal CLI. ### Workflow/Activity registration errors diff --git a/docs/develop/typescript/integrations/index.mdx b/docs/develop/typescript/integrations/index.mdx index 1b171619e5..371291145a 100644 --- a/docs/develop/typescript/integrations/index.mdx +++ b/docs/develop/typescript/integrations/index.mdx @@ -1,7 +1,7 @@ --- id: index -title: AI integrations -sidebar_label: AI integrations +title: Integrations +sidebar_label: Integrations toc_max_heading_level: 2 keywords: - integrations @@ -14,9 +14,10 @@ tags: description: Integrations with other tools and services. --- -The following AI framework integrations are available for the Temporal TypeScript SDK: +The following AI framework and tooling integrations are available for the Temporal TypeScript SDK: -| Framework | SDK docs | Integration guide | -| ---------------- | -------------------------------------------------- | ------------------------------------------------------------------------------------------ | -| AI SDK by Vercel | [ai-sdk.dev](https://ai-sdk.dev/docs/introduction) | [Guide](/develop/typescript/integrations/ai-sdk) | -| Braintrust | [braintrust.dev](https://braintrust.dev/docs) | [Guide](https://www.braintrust.dev/docs/integrations/sdk-integrations/temporal#typescript) | +| Framework | Tags | SDK docs | Integration guide | +| ---------------- | --------------- | -------------------------------------------------- | ------------------------------------------------------------------------------------------ | +| AI SDK by Vercel | Agent framework | [ai-sdk.dev](https://ai-sdk.dev/docs/introduction) | [Guide](/develop/typescript/integrations/ai-sdk) | +| Braintrust | Observability | [braintrust.dev](https://braintrust.dev/docs) | 
[Guide](https://www.braintrust.dev/docs/integrations/sdk-integrations/temporal#typescript) | +| Mastra | Agent framework | [mastra.ai](https://mastra.ai/docs) | [Guide](https://mastra.ai/guides/deployment/temporal) | diff --git a/docs/develop/typescript/nexus/feature-guide.mdx b/docs/develop/typescript/nexus/feature-guide.mdx index 96824daf9d..1a22790a98 100644 --- a/docs/develop/typescript/nexus/feature-guide.mdx +++ b/docs/develop/typescript/nexus/feature-guide.mdx @@ -198,6 +198,33 @@ The handler context also exposes `ctx.requestDeadline` as an optional `Date`, re Note that this is the deadline for the current _request_, not the overall operation. Use it to make decisions about whether to start work that may not finish in time, or to set timeouts on downstream calls. +The [nexus_messaging](https://github.com/temporalio/samples-typescript/tree/main/nexus-messaging) sample shows how to create a Nexus Service that uses synchronous operations to send Updates and Queries. + +Use the Nexus library, as shown below, to get the Client that the Worker was initialized with. In this example, the Workflow Id is derived from the client Id using the `workflowIdForUser` function, which converts a client-supplied identifier (here, a user Id) into a Workflow Id. +This way the client only needs the identifier it cares about.
+ +[nexus-messaging/src/callerpattern/service/handler.ts](https://github.com/temporalio/samples-typescript/blob/main/nexus-messaging/src/callerpattern/service/handler.ts) + +```ts +import * as temporalNexus from '@temporalio/nexus'; + +function workflowIdForUser(userId: string): string { + return `GreetingWorkflow_for_${userId}`; +} + +export const nexusGreetingServiceHandler = nexus.serviceHandler(nexusGreetingService, { + getLanguages: async (ctx, input: GetLanguagesInput) => { + const client = temporalNexus.getClient(); + const handle = client.workflow.getHandle(workflowIdForUser(input.userId)); + return await handle.query(getLanguagesQuery); + }, + + ... +``` + +There are two examples of messaging through Nexus in the sample code, [caller pattern](https://github.com/temporalio/samples-typescript/tree/main/nexus-messaging/src/callerpattern) and [on-demand pattern](https://github.com/temporalio/samples-typescript/tree/main/nexus-messaging/src/ondemandpattern). +The caller pattern shows how to send messages to an existing Workflow, while the on-demand pattern shows how to start a Workflow through Nexus and then send Signals to it. + ### Develop an Asynchronous Nexus Operation handler to start a Workflow Use `@temporalio/nexus`'s `WorkflowRunOperationHandler` helper class to easily expose a Temporal Workflow as a Nexus Operation. diff --git a/docs/develop/typescript/workflows/basics.mdx b/docs/develop/typescript/workflows/basics.mdx index ac96b40b4a..7e2cd0354e 100644 --- a/docs/develop/typescript/workflows/basics.mdx +++ b/docs/develop/typescript/workflows/basics.mdx @@ -115,7 +115,7 @@ export async function helloWorld(): Promise { ## How to develop Workflow logic {#workflow-logic-requirements} Workflow logic is constrained by [deterministic execution requirements](/workflow-definition#deterministic-constraints). -Therefore, each language is limited to the use of certain idiomatic techniques. 
However, each Temporal SDK provides a +Each Temporal SDK provides a set of APIs that can be used inside your Workflow to interact with external (to the Workflow) application code. In the Temporal TypeScript SDK, Workflows run in a deterministic sandboxed environment. The code is bundled on Worker diff --git a/docs/encyclopedia/activities/standalone-activity.mdx b/docs/encyclopedia/activities/standalone-activity.mdx index 50923c2af4..64d0dac12d 100644 --- a/docs/encyclopedia/activities/standalone-activity.mdx +++ b/docs/encyclopedia/activities/standalone-activity.mdx @@ -63,6 +63,7 @@ Pick your SDK and follow the quickstart: - [Go SDK - Standalone Activities quickstart and code sample](/develop/go/activities/standalone-activities) - [Python SDK - Standalone Activities quickstart and code sample](/develop/python/activities/standalone-activities) - [.NET SDK - Standalone Activities quickstart and code sample](/develop/dotnet/activities/standalone-activities) +- [Java SDK - Standalone Activities quickstart and code sample](/develop/java/activities/standalone-activities) ::: diff --git a/docs/encyclopedia/data-conversion/codec-server.mdx b/docs/encyclopedia/data-conversion/codec-server.mdx index 7aa1b41b06..0ee473946c 100644 --- a/docs/encyclopedia/data-conversion/codec-server.mdx +++ b/docs/encyclopedia/data-conversion/codec-server.mdx @@ -2,19 +2,17 @@ id: codec-server title: Codec Server sidebar_label: Codec Server -description: A Codec Server is an HTTP server that provides remote encoding and decoding for Temporal Payloads. +description: + A Codec Server is an HTTP server that provides remote encoding and decoding for Temporal Payloads, enabling the Web UI + and CLI to display decoded data without exposing encryption keys to the Temporal Service. 
slug: /codec-server toc_max_heading_level: 4 keywords: - encryption - - explanation - - keys + - codec-server - payloads - - secrets - data-converters - - codec-server tags: - - codec-server - Concepts - Encryption - Data Converters @@ -23,43 +21,182 @@ tags: import { CaptionedImage } from '@site/src/components'; -This page discusses [Codec Server](#codec-server). +A Codec Server is an HTTP/HTTPS server that you host and operate. It runs your [Payload Codec](/payload-codec) logic to +encode and decode [Payloads](/dataconversion#payload) on behalf of the Temporal CLI and Web UI. The Codec Server is +independent of the Temporal Service. Encryption keys and codec logic remain in your environment. + +For setup instructions, see [Codec Server setup](/production-deployment/data-encryption#codec-server-setup). + +## Why use a Codec Server + +When you apply a custom [Payload Codec](/payload-codec) for encryption or compression, data stored in the Temporal +Service is encoded. The Temporal Service never has access to your encryption keys, so it cannot decode this data. +Without a Codec Server, the Web UI and CLI display raw encoded payloads. -## What is a Codec Server? {#codec-server} +A Codec Server solves this by giving the Web UI and CLI a way to decode payloads on demand, without exposing keys to the +Temporal Service. Common reasons to run a Codec Server include: -A Codec Server is an HTTP/HTTPS server that uses a [custom Payload Codec](/production-deployment/data-encryption) to decode your data remotely through endpoints. +- **Debugging Workflows.** View decoded Workflow inputs, outputs, and Event History in the Web UI instead of reading + base64-encoded or encrypted blobs. +- **Operating from the CLI.** Use commands like `temporal workflow show` and `temporal workflow execute` with readable + data, even when payloads are encrypted at rest. 
+- **Encoding inputs from the UI and CLI.** When you start or signal a Workflow from the Web UI or CLI, the Codec Server + can encode the input before it reaches the Temporal Service, so the Temporal Service never sees plaintext (the input + still travels from your browser or CLI to the Codec Server, which is why HTTPS matters in any non-loopback + deployment). +- **Compliance and access control.** Because the Codec Server runs in your environment, you control who can decode + payloads and under what conditions. You can layer authorization on top of the decode endpoint to restrict access per + user or per Namespace. -{/* This should not have changed with tctl-to-temporal */} +## How a Codec Server works + +A Codec Server follows the Temporal +[Codec Server Protocol](https://github.com/temporalio/samples-go/tree/main/codec-server#codec-server-protocol). It +exposes two HTTP POST endpoints: + +- **`/encode`** accepts plaintext payloads and returns encoded payloads. Used for sending payloads. +- **`/decode`** accepts encoded payloads and returns decoded payloads. Used for retrieving payloads. + +Both endpoints receive and respond with a JSON body containing a `payloads` array of [Payload](/dataconversion#payload) +objects. The Codec Server passes each payload through your [Payload Codec](/payload-codec), which applies the same +encoding or decoding logic that your Workers use. -A Codec Server follows the Temporal [Codec Server Protocol](https://github.com/temporalio/samples-go/tree/main/codec-server#codec-server-protocol). -It implements two endpoints: +When the Web UI or CLI needs to display decoded data, it sends the encoded payloads to your Codec Server's `/decode` +endpoint. The Codec Server decodes the payloads and returns them to the client. The Temporal Service never sees the +decoded data. -- `/encode` -- `/decode` +The `/encode` endpoint works in the other direction. 
When you start a Workflow or send a Signal from the Web UI or CLI,
+the input is sent to the Codec Server's `/encode` endpoint first, so data reaches the Temporal Service in its encoded
+form.

-Each endpoint receives and responds with a JSON body that has a `payloads` property with an array of [Payloads](/dataconversion#payload).
-The endpoints run the Payloads through a [Payload Codec](/payload-codec) before returning them.
+Your Codec Server should use the same Payload Codec implementation as your Workers to ensure consistent encoding and
+decoding.

-Most SDKs provide example Codec Server implementation samples, listed here:
+## Codec Server with External Storage {#external-storage}

-- [Go](https://github.com/temporalio/samples-go/tree/main/codec-server)
-- [Java](https://github.com/temporalio/sdk-java/tree/master/temporal-remote-data-encoder)
-- [.NET](https://github.com/temporalio/samples-dotnet/tree/main/src/Encryption)
-- [Python](https://github.com/temporalio/samples-python/blob/main/encryption/codec_server.py)
-- [TypeScript](https://github.com/temporalio/samples-typescript/blob/main/encryption/src/codec-server.ts)
+When your Workers and Clients use [External Storage](/external-storage), your storage drivers replace some payloads in
+the Event History with small references that point to data in an external store like Amazon S3. The Temporal Service and
+the Web UI only see these references, not the actual payload data. Setups that run codecs in a proxy, encoding payloads
+after the Data Converter has returned on the Worker, complicate this further. Your Codec Server must handle downloading
+and decoding in the correct order so that you can view the Workflow data in the UI or CLI.
+
+To support External Storage, create a handler using `NewPayloadHTTPHandler` with `PayloadHTTPHandlerOptions`.
The options +accept your storage drivers, your pre-storage codecs (the Payload Codecs configured in your Worker's Data Converter), +and any post-storage codecs (codecs applied by a proxy after external storage). The handler applies them in the correct +order across all endpoints automatically. When you configure the handler with storage drivers, the existing endpoints +become storage-aware and a new `/download` endpoint becomes available: + +:::caution + +`NewPayloadHTTPHandler` runs the full encode-store-encode and decode-retrieve-decode pipeline. Do not use it as a target +for a remote Data Converter or remote codec on your Workers. For remote codecs, use `NewPayloadCodecHTTPHandler` +separately. If you need both, set up `NewPayloadHTTPHandler` for the Web UI and CLI alongside +`NewPayloadCodecHTTPHandler` for your Workers, and configure both with the same codecs. + +::: + +- **`/download`** retrieves the actual payload data from external storage and decodes it through the Payload Codec. This + endpoint is used internally by `/decode` when it encounters storage references, but you can also call it directly from the Web UI + to retrieve the decoded payload. The Temporal Web UI uses this endpoint when you click to view the full payload for a + storage reference. +- **`/decode`** still decodes encoded payloads, but also handles storage references. By default, `/decode` uses the + download logic internally to retrieve and decode any storage references in the request alongside regular payloads. + With the `?preserveStorageRefs=true` query parameter, `/decode` skips retrieval and returns storage references as-is. +- **`/encode`** applies the Payload Codec, then uploads payloads that exceed the size threshold to external storage and + replaces them with reference tokens. 
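The claim-check behavior of the `/encode` endpoint can be sketched as follows. This is a minimal, illustrative sketch, not the `NewPayloadHTTPHandler` implementation: a dict stands in for an external store like S3, base64 stands in for a real Payload Codec, and the function names, metadata values, and size threshold are all invented for the example.

```python
import base64
import uuid

# Hypothetical stand-ins: a dict acts as the external store (e.g. S3),
# and base64 acts as the Payload Codec. The threshold is arbitrary.
EXTERNAL_STORE: dict[str, bytes] = {}
SIZE_THRESHOLD = 64  # bytes

def encode_payload(data: bytes) -> dict:
    """Codec step, then claim-check step: oversized encoded payloads are
    uploaded to the store and replaced with a small storage reference."""
    encoded = base64.b64encode(data)  # stand-in for encryption/compression
    if len(encoded) <= SIZE_THRESHOLD:
        return {"metadata": {"encoding": "binary/base64"}, "data": encoded}
    key = str(uuid.uuid4())
    EXTERNAL_STORE[key] = encoded  # upload the already-encoded bytes
    return {"metadata": {"encoding": "storage/reference"}, "data": key.encode()}

def download_and_decode(payload: dict) -> bytes:
    """The /download path: retrieve from the store first, then run the
    codec's decode step, in that order."""
    if payload["metadata"]["encoding"] == "storage/reference":
        encoded = EXTERNAL_STORE[payload["data"].decode()]
    else:
        encoded = payload["data"]
    return base64.b64decode(encoded)

small = encode_payload(b"hi")
large = encode_payload(b"x" * 500)
assert small["metadata"]["encoding"] == "binary/base64"
assert large["metadata"]["encoding"] == "storage/reference"
assert download_and_decode(large) == b"x" * 500
```

Note that the codec runs before the upload on the encode side, and the retrieval runs before the codec's decode on the download side; that ordering is what the real handler manages across pre- and post-storage codecs.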
+ + -#### Usage +The following example walks through how all three endpoints work together: -When you apply custom encoding with encryption or compression on your Workflow data, it is stored in the encrypted/compressed format on the Temporal Server. For details on what data is encoded, see [Securing your data](/production-deployment/data-encryption). +1. A user starts a Workflow from the CLI with a plaintext input. The CLI sends the input to the Codec Server's `/encode` + endpoint. +2. The Codec Server encodes the payload through the Payload Codec. The encoded payload exceeds the storage threshold, + so the Codec Server uploads it to external storage and returns a small reference token. +3. The CLI sends the reference token to the Temporal Service, which stores it in the Event History. +4. Later, a user views the Workflow in the Web UI. The Web UI retrieves the Event History from the Temporal Service and + sends the payloads to the Codec Server's `/decode` endpoint with the `?preserveStorageRefs=true` query parameter. +5. The Codec Server decodes any non-reference payloads through the Payload Codec, but returns storage references as-is. + The Web UI displays the reference metadata, indicating the payload is stored externally. +6. The user clicks to view the full payload. The Web UI sends the storage reference to the `/download` endpoint. +7. The Codec Server retrieves the encoded payload from external storage, decodes it through the Payload Codec, and + returns the plaintext result to the Web UI. -To see decoded data when using the Temporal CLI or Web UI to perform some operations on a Workflow Execution, configure the Codec Server endpoint in the Web UI and the Temporal CLI. -When you configure the Codec Server endpoints, the Temporal CLI and Web UI send the encoded data to the Codec Server, and display the decoded data received from the Codec Server. +## Codec Server vs. 
Payload Codec -For details on creating your Codec Server, see [Codec Server Setup](/production-deployment/data-encryption#codec-server-setup). +A Codec Server runs a [Payload Codec](/payload-codec) internally, so the two are directly connected. The difference is +where the codec logic runs and who calls it. + +| | Payload Codec | Codec Server | +| --------------------------------- | --------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------- | +| **Purpose** | Encodes and decodes Payloads. Applies encryption, compression, or other byte-level transformations. | Hosts a Payload Codec as an HTTP service so the Web UI and CLI can encode and decode Payloads remotely. | +| **Runs where** | In-process, inside your Workers and Clients. Also runs inside the Codec Server. | As a standalone HTTP service in your environment, with a Payload Codec inside it. | +| **Called by** | The Temporal SDK, automatically on every serialization and deserialization. | The Web UI and CLI, over HTTP, when a user views or submits Payload data. | +| **Has access to encryption keys** | Yes. Keys are available in the Worker or Client process. | Yes. Must be configured with the same keys the Payload Codec uses. | + +You implement the transformation logic once in a Payload Codec, then host that logic in a Codec Server so the Web UI and +CLI can use it remotely. + +## Securing a Codec Server + +Because a Codec Server can decode sensitive data, treat it with the same trust as a Worker. Anyone who can call it has +effective decrypt access. Use HTTPS for any deployment that is not strictly loopback (`localhost`). + +### Network-level restrictions + +Restrict network access to the Codec Server. The Web UI can communicate with a Codec Server that is only accessible on +`localhost`, so running the Codec Server locally is a viable security pattern. 
For team access, place the Codec Server +behind a VPN. + +### Authentication + +When the Codec Server is accessible beyond `localhost`, authenticate requests to verify the identity of the caller. The +Web UI supports two approaches: + +**Include cross-origin credentials (recommended).** Enable **Include cross-origin credentials** in the Web UI Codec +Server settings. The browser sends cookies scoped to the Codec Server's domain with each request. Your Codec Server must +have its own authentication mechanism (its own login page and session cookies), so the user must have independently +authenticated with the Codec Server. This is the recommended approach because the Codec Server maintains its own auth +boundary, separate from the Temporal UI. + +**Pass access token.** Enable **Pass access token** in the Web UI Codec Server settings. The Web UI includes the same +JSON Web Token (JWT) the user used to log into the Temporal UI in the `Authorization` header of each request. Your Codec +Server validates the token signature against the OpenID Connect (OIDC) provider's JSON Web Key Set (JWKS) endpoint. On +Temporal Cloud, verify against the +[Temporal Cloud JWKS endpoint](https://login.tmprl.cloud/.well-known/jwks.json). On a self-hosted Temporal Service, the +token comes from whatever auth provider you have [configured for the Web UI](/references/web-ui-configuration#auth). +This approach requires less setup but reuses the same token across the Temporal UI and the Codec Server. + +### Namespace-level authorization + +Authentication identifies the caller, but does not confirm the caller is authorized to decode payloads for a specific +Namespace. Each request from the Web UI includes an `X-Namespace` header identifying the Namespace. To enforce +Namespace-level access control, your Codec Server must enforce an additional check on whether the authenticated user has +permissions for the requested Namespace. This applies regardless of which authentication approach you use. 
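A Namespace check on the decode path can be sketched as follows. The `X-Namespace` header is the one described above; the allowlist, the user identities, and the function name are hypothetical, standing in for whatever authorization source you actually use (an IdP, an authorization service, or static configuration).

```python
# Hypothetical allowlist mapping authenticated users to the Namespaces
# they may decode payloads for.
DECODE_PERMISSIONS = {
    "alice@example.com": {"payments", "orders"},
    "bob@example.com": {"orders"},
}

def authorize_decode(user: str, headers: dict) -> bool:
    """Allow a /decode request only if the authenticated user has decode
    rights for the Namespace named in the X-Namespace header."""
    namespace = headers.get("X-Namespace")
    if namespace is None:
        return False  # reject requests that don't identify a Namespace
    return namespace in DECODE_PERMISSIONS.get(user, set())

assert authorize_decode("alice@example.com", {"X-Namespace": "payments"})
assert not authorize_decode("bob@example.com", {"X-Namespace": "payments"})
assert not authorize_decode("mallory@example.com", {"X-Namespace": "orders"})
```

Run this check after authentication succeeds, before passing any payloads to your Payload Codec's decode method.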
+ +### Key management + +You may also need [key management infrastructure](/key-management) to share encryption keys between your Workers and the +Codec Server. + +## SDK Codec Server samples + +Most Temporal SDKs provide example Codec Server implementations: + +- [Go](https://github.com/temporalio/samples-go/tree/main/codec-server) +- [Java](https://github.com/temporalio/sdk-java/tree/master/temporal-remote-data-encoder) +- [Python](https://github.com/temporalio/samples-python/blob/main/encryption/codec_server.py) +- [TypeScript](https://github.com/temporalio/samples-typescript/blob/main/encryption/src/codec-server.ts) +- [.NET](https://github.com/temporalio/samples-dotnet/blob/main/src/Encryption/CodecServer/Program.cs) diff --git a/docs/encyclopedia/retry-policies.mdx b/docs/encyclopedia/retry-policies.mdx index 5dab65b3cf..4c8b013042 100644 --- a/docs/encyclopedia/retry-policies.mdx +++ b/docs/encyclopedia/retry-policies.mdx @@ -3,8 +3,7 @@ id: retry-policies title: What is a Temporal Retry Policy? sidebar_label: Retry Policies description: - Optimize your Workflow and Activity Task Executions with a custom Retry Policy on Temporal. Understand default - retries, intervals, backoff, and maximum attempts for error handling. + Optimize your Workflow and Activity Task Executions with a custom Retry Policy on Temporal. Understand default retries, intervals, backoff, and maximum attempts for error handling. toc_max_heading_level: 4 keywords: - activities @@ -18,22 +17,17 @@ import { CaptionedImage, RelatedReadContainer, RelatedReadItem } from '@site/src import Tabs from '@theme/Tabs'; import TabItem from '@theme/TabItem'; -A Retry Policy is a collection of settings that tells Temporal how and when to try again after something fails in a -Workflow Execution or Activity Task Execution. +A Retry Policy is a collection of settings that tells Temporal how and when to try again after something fails in a Workflow Execution or Activity Task Execution. 
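The settings a Retry Policy combines (initial interval, backoff coefficient, maximum interval, maximum attempts) interact as in this sketch. The interval formula mirrors standard exponential backoff as described on this page; the function itself is illustrative, not SDK code, and the specific parameter values are examples rather than Temporal's defaults.

```python
def retry_intervals(initial: float = 1.0,
                    backoff_coefficient: float = 2.0,
                    maximum_interval: float = 100.0,
                    maximum_attempts: int = 5) -> list[float]:
    """Wait time (seconds) before each retry: the initial interval grows
    by the backoff coefficient after every failed attempt, capped at the
    maximum interval. One fewer wait than attempts (no wait after the last)."""
    intervals = []
    for attempt in range(1, maximum_attempts):  # waits between attempts
        interval = initial * backoff_coefficient ** (attempt - 1)
        intervals.append(min(interval, maximum_interval))
    return intervals

# Five attempts with a 2.0 coefficient wait 1s, 2s, 4s, then 8s between tries.
assert retry_intervals() == [1.0, 2.0, 4.0, 8.0]
# A low maximum interval takes over once the exponential curve passes it.
assert retry_intervals(maximum_interval=3.0) == [1.0, 2.0, 3.0, 3.0]
```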
## Overview -Temporal's default behavior is to automatically retry an Activity that fails, so transient or intermittent failures -require no action on your part. This behavior is defined by the Retry Policy. +Temporal's default behavior is to automatically retry an Activity that fails, so transient or intermittent failures require no action on your part. This behavior is defined by the Retry Policy. -A Retry Policy is declarative. You do not need to implement your own logic for handling the retries; you only need to -specify the desired behavior and Temporal will provide it. +A Retry Policy is declarative. You do not need to implement your own logic for handling the retries; you only need to specify the desired behavior and Temporal will provide it. -In contrast to the Activities it contains, a Workflow Execution itself is not associated with a Retry Policy by default. -This may seem counterintuitive, but Workflows and Activities perform different roles. Activities are intended for -operations that may fail, so having a default Retry Policy increases the likelihood that they will ultimately complete -successfully, even if the initial attempt failed. On the other hand, Workflows must be deterministic and are not -intended to perform failure-prone operations. While it is possible to assign a Retry Policy to a Workflow Execution, +In contrast to the Activities it contains, a Workflow Execution itself is not associated with a Retry Policy by default. This may seem counterintuitive, but Workflows and Activities perform different roles. Activities are intended for operations that may fail, so having a default Retry Policy increases the likelihood that they will ultimately complete successfully, even if the initial attempt failed. + +On the other hand, Workflow code must be deterministic to support replay, and failure-prone or non-deterministic operations (API calls, LLM invocations, etc.) should be placed in Activities, which have built-in retry support. 
While it is possible to assign a Retry Policy to a Workflow Execution, this is not the default and it is uncommon to do so. Retry Policies do not apply to Workflow Task Executions, which retry until the Workflow Execution Timeout (which is @@ -146,14 +140,9 @@ re-execute upon failure; this is not typically true of Workflows. In most use ca an issue with the design or deployment of your application; for example, a permanent failure that may require different input data. -Retrying an entire Workflow Execution is not recommended due to Temporal's deterministic design. Since Workflows replay -the same sequence of events to reach the same state, retrying the whole workflow would repeat the same logic without -resolving the underlying issue that caused the failure. This repetition does not address problems related to external -dependencies or unchanged conditions and can lead to unnecessary resource consumption and higher costs. Instead, it's -more efficient to retry only the failed Activities. This approach targets specific points of failure, allowing the -workflow to progress without redundant operations, thereby saving on resources and ensuring a more focused and effective -error recovery process. If you need to retry parts of your Workflow Definition, we recommend you implement this in your -Workflow code. +Retrying an entire Workflow Execution is not recommended due to the deterministic nature of Workflow replay. Since Workflows replay the same sequence of events to reach the same state, retrying the whole Workflow would repeat the same logic without resolving the underlying issue that caused the failure. This repetition doesn't address problems related to external dependencies or unchanged conditions and can lead to unnecessary resource consumption and higher costs. + +Instead, retry failed Activities within the Workflow, which is Temporal's default behavior. 
This approach targets specific points of failure, allowing the Workflow to progress without redundant operations, thereby saving on resources and ensuring a more focused and effective error recovery process. If you need to retry parts of your Workflow Definition, we recommend you implement this in your Workflow code.

## Custom Retry Policy
diff --git a/docs/encyclopedia/workflow/workflow-definition.mdx b/docs/encyclopedia/workflow/workflow-definition.mdx
index 672173c405..eee46ae822 100644
--- a/docs/encyclopedia/workflow/workflow-definition.mdx
+++ b/docs/encyclopedia/workflow/workflow-definition.mdx
@@ -160,6 +160,12 @@ A critical aspect of developing Workflow Definitions is ensuring that they are d
 Generally speaking, this means you must take care to ensure that any time your Workflow code is executed it makes the
 same Workflow API calls in the same sequence, given the same input. Some changes to those API calls are safe to make.
 
+:::tip Note on determinism
+
+Workflow code must be deterministic to support replay. To handle non-deterministic operations like API calls, LLM/AI invocations, database queries, and other external interactions, put them in Activities. Activities execute outside the replay path, and their recorded results are reused on replay, so they don't cause non-determinism errors.
+ +::: + For example, you can change: - The input parameters, return values, and execution timeouts of Child Workflows and Activities diff --git a/docs/encyclopedia/workflow/workflow-overview.mdx b/docs/encyclopedia/workflow/workflow-overview.mdx index a3014a2bf5..94d6f5c122 100644 --- a/docs/encyclopedia/workflow/workflow-overview.mdx +++ b/docs/encyclopedia/workflow/workflow-overview.mdx @@ -3,9 +3,7 @@ id: workflow-overview title: Temporal Workflow sidebar_label: Workflow description: - This comprehensive guide provides insights into Temporal Workflows, covering Workflow Definitions in various - programming languages, deterministic constraints, handling code changes, and ensuring reliability, durability, and - scalability in a Temporal Application, with examples and best practices for Workflow Versioning and development. + This comprehensive guide provides insights into Temporal Workflows, covering Workflow Definitions in various programming languages, deterministic constraints, handling code changes, and ensuring reliability, durability, and scalability in a Temporal Application, with examples and best practices for Workflow Versioning and development. slug: /workflows toc_max_heading_level: 4 keywords: @@ -25,29 +23,59 @@ This guide provides a comprehensive overview of Temporal Workflows and covers th ## Intro to Workflows -Conceptually, a workflow defines a sequence of steps. With Temporal, those steps are defined by writing code, known as a -Workflow Definition, and are carried out by running that code, which results in a Workflow Execution. +Conceptually, a workflow defines a sequence of steps. With Temporal, those steps are defined by writing code, known as a Workflow Definition, and are carried out by running that code, which results in a Workflow Execution. -In day-to-day conversations, the term Workflow might refer to Workflow Type, a Workflow Definition, or a Workflow -Execution. 
+In day-to-day conversations, the term Workflow might refer to Workflow Type, a Workflow Definition, or a Workflow Execution. 1. A **Workflow Definition** is the code that defines your Workflow. -2. The **Workflow Type** is the name that maps to a Workflow Definition. It's an identifier that makes it possible to - distinguish one type of Workflow (such as order processing) from another (such as customer onboarding). -3. A **Workflow Execution** is a running Workflow, which is created by combining a Workflow Definition with a request to - execute it. You can execute a Workflow Definition any number of times, potentially providing different input each - time (i.e., a Workflow Definition for order processing might process order #123 in one execution and order #567 in - another execution). It is the actual instance of the Workflow Definition running in the Temporal Platform. +2. The **Workflow Type** is the name that maps to a Workflow Definition. It's an identifier that makes it possible to distinguish one type of Workflow (such as order processing) from another (such as customer onboarding). +3. A **Workflow Execution** is a running Workflow, which is created by combining a Workflow Definition with a request to execute it. You can execute a Workflow Definition any number of times, potentially providing different input each time (i.e., a Workflow Definition for order processing might process order #123 in one execution and order #567 in another execution). It is the actual instance of the Workflow Definition running in the Temporal Platform. -You'll develop those Workflows by writing code in a general-purpose programming language such as Go, Java, TypeScript, -or Python. The code you write is the same code that will be executed at runtime, so you can use your favorite tools and -libraries to develop Temporal Workflows. +You'll develop those Workflows by writing code in a general-purpose programming language such as Go, Java, TypeScript, or Python. 
The code you write is the same code that will be executed at runtime, so you can use your favorite tools and libraries to develop Temporal Workflows. -Temporal Workflows are resilient. -They can run—and keep running—for years, even if the underlying infrastructure fails. -If the application itself crashes, Temporal will automatically recreate its pre-failure state so it can continue right where it left off. +Temporal Workflows are resilient. They can run—and keep running—for years, even if the underlying infrastructure fails. If the application itself crashes, Temporal will automatically recreate its pre-failure state so it can continue right where it left off. -Each Workflow Execution progresses through a series of **Commands** and **Events**, which are recorded in an **Event -History**. +Each Workflow Execution emits a series of **Commands** and processes a sequence of **Events**, which are recorded in an **Event History**. -Workflows must follow deterministic constraints to ensure consistent replay behavior. +### How Workflow replay works + +When a Workflow [yields](https://en.wikipedia.org/wiki/Yield_(multithreading)) or encounters an error, the goal of Temporal is to bring the Workflow back to the exact same state it was in before the pause occurred. To make that possible, Temporal keeps the Event History. This is a complete, ordered log of everything that has already happened in a Workflow. + +The Event History could look like this for example: + +- Started Timer for 5 minutes +- Scheduled Activity X +- Activity X completed with result Y +- Received Signal Z + +This history is the source of truth for everything that happens in the Workflow. + +#### Resuming a Workflow + +When it's time to continue the Workflow, Temporal doesn't restore memory from a snapshot. It starts the Workflow code from the beginning, replays the Event History step by step, and uses that history to guide the code back to the exact state as before. 
So the Workflow code is re-run, but uses the recorded events instead of redoing work. Temporal doesn't always have to start from the beginning, though: if the Workflow's state is still cached on a Worker, replay can be skipped.
+
+Because the Workflow is re-executed to rebuild its state:
+
+- It has to make the same decisions when given the same history, which makes a Workflow deterministic.
+- It shouldn't depend on any values _not_ recorded in the history, which would differ between runs.
+
+For example:
+
+- A direct call to `Date.now()` could return a different value on replay.
+- A random number could change.
+- A network call, which wasn't performed inside an Activity, could return something new.
+
+If those values changed, the Workflow could take a different path and fail to match the recorded history. To solve this, Temporal provides replay-safe versions of common operations:
+
+- Time is read from the Workflow context so it matches the recorded history.
+- Timers are recorded as events and don’t “wait” again during replay.
+- Randomness and similar values can be captured once and reused.
+
+These APIs make sure the Workflow receives the same values during replay as it did originally. Activities handle everything that interacts with the outside world, like:
+
+- API calls
+- Database queries
+- LLM invocations
+- File I/O
+
+When a Workflow calls an Activity, the Activity runs once and its result is recorded in the Event History. During replay, that result is reused, not recomputed. So Activities aren't executed again during replay.
diff --git a/docs/evaluate/why-temporal.mdx b/docs/evaluate/why-temporal.mdx
index 9cf8549986..bf862ec49c 100644
--- a/docs/evaluate/why-temporal.mdx
+++ b/docs/evaluate/why-temporal.mdx
@@ -22,6 +22,8 @@ But most of them revolve around these three themes:
 - Productive development paradigms and code structure
 - Visible distributed application state
 
+See the list of [use cases for Temporal](/evaluate/use-cases-design-patterns) to better understand how Temporal can fit into your system. Temporal supports use cases such as AI agent orchestration, enabling durable execution of LLM calls, tool use, and complex AI workflows.
+
 :::tip See Temporal in action
 
 Watch the following video to see how Temporal ensures an order-fulfillment system can recover from various failures, from process crashes to unreachable APIs.
diff --git a/docs/production-deployment/data-encryption.mdx b/docs/production-deployment/data-encryption.mdx
index 133aa1c59d..9545c93f94 100644
--- a/docs/production-deployment/data-encryption.mdx
+++ b/docs/production-deployment/data-encryption.mdx
@@ -1,6 +1,6 @@
 ---
 id: data-encryption
-title: Codec Server - Temporal Platform feature guide
+title: Codecs and Encryption
 sidebar_label: Codecs and Encryption
 description: Encrypt data in Temporal Server to secure Workflow, Activity, and Worker information. Use custom Payload Codecs for encryption/decryption, set up Codec Servers for remote decoding, and ensure secure access.
 slug: /production-deployment/data-encryption
@@ -18,10 +18,11 @@ tags:
 import { CaptionedImage } from '@site/src/components';
 
-Temporal Server stores and persists the data handled in your Workflow Execution.
-Encrypting this data ensures that any sensitive application data is secure when handled by the Temporal Server.
+The Temporal Service persists data from your Workflow Executions, including inputs, outputs, and results.
To protect +sensitive data, use a [Payload Codec](/payload-codec) to encrypt payloads before they reach the Temporal Service. With +encryption enabled, data exists unencrypted only on the Client and the Worker process, on hosts that you control. -For example, if you have sensitive information passed in the following objects that are persisted in the Workflow Execution Event History, use encryption to secure it: +The following data is persisted in the Event History and can be encrypted: - Inputs and outputs/results in your [Workflow](/workflow-execution), [Activity](/activity-execution), and [Child Workflow](/child-workflows) - [Signal](/sending-messages#sending-signals) inputs @@ -30,37 +31,19 @@ For example, if you have sensitive information passed in the following objects t - [Query](/sending-messages#sending-queries) inputs and results - Results of [Local Activities](/local-activity) and [Side Effects](/workflow-execution/event#side-effect) - [Application errors and failures](/references/failures). - Failure messages and call stacks are not encoded as codec-capable Payloads by default; you must explicitly enable encoding these common attributes on failures. - For more details, see [Failure Converter](/failure-converter). + Failure messages and call stacks are not encoded as codec-capable Payloads by default; you must explicitly enable + encoding these common attributes on failures. For more details, see [Failure Converter](/failure-converter). -Using encryption ensures that your sensitive data exists unencrypted only on the Client and the Worker Process that is executing the Workflows and Activities, on hosts that you control. +To view encrypted data in the Web UI and CLI, set up a [Codec Server](/codec-server). The following sections cover how +to set up a Codec Server and configure the Web UI and CLI to use it. -By default, your data is serialized to a [Payload](/dataconversion#payload) by a [Data Converter](/dataconversion). 
-To encrypt your Payload, configure your custom encryption logic with a [Payload Codec](/payload-codec) and set it with a [custom Data Converter](/default-custom-data-converters#custom-data-converter). +For encryption implementation examples, see the following samples: -A Payload Codec does byte-to-byte conversion to transform your Payload (for example, by implementing compression and/or encryption and decryption) and is an optional step that happens between the Client and the [Payload Converter](/payload-converter): - - - -You can run your Payload Codec with a [Codec Server](/codec-server) and use the Codec Server endpoints in the Web UI and CLI to decode your encrypted Payload locally. -For details on how to set up a Codec Server, see [Codec Server setup](#codec-server-setup). - -However, if you plan to set up [remote data encoding](/remote-data-encoding) for your data, ensure that you consider all security implications of running encryption remotely before implementing it. - -When implementing a custom codec, it is recommended to perform your compression or encryption on the entire input Payload and store the result in the data field of a new Payload with a different encoding metadata field. -This ensures that the input Payload's metadata is preserved. -When the encoded Payload is sent to be decoded, you can verify the metadata field before applying the decryption. -If your Payload is not encoded, it is recommended to pass the unencoded data to the decode function instead of failing the conversion. 
- -Examples for implementing encryption: - -- [Go sample](https://github.com/temporalio/samples-go/tree/main/encryption) -- [Java sample](https://github.com/temporalio/samples-java/tree/main/core/src/main/java/io/temporal/samples/encryptedpayloads) -- [Python sample](https://github.com/temporalio/samples-python/tree/main/encryption) -- [TypeScript sample](https://github.com/temporalio/samples-typescript/tree/main/encryption) -- [.NET sample](https://github.com/temporalio/samples-dotnet/tree/main/src/Encryption) +- [Go](https://github.com/temporalio/samples-go/tree/main/encryption) +- [Java](https://github.com/temporalio/samples-java/tree/main/core/src/main/java/io/temporal/samples/encryptedpayloads) +- [Python](https://github.com/temporalio/samples-python/tree/main/encryption) +- [TypeScript](https://github.com/temporalio/samples-typescript/tree/main/encryption) +- [.NET](https://github.com/temporalio/samples-dotnet/tree/main/src/Encryption) ## Codec Server setup {#codec-server-setup} @@ -68,13 +51,10 @@ Use a Codec Server to programmatically decode your encoded [payloads](/dataconve A Codec Server is an HTTP server that uses your custom Codec logic to decode your data remotely. The Codec Server is independent of the Temporal Service and decodes your encrypted payloads through predefined endpoints. You create, operate, and manage access to your Codec Server in your own environment. -The Temporal CLI and the Web UI in turn provide built-in hooks to call the Codec Server to decode encrypted payloads on demand. - -The Codec Server is independent of the Temporal Server and decodes your encrypted payloads through endpoints. -When you configure a Codec Server endpoint in the Temporal Web UI or CLI, the Web UI and CLI use the remote endpoint to receive decoded payloads from the Codec Server. +When you configure a Codec Server endpoint in the Web UI or CLI, the Web UI and CLI use the remote endpoint to send and receive payloads from the Codec Server. 
See [API contract requirements](#api-contract-specifications). -Decoded payloads can then be displayed in the Workflow Execution Event History on the Web UI. Note that when you use a Codec Server, the decoded payloads are decoded and returned on the client side only; payloads on the Temporal Server (whether on Temporal Cloud or a self-hosted Temporal Service) remain encrypted. +Decoded payloads can then be displayed in the Workflow Execution Event History on the Web UI. When you use a Codec Server, the decoded payloads are decoded and returned on the client side only. Payloads on the Temporal Service (whether on Temporal Cloud or self-hosted) remain encrypted. Because you create, operate, and manage access to your Codec Server in your controlled environment, ensure that you consider the following: @@ -91,7 +71,13 @@ When you create your Codec Server to handle requests from the Web UI, the follow #### Endpoints -The Web UI and CLI send a POST to a `/decode` endpoint. In your Codec Server, create a `/decode` path and pass the incoming payload to the decode method in your Payload Codec. +The Web UI and CLI send POST requests to the following endpoints on your Codec Server: + +- `/decode` passes incoming payloads to the decode method in your Payload Codec. +- `/encode` passes incoming payloads to the encode method in your Payload Codec. +- `/download` retrieves and decodes payloads from [External Storage](/external-storage). This endpoint is only needed if + your Workers use External Storage. See [Codec Server with External Storage](/codec-server#external-storage) for + details. For examples on how to create your Codec Server, see the following Codec Server implementation samples: @@ -346,14 +332,12 @@ temporal workflow show \ --codec-auth 'auth-header' ``` -### Working with Large Payloads - -Codec Servers can be used for more than encryption and decryption of sensitive data. 
-Codec Server behavior is left up to implementers -- they can also call external services or perform other tasks, as long as they hook in at the encoding and decoding stages of a Workflow payload. +### Working with large payloads -By default, Temporal limits payload size to 4MB. -If this limitation is problematic for your use case, you could implement a codec that persists your payloads to an object store outside of workflow histories. -An example implementation is available from [DataDog](https://github.com/DataDog/temporal-large-payload-codec). +If your payloads exceed the Temporal Service's size limits, use [External Storage](/external-storage) to offload large +payloads to an external store like Amazon S3. When External Storage is configured, your Codec Server can also retrieve +and decode these payloads for viewing in the Web UI and CLI. See +[Codec Server with External Storage](/codec-server#external-storage) for details. ### Temporal Nexus diff --git a/docs/production-deployment/self-hosted-guide/upgrade-server.mdx b/docs/production-deployment/self-hosted-guide/upgrade-server.mdx index a2bcddc2de..0419e841df 100644 --- a/docs/production-deployment/self-hosted-guide/upgrade-server.mdx +++ b/docs/production-deployment/self-hosted-guide/upgrade-server.mdx @@ -36,9 +36,10 @@ system behavior; however there is no guarantee that there is compatibility betwe When upgrading the Temporal Server, there are two key considerations to keep in mind: -1. **Sequential Upgrades:** Temporal Server should be upgraded sequentially. That is, if you're on version \(v1.n.x\), - your next upgrade should be to \(v1.n+1.x\) or the closest available subsequent version. This sequence should be - repeated until your desired version is reached. +1. **Sequential Upgrades:** Temporal Server should be upgraded sequentially, one minor version at a time. Before + bumping to the next minor version, first upgrade to the highest available patch version of your current minor + version. 
For example, if you're on \(v1.n.0\), upgrade to \(v1.n.latest\) first, then proceed to + \(v1.(n+1).latest\). Repeat this sequence until you reach your desired version. 2. **Data Compatibility:** During an upgrade, the Temporal Server either updates or restructures the existing version data to match the data format of the newer version. Temporal Server ensures backward compatibility only between two @@ -71,7 +72,8 @@ formats to become unrecognizable. If the old format of the data can't be read to upgrades fail. Check the [Temporal Server releases](https://github.com/temporalio/temporal/releases) and follow these releases in -order. You can skip patch versions; use the latest patch of a minor version when upgrading. +order. Before upgrading to the next minor version, upgrade to the highest available patch version of your current minor +version first. Also, be aware that each upgrade requires the History Service to load all Shards and update the Shard metadata, so allow approximately 10 minutes on each version for these processes to complete before upgrading to the next version. diff --git a/docs/quickstarts.mdx b/docs/quickstarts.mdx index de01b6db5a..fcc0d6e97b 100644 --- a/docs/quickstarts.mdx +++ b/docs/quickstarts.mdx @@ -22,6 +22,7 @@ Choose your language to get started quickly. { href: "/develop/ruby/set-up-local-ruby", title: "Ruby", description: "Install the Ruby SDK and run a Hello World Workflow in Ruby." }, { href: "/develop/typescript/set-up-your-local-typescript", title: "TypeScript", description: "Install the TypeScript SDK and run a Hello World Workflow in TypeScript." }, { href: "/develop/dotnet/set-up-your-local-dotnet", title: ".NET", description: "Install the .NET SDK and run a Hello World Workflow in C#." }, + { href: "/develop/rust/quickstart", title: "Rust", description: "Install the Rust SDK and run a Hello World Workflow in Rust." 
}, ]} /> diff --git a/docs/troubleshooting/blob-size-limit-error.mdx b/docs/troubleshooting/blob-size-limit-error.mdx index f8c66473eb..a02388da1e 100644 --- a/docs/troubleshooting/blob-size-limit-error.mdx +++ b/docs/troubleshooting/blob-size-limit-error.mdx @@ -1,8 +1,10 @@ --- id: blob-size-limit-error -title: Troubleshoot the blob size limit error -sidebar_label: Blob size limit error -description: The BlobSizeLimitError occurs when a Workflow's payload exceeds the 2 MB request limit or the 4 MB Event History transaction limit set by Temporal. Reduce blob size via compression or batching. +title: Troubleshoot payload and gRPC message size limit errors +sidebar_label: Message size limit errors +description: + Temporal enforces size limits on data passed between Workers and the Temporal Service. Learn about the 2 MB payload + limit and 4 MB gRPC message limit, their error messages, and how to resolve them. toc_max_heading_level: 4 keywords: - error @@ -12,48 +14,122 @@ tags: - Failures --- -The `BlobSizeLimitError` is an error that occurs when the size of a blob (payloads including Workflow context and each Workflow and Activity argument and return value) exceeds the set limit in Temporal. +Temporal enforces size limits on the data that passes between the Temporal Client, Workers, and the Temporal Service. +There are two distinct limits, each producing different error messages and behaviors, and they require different +solutions: -- The max payload for a single request is 2 MB. -- The max size limit for any given [Event History](/workflow-execution/event#event-history) transaction is 4 MB. +- [Payload size limit](#payload-size-limit) +- [gRPC message size limit](#grpc-message-size-limit) -## Why does this error occur? +## Payload size limit -This error occurs when the size of the blob exceeds the maximum size allowed by Temporal. +The Temporal Service enforces a size limit on individual payloads. 
This limit is 2 MB on Temporal Cloud, but is +configurable on self-hosted deployments with a default of 2 MB. A [payload](/dataconversion#payload) represents the +serialized binary data for the input and output of Workflows and Activities. -This limit helps ensure that the Temporal Service prevents excessive resource usage and potential performance issues when handling large payloads. +### Error messages -## How do I resolve this error? +The error message depends on which operation carried the oversized payload and which SDK version is in use. Examples +include: -To resolve this error, reduce the size of the blob so that it is within the 4 MB limit. +- `WORKFLOW_TASK_FAILED_CAUSE_PAYLOADS_TOO_LARGE` +- `[TMPRL1103] Attempted to upload payloads with size that exceeded the error limit.` +- `BadScheduleActivityAttributes: ScheduleActivityTaskCommandAttributes.Input exceeds size limit` +- `Complete result exceeds size limit` +- `CompleteWorkflowExecutionCommandAttributes.Result exceeds size limit` +- `WORKFLOW_TASK_FAILED_CAUSE_BAD_UPDATE_WORKFLOW_EXECUTION_MESSAGE` -There are multiple strategies you can use to avoid this error: +### Error behavior {#payload-error-behavior} -1. Use compression with a [custom payload codec](/payload-codec) for large payloads. +The behavior when a payload exceeds the size limit depends on the SDK version. - - This addresses the immediate issue of the blob size limit; however, if blob sizes continue to grow this problem can arise again. +**Python SDK 1.23.0+:** The SDK fails the Workflow Task with cause `WORKFLOW_TASK_FAILED_CAUSE_PAYLOADS_TOO_LARGE`. The +Workflow is not terminated and remains open, so you can deploy a fix and allow the Workflow to continue. -2. Break larger batches of commands into smaller batch sizes: +**All other SDK versions:** The behavior depends on whether the oversized payload is an input or a result: - - Workflow-level batching: - 1. 
Modify the Workflow to process Activities or Child Workflows into smaller batches.
-     2. Iterate through each batch, waiting for completion before moving to the next.
-   - Workflow Task-level batching:
-     1. Execute Activities in smaller batches within a single Workflow Task.
-     2. Introduce brief pauses or sleeps (for example, 1ms) between batches.
+- **Inputs (Workflow input, Activity input):** The Temporal Service rejects the command and terminates the Workflow.
+  You'll need to resolve the issue and restart the Workflow.
+- **Activity result:** The Temporal Service rejects the Activity completion and the Activity fails with an error.
+- **Workflow result:** The Workflow gets stuck in a retry loop. The server rejects the `CompleteWorkflowExecution`
+  command, and replay produces the same oversized result.
 
-3. Consider offloading large payloads to an object store to reduce the risk of exceeding blob size limits:
+### How to resolve
+
+1. Offload large payloads to an object store to reduce the risk of exceeding payload size limits:
 
    1. Pass references to the stored payloads within the Workflow instead of the actual data.
-   2. Retrieve the payloads from the object store when needed during execution.
+   1. Retrieve the payloads from the object store when needed during execution.
+
+   This is called the
+   [claim check pattern](https://dataengineering.wiki/Concepts/Software+Engineering/Claim+Check+Pattern). The claim
+   check pattern is built into the SDKs as [External Storage](/external-storage), or you can implement your own claim
+   check pattern by using a custom [Payload Codec](/payload-codec).
+
+   This is the most reliable way to avoid hitting payload size limits. Consider implementing the claim check pattern for
+   Workflows and Activities that have the potential to receive or return large payloads, even if they are currently
+   within the limit.
+
+   :::tip Support, stability, and dependency info
+
+   External Storage is currently in [Pre-release](/evaluate/development-production-features/release-stages#pre-release).
+   All APIs are experimental and may be subject to backwards-incompatible changes. Join the
+   [#large-payloads Slack channel](https://temporalio.slack.com/archives/C09VA2DE15Y) to provide feedback or ask for
+   help.
+
+   :::
+
+1. Use compression with a [custom Payload Codec](/payload-codec) for large payloads. This may address the immediate
+   issue, but if payload sizes continue to grow, the problem can arise again.
+
+## gRPC message size limit
 
-## Workflow termination due to oversized response
 
+All communication between the Temporal Client, Workers, and the Temporal Service uses gRPC, which enforces a 4 MB limit
+on each request. This limit applies to the full request, including all payload data and command metadata. For example,
+when a Workflow schedules multiple Activities in a single Workflow Task, the Worker sends a single request containing
+the commands to schedule all of those Activities, along with their inputs.
 
-When a Workflow Task response exceeds the 4 MB gRPC message size limit, Temporal automatically terminates the Workflow Execution. This is a non-recoverable error. The Workflow can't progress if it generates a response that's too large, so retrying won't help.
 
+A Workflow can hit this limit even when every individual payload is under 2 MB. Scheduling several Activities with
+moderate-sized inputs, or hundreds of Activities with tiny inputs, in the same Workflow Task can push the combined
+request past 4 MB. Activity results are also subject to this limit.
 
-This typically happens when a Workflow schedules too many Activities, Child Workflows, or other commands in a single Workflow Task. The total size of all commands generated by the Workflow Task must fit within the 4 MB limit. 
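The batching described under "How to resolve" below comes down to bounding how much work any single Workflow Task schedules at once. As a rough sketch of that control flow — plain asyncio code, not Temporal SDK APIs; `process_item` is a hypothetical stand-in for an Activity invocation:

```python
import asyncio


async def process_item(item: int) -> int:
    # Hypothetical stand-in for executing an Activity from a Workflow.
    await asyncio.sleep(0)
    return item * 2


async def run_in_batches(items, batch_size=10):
    results = []
    for start in range(0, len(items), batch_size):
        batch = items[start:start + batch_size]
        # Awaiting each batch before starting the next bounds how many
        # schedule commands (and their inputs) a single Workflow Task carries.
        results.extend(await asyncio.gather(*(process_item(i) for i in batch)))
    return results
```

The same shape applies to Child Workflows: anything that would otherwise be scheduled as one large burst of commands is split across multiple Workflow Tasks.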
+### Error messages
 
-If your Workflow was terminated for this reason, you'll see a `WorkflowExecutionTerminated` event in the Event History with the cause `WORKFLOW_TASK_FAILED_CAUSE_GRPC_MESSAGE_TOO_LARGE`.
+The error message depends on which operation carried the oversized gRPC message and which SDK version is in use.
 
-To prevent this, use the batching strategies described above to split work across multiple Workflow Tasks instead of scheduling everything at once.
+- `WORKFLOW_TASK_FAILED_CAUSE_GRPC_MESSAGE_TOO_LARGE`
+- `ScheduleToCloseTimeout` (Activities only, see [error behavior](#grpc-error-behavior) below)
 
-See the [gRPC Message Too Large error reference](/references/errors#grpc-message-too-large) for more details.
+### Error behavior {#grpc-error-behavior}
+
+The behavior when a gRPC message exceeds the size limit depends on the SDK version.
+
+**Python SDK 1.23.0+:** The SDK fails the Workflow Task with cause `WORKFLOW_TASK_FAILED_CAUSE_GRPC_MESSAGE_TOO_LARGE`.
+The Workflow is not terminated and remains open, so you can deploy a fix and allow the Workflow to continue. For
+Activities, the Activity fails with an explicit error instead of timing out silently.
+
+**All other SDK versions:** The behavior depends on where the oversized message originates:
+
+- **Workflow Tasks:** The Workflow gets stuck in a retry loop that isn't visible in the Event History. This happens
+  because when the Worker completes a Workflow Task, it sends all the commands the Workflow produced (such as Activity
+  schedules and their inputs) back to the Temporal Service. If the combined size exceeds 4 MB, the SDK catches the gRPC
+  error and sends a failed Workflow Task response with cause `WORKFLOW_TASK_FAILED_CAUSE_GRPC_MESSAGE_TOO_LARGE`. Replay
+  produces the same oversized request every time, so the Workflow never makes progress.
+
+- **Activity Tasks:** The Activity gets stuck in a retry loop or exits with a `ScheduleToCloseTimeout`. 
The Activity + executes successfully, but the Worker can't deliver the oversized result over gRPC. The server never receives the + completion, so it retries the Activity. Each retry completes successfully but fails to deliver the result. The + Activity retries until the `ScheduleToCloseTimeout` expires. If no `ScheduleToCloseTimeout` is set, the Activity + retries indefinitely until the Workflow is manually terminated. The `ResourceExhausted` gRPC error only appears in + Worker logs. + +### How to resolve + +1. Break larger batches of commands into smaller batch sizes: + - Workflow-level batching: + 1. Modify the Workflow to process Activities or Child Workflows in smaller batches. + 2. Iterate through each batch, waiting for completion before moving to the next. + - [Workflow Task](/tasks#workflow-task)-level batching: + 1. Execute Activities in smaller batches within a single Workflow Task. + 2. Introduce brief pauses or sleeps between batches. +2. If the request is large because of payload sizes rather than the number of commands, refer to the + [Payload size limit](#payload-size-limit) section for solutions. diff --git a/docs/with-ai.mdx b/docs/with-ai.mdx index 680e209409..79e957013f 100644 --- a/docs/with-ai.mdx +++ b/docs/with-ai.mdx @@ -8,7 +8,7 @@ description: Give your AI coding agent Temporal expertise and real-time access t import Tabs from '@theme/Tabs'; import TabItem from '@theme/TabItem'; -Give your AI coding agent Temporal expertise with Skills and real-time documentation access with the Temporal Docs MCP +Give your AI coding agent Temporal expertise with Skills and real-time documentation access with the Temporal Knowledge Base MCP Server. ## Skills @@ -112,10 +112,10 @@ git clone https://github.com/temporalio/skill-temporal-cloud.git ~/.claude/skill Restart your coding agent after installing. 
-## Temporal Docs MCP Server
+## Temporal Knowledge Base MCP Server
 
-Connect Temporal documentation directly to your AI assistant for accurate, up-to-date answers about Temporal. The
-Temporal docs MCP server gives AI tools real-time access to our documentation, so responses draw from current docs
+Connect Temporal expertise directly to your AI assistant for accurate, up-to-date answers about Temporal. The
+Temporal Knowledge Base MCP Server gives AI tools real-time access to best practices compiled from our documentation,
+educational materials, community forum responses, and Slack channels, so responses draw from current expertise
 rather than training data.
 
 The server requires anonymous authentication using any Google account to enforce rate limits and prevent abuse. We
@@ -123,7 +123,7 @@ cannot see nor do we collect any contact information from this.
 
 ### Claude Code
 
-Add the Temporal docs MCP server globally so it's available in all your projects:
+Add the Temporal Knowledge Base MCP Server globally so it's available in all your projects:
 
1. 
Register the MCP server with Claude Code: diff --git a/sidebars.js b/sidebars.js index 8021261aac..8601f4629b 100644 --- a/sidebars.js +++ b/sidebars.js @@ -265,6 +265,7 @@ module.exports = { items: [ 'develop/java/activities/basics', 'develop/java/activities/execution', + 'develop/java/activities/standalone-activities', 'develop/java/activities/timeouts', 'develop/java/activities/asynchronous-activity', 'develop/java/activities/benign-exceptions', diff --git a/src/components/elements/SdkLogos.js b/src/components/elements/SdkLogos.js index 60efc0ac8f..7c64f16e0f 100644 --- a/src/components/elements/SdkLogos.js +++ b/src/components/elements/SdkLogos.js @@ -43,6 +43,12 @@ const supportedTech = [ alt: 'Ruby logo', class: 'w-10', }, + { + link: '/develop/rust', + image: '/img/sdks/svgs/rust.svg', + alt: 'Rust logo', + class: 'w-10', + }, ]; const displayTechListItems = () => { diff --git a/static/diagrams/codec-server-dark.svg b/static/diagrams/codec-server-dark.svg new file mode 100644 index 0000000000..3573fd754d --- /dev/null +++ b/static/diagrams/codec-server-dark.svg @@ -0,0 +1 @@ + \ No newline at end of file diff --git a/static/diagrams/codec-server-with-external-storage-dark.svg b/static/diagrams/codec-server-with-external-storage-dark.svg new file mode 100644 index 0000000000..e7140fbeeb --- /dev/null +++ b/static/diagrams/codec-server-with-external-storage-dark.svg @@ -0,0 +1 @@ + \ No newline at end of file diff --git a/static/diagrams/codec-server-with-external-storage.svg b/static/diagrams/codec-server-with-external-storage.svg new file mode 100644 index 0000000000..726707c33a --- /dev/null +++ b/static/diagrams/codec-server-with-external-storage.svg @@ -0,0 +1 @@ + \ No newline at end of file diff --git a/static/diagrams/codec-server.svg b/static/diagrams/codec-server.svg new file mode 100644 index 0000000000..f943bf11d6 --- /dev/null +++ b/static/diagrams/codec-server.svg @@ -0,0 +1 @@ + \ No newline at end of file diff --git 
a/static/img/develop/task-queue-priority-fairness/fairness-details.png b/static/img/develop/task-queue-priority-fairness/fairness-details.png index 3444f75914..9672fb11c6 100644 Binary files a/static/img/develop/task-queue-priority-fairness/fairness-details.png and b/static/img/develop/task-queue-priority-fairness/fairness-details.png differ diff --git a/static/img/develop/task-queue-priority-fairness/inheritance.png b/static/img/develop/task-queue-priority-fairness/inheritance.png new file mode 100644 index 0000000000..c9d0f03d95 Binary files /dev/null and b/static/img/develop/task-queue-priority-fairness/inheritance.png differ diff --git a/static/img/develop/task-queue-priority-fairness/priority-details.png b/static/img/develop/task-queue-priority-fairness/priority-details.png index d99a8f10f1..cb8c581487 100644 Binary files a/static/img/develop/task-queue-priority-fairness/priority-details.png and b/static/img/develop/task-queue-priority-fairness/priority-details.png differ diff --git a/static/img/develop/task-queue-priority-fairness/priority-fairness.png b/static/img/develop/task-queue-priority-fairness/priority-fairness.png index 9c6d6ba704..13eb373566 100644 Binary files a/static/img/develop/task-queue-priority-fairness/priority-fairness.png and b/static/img/develop/task-queue-priority-fairness/priority-fairness.png differ