Commit 49deba6

Merge branch 'main' into copilot/fix-typo-in-configuration-example

2 parents 0ff93b1 + 59e1e10

46 files changed: 2036 additions & 2320 deletions


.github/workflows/snipsync.yml

Lines changed: 83 additions & 0 deletions
```yaml
name: Snipsync

on:
  schedule:
    - cron: '0 6 * * *' # Daily at 6:00 UTC
  workflow_dispatch:

jobs:
  snipsync:
    name: Sync code snippets
    runs-on: ubuntu-latest
    permissions:
      contents: write
      pull-requests: write
    steps:
      - name: Generate token
        id: generate_token
        uses: actions/create-github-app-token@v1
        with:
          app-id: ${{ secrets.TEMPORAL_CICD_APP_ID }}
          private-key: ${{ secrets.TEMPORAL_CICD_PRIVATE_KEY }}

      - name: Checkout
        uses: actions/checkout@v6
        with:
          token: ${{ steps.generate_token.outputs.token }}
          ref: main

      - name: Setup Node
        uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: yarn

      - name: Install dependencies
        run: yarn install --frozen-lockfile

      - name: Run snipsync
        run: yarn snipsync

      - name: Check for changes
        id: changes
        run: |
          if git diff --quiet; then
            echo "has_changes=false" >> "$GITHUB_OUTPUT"
          else
            echo "has_changes=true" >> "$GITHUB_OUTPUT"
          fi

      - name: Commit and push changes
        if: steps.changes.outputs.has_changes == 'true'
        run: |
          git config user.name "github-actions[bot]"
          git config user.email "github-actions[bot]@users.noreply.github.com"

          branch_name="snipsync/daily-update"
          git checkout -B "$branch_name"
          git add docs/
          git commit -m "chore: sync code snippets via snipsync"
          git push --force-with-lease origin "$branch_name"

      - name: Create or update PR
        if: steps.changes.outputs.has_changes == 'true'
        env:
          GH_TOKEN: ${{ steps.generate_token.outputs.token }}
        run: |
          branch_name="snipsync/daily-update"
          existing_pr=$(gh pr list --head "$branch_name" --state open --json number --jq '.[0].number')

          if [ -n "$existing_pr" ]; then
            echo "PR #$existing_pr already exists — updated with latest push."
          else
            gh pr create \
              --title "chore: sync code snippets" \
              --body "$(cat <<'EOF'
          Automated daily sync of code snippets from source repositories via snipsync.

          This PR was generated by the [Snipsync workflow](https://github.com/${{ github.repository }}/actions/workflows/snipsync.yml).
          EOF
          )" \
              --head "$branch_name" \
              --base "main"
          fi
```
docs/cloud/metrics/index.mdx

Lines changed: 2 additions & 0 deletions
```diff
@@ -45,3 +45,5 @@ Cloud Metrics for all Namespaces in your account are available from two sources:
 OpenMetrics is the recommended option for most users.
 
 :::
+
+For setting up SDK metrics emitted by your Workers and Clients, see [SDK metrics setup](/cloud/metrics/sdk-metrics-setup).
```

docs/cloud/metrics/prometheus-grafana.mdx

Lines changed: 21 additions & 208 deletions
Large diffs are not rendered by default.
docs/cloud/metrics/sdk-metrics-setup.mdx

Lines changed: 147 additions & 0 deletions
---
id: sdk-metrics-setup
title: Monitor SDK metrics with Prometheus and Grafana
sidebar_label: SDK Metrics
description: Set up Temporal SDK metrics with Prometheus and Grafana for monitoring Worker and Client performance.
slug: /cloud/metrics/sdk-metrics-setup
toc_max_heading_level: 4
keywords:
  - temporal sdk metrics
  - prometheus scrape endpoint
  - sdk metrics setup
  - temporal sdk monitoring
  - grafana sdk metrics
  - worker metrics
  - temporal sdk prometheus
  - sdk metrics dashboard
tags:
  - Metrics
  - Observability
  - Temporal Cloud
---
import { ZoomingImage } from '@site/src/components';

SDK metrics are emitted by the SDK Clients used to start your Workers and to start, Signal, or Query your Workflow Executions.
Unlike [Temporal Cloud metrics](/cloud/metrics/), which are exposed through a Prometheus HTTP API endpoint, SDK metrics require you to set up a Prometheus scrape endpoint in your application code so that Prometheus can collect and aggregate them.

For a full list of available SDK metrics and their descriptions, see the [SDK metrics reference](/references/sdk-metrics).

The process for setting up SDK metrics includes the following steps:

1. [Expose a metrics endpoint](#sdk-metrics-setup) in your application code where Prometheus can scrape SDK metrics.
2. [Configure Prometheus](#prometheus-configuration) to scrape your SDK metrics endpoints.
3. [Add an SDK metrics data source](#grafana-data-source-configuration) in Grafana.
4. [Set up dashboards](#grafana-dashboards-setup) to visualize SDK metrics.

Before you begin, set up your connection to Temporal Cloud using an SDK of your choice and have some Workflows running on Temporal Cloud.
Ensure Prometheus and Grafana are installed.

- [Go](/develop/go/temporal-client#connect-to-temporal-cloud)
- [Java](/develop/java/temporal-client#connect-to-temporal-cloud)
- [Python](/develop/python/temporal-client#connect-to-temporal-cloud)
- [TypeScript](/develop/typescript/core-application#connect-to-temporal-cloud)
- [.NET](/develop/dotnet/temporal-client#connect-to-temporal-cloud)
## Expose a metrics endpoint {#sdk-metrics-setup}

You must configure a Prometheus scrape endpoint for Prometheus to collect and aggregate your SDK metrics.
Each language development guide has details on how to set this up.

- [Go SDK](/develop/go/observability#metrics)
- [Java SDK](/develop/java/observability#metrics)
- [TypeScript SDK](/develop/typescript/observability#metrics)
- [Python SDK](/develop/python/observability#metrics)
- [.NET SDK](/develop/dotnet/observability#metrics)

For working examples of how to configure metrics in each SDK, see the metrics samples:

- [Go SDK Samples](https://github.com/temporalio/samples-go/tree/main/metrics)
- [Java SDK Samples](https://github.com/temporalio/samples-java/tree/main/core/src/main/java/io/temporal/samples/metrics)
- [TypeScript SDK Samples](https://github.com/temporalio/samples-typescript/tree/main/interceptors-opentelemetry)
- [Python SDK Samples](https://github.com/temporalio/samples-python/tree/main/custom_metric)
- [.NET SDK Samples](https://github.com/temporalio/samples-dotnet/tree/main/src/OpenTelemetry/DotNetMetrics)

Some examples use OpenTelemetry to instrument metrics. In that case, it is useful to use a
[Prometheus exporter with OpenTelemetry](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/prometheusexporter) to expose metrics for scraping.
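As a rough illustration of what those SDK guides set up, here is a minimal sketch of a Prometheus text-format scrape endpoint built with only the Python standard library. This is not Temporal SDK code: in practice the SDK's telemetry runtime or an OpenTelemetry Prometheus exporter serves this endpoint for you, and the metric names below are placeholders.

```python
# Hypothetical sketch: serve Prometheus exposition-format metrics over HTTP
# using only the standard library. The metric names are placeholders, not
# real Temporal SDK metric output.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

METRICS = {
    "temporal_worker_task_slots_available": 100,
    "temporal_request_total": 42,
}

def render_metrics() -> str:
    # Prometheus text format: one "name value" pair per line.
    return "".join(f"{name} {value}\n" for name, value in METRICS.items())

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/metrics":
            self.send_error(404)
            return
        body = render_metrics().encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; version=0.0.4")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep example output quiet

# Bind to an ephemeral port for the demo; a real deployment would use a fixed
# port such as 8077 so Prometheus knows where to scrape.
server = HTTPServer(("127.0.0.1", 0), MetricsHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

with urllib.request.urlopen(f"http://127.0.0.1:{port}/metrics") as resp:
    payload = resp.read().decode()
print(payload)

server.shutdown()
```

Whatever serves the endpoint, the result is the same: a plain-text page of `name value` lines at `/metrics` that Prometheus polls on its scrape interval.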
## Configure Prometheus {#prometheus-configuration}

For Temporal SDKs, you must have Prometheus running and configured to listen on the scrape endpoints exposed in your application code.

For this example, you can run Prometheus locally or as a Docker container.
In either case, ensure that you set the listen targets to the ports where you expose your scrape endpoints.
This configuration assumes the scrape endpoint is set to port 8077, as in the [SDK metrics setup](#sdk-metrics-setup) example.

```yaml
global:
  scrape_interval: 30s # Set the scrape interval to every 30 seconds. Default is every 1 minute.
#...

# Set your scrape configuration targets to the ports exposed on your endpoints in the SDK.
scrape_configs:
  - job_name: 'temporalsdkmetrics'
    metrics_path: /metrics
    scheme: http
    static_configs:
      - targets:
          # This is the scrape endpoint where Prometheus listens for SDK metrics.
          - localhost:8077
          # You can have multiple targets here, provided they are set up in your application code.
```

See the [Prometheus documentation](https://prometheus.io/docs/introduction/first_steps/) for more details on how you can run Prometheus locally or using Docker.

To check whether Prometheus is receiving metrics from your SDK target, go to [http://localhost:9090](http://localhost:9090) and navigate to **Status&nbsp;> Targets**.
The status of the target endpoint defined in your configuration appears here.
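If you prefer checking target health from a script rather than the **Status&nbsp;> Targets** page, Prometheus exposes the same information through its HTTP API at `/api/v1/targets`. The sketch below assumes Prometheus is listening on `localhost:9090` and uses the `temporalsdkmetrics` job name from the configuration in this guide; the parsing helper is pure so it can be exercised against a canned response.

```python
# Sketch: check scrape-target health via Prometheus's /api/v1/targets HTTP
# API instead of the web UI. Assumes Prometheus listens on localhost:9090.
import json
import urllib.request

def targets_health(payload: dict) -> dict:
    """Map each scrape job name to the health values of its targets."""
    health = {}
    for target in payload.get("data", {}).get("activeTargets", []):
        job = target.get("labels", {}).get("job", "<unknown>")
        health.setdefault(job, []).append(target.get("health"))
    return health

def fetch_targets(base_url: str = "http://localhost:9090") -> dict:
    # GET /api/v1/targets returns the active scrape targets and their health.
    with urllib.request.urlopen(f"{base_url}/api/v1/targets") as resp:
        return json.load(resp)

# Abridged example of the response shape, so the helper can be demonstrated
# without a live server; swap in fetch_targets() against running Prometheus.
sample = {
    "status": "success",
    "data": {
        "activeTargets": [
            {"labels": {"job": "temporalsdkmetrics"}, "health": "up"},
        ]
    },
}
print(targets_health(sample))
```

A target reported as anything other than `up` here corresponds to the same "down" status you would see in the web UI.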
## Add an SDK metrics data source in Grafana {#grafana-data-source-configuration}

Depending on how you use Grafana, you can install and run it locally, run it as a Docker container, or log in to Grafana Cloud to set up your data sources.

If you have installed and are running Grafana locally, go to [http://localhost:3000](http://localhost:3000) and sign in.

To add the SDK metrics Prometheus endpoint as a data source, do the following:

1. Go to **Configuration&nbsp;> Data sources**.
2. Select **Add data source&nbsp;> Prometheus**.
3. Enter a name for your SDK metrics data source, such as _Temporal SDK metrics_.
4. In the **HTTP** section, enter your Prometheus endpoint in the **URL** field.
   If you are running Prometheus locally as described in this article, enter `http://localhost:9090`.
5. For this example, enable **Skip TLS Verify** in the **Auth** section.
6. Click **Save and test** to verify that the data source is working.

If you have issues setting up this data source, check whether the endpoints set in your SDKs are showing metrics.
If you don't see your SDK metrics at the defined scrape endpoints, check whether your Workers and Workflow Executions are running.
If you see metrics on the scrape endpoints but Prometheus shows your targets as down, there is an issue connecting to the targets set in your SDKs.
Verify your Prometheus configuration and restart Prometheus.

If you're running Grafana as a container, you can set your SDK metrics Prometheus data source in your Grafana configuration.
See the example Grafana configuration described in the [Prometheus and Grafana setup for open-source Temporal Service](/self-hosted-guide/monitoring#grafana) article.

## Set up Grafana dashboards {#grafana-dashboards-setup}

To set up SDK metrics dashboards in Grafana, you can use the UI or configure them directly in your Grafana deployment.

:::tip

Temporal provides community-driven [example dashboards for Temporal SDKs](https://github.com/temporalio/dashboards/tree/master/sdk) that you can customize to meet your needs.

:::

To import a dashboard in Grafana:

1. In the navigation bar, select **Dashboards** > **Import dashboard**.
2. Copy and paste the JSON from the [Temporal SDK sample dashboards](https://github.com/temporalio/dashboards/tree/master/sdk), or import the JSON files into Grafana.
3. Save the dashboard and review the metrics data in the graphs.

To configure dashboards with the UI:

1. Go to **Create > Dashboard** and add an empty panel.
2. On the **Panel configuration** page, in the **Query** tab, select the _Temporal SDK metrics_ data source that you configured earlier.
3. Expand the **Metrics browser** and select the metrics you want.
   Worker performance metrics are described in the [Developer's Guide - Worker performance](/develop/worker-performance).
   All SDK-related metrics are listed in the [SDK metrics](/references/sdk-metrics) reference.
4. The graph now displays data based on your selected queries.
   Note that SDK metrics appear only if you have Workflow Execution data and running Workers.
   If you don't see SDK metrics, run your Worker and Workflow Executions, then monitor the dashboard.

docs/develop/dotnet/core-application.mdx

Lines changed: 54 additions & 0 deletions
@@ -84,6 +84,60 @@

Some calls in .NET do unexpectedly non-deterministic things and are easy to use by accident.
This is especially true with `Task`s.
Temporal requires that the deterministic `TaskScheduler.Current` is used, but many .NET async calls use `TaskScheduler.Default` implicitly (and some analyzers even encourage this).

The following sections cover replay-safe APIs, followed by .NET-specific `Task` gotchas.

#### Logging

Use [`Workflow.Logger`](https://dotnet.temporal.io/api/Temporalio.Workflows.Workflow.html#Temporalio_Workflows_Workflow_Logger), which is an instance of .NET's `ILogger`. The SDK logger automatically suppresses log messages during replay to avoid duplicates:

```csharp
Workflow.Logger.LogInformation("Starting workflow for {Name}", name);
```

For logger configuration, see [Observability: Logging](/develop/dotnet/observability#logging).

#### Random numbers and UUIDs

Use [`Workflow.Random`](https://dotnet.temporal.io/api/Temporalio.Workflows.Workflow.html#Temporalio_Workflows_Workflow_Random) to get a deterministic `Random` instance. For UUIDs, use [`Workflow.NewGuid()`](https://dotnet.temporal.io/api/Temporalio.Workflows.Workflow.html#Temporalio_Workflows_Workflow_NewGuid). Never use `System.Random` or `Guid.NewGuid()` directly:

```csharp
// Good - deterministic across replays
var value = Workflow.Random.Next(1, 100);
var uniqueId = Workflow.NewGuid();

// Bad - different result on every replay
var value = new Random().Next(1, 100);
```

#### Current time

Use [`Workflow.UtcNow`](https://dotnet.temporal.io/api/Temporalio.Workflows.Workflow.html#Temporalio_Workflows_Workflow_UtcNow) instead of `DateTime.UtcNow`. The SDK returns the time of the last Workflow Task, which is consistent across replays:

```csharp
var currentTime = Workflow.UtcNow;
```

#### Detecting replay (advanced)

Use [`Workflow.Unsafe.IsReplaying`](https://dotnet.temporal.io/api/Temporalio.Workflows.Workflow.Unsafe.html#Temporalio_Workflows_Workflow_Unsafe_IsReplaying) to guard code that should only run on the first execution, such as emitting metrics or sending external notifications from an Interceptor.

:::caution

Never use this to affect Workflow business logic: branching on replay status breaks determinism.

:::

```csharp
if (!Workflow.Unsafe.IsReplaying)
{
    EmitMetric("workflow_started", 1);
}
```

If your goal is to always take action when something new is happening, check that `Workflow.Unsafe.IsReplayingHistoryEvents` is false instead. This will be false during read-only operations like Queries and Update validators, and is what the SDK's built-in logger and metric meter use internally.

#### .NET Task gotchas

Here are some known gotchas to avoid with .NET tasks inside of Workflows:

- Do not use `Task.Run` - this uses the default scheduler and puts work on the thread pool.

docs/develop/dotnet/message-passing.mdx

Lines changed: 1 addition & 2 deletions
```diff
@@ -37,8 +37,7 @@ Follow these guidelines when writing your message handlers:
 
 - Message handlers are defined as methods on the Workflow class, using one of the three attributes: [`WorkflowQueryAttribute`](https://dotnet.temporal.io/api/Temporalio.Workflows.WorkflowQueryAttribute.html), [`WorkflowSignalAttribute`](https://dotnet.temporal.io/api/Temporalio.Workflows.WorkflowSignalAttribute.html), and [`WorkflowUpdateAttribute`](https://dotnet.temporal.io/api/Temporalio.Workflows.WorkflowUpdateAttribute.html).
 - The parameters and return values of handlers and the main Workflow function must be [serializable](/dataconversion).
-- Prefer data classes to multiple input parameters.
-  Data class parameters allow you to add fields without changing the calling signature.
+- Prefer data classes to multiple input parameters. Data class parameters allow you to add fields without changing the calling signature. Keep in mind that serialization and deserialization can fail with the default data converter if the new field does not have a default value.
 
 ### Query handlers {#queries}
```
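The caution added in this hunk applies across SDKs, not just .NET. A quick illustration, sketched in Python with plain dataclasses standing in for how a JSON payload converter typically constructs handler inputs (the `GreetInput` names are hypothetical): adding a field with a default keeps payloads from older callers deserializable, while adding one without a default does not.

```python
# Illustrative only: plain dataclasses as stand-ins for deserializing a
# JSON payload into a handler's input object. Class names are hypothetical.
from dataclasses import dataclass

@dataclass
class GreetInput:  # original shape that existing callers still send
    name: str

@dataclass
class GreetInputV2:  # new field has a default, so old payloads still work
    name: str
    language: str = "en"

@dataclass
class GreetInputBad:  # new field without a default breaks old payloads
    name: str
    language: str

old_payload = {"name": "Ada"}  # what an old caller still sends

ok = GreetInputV2(**old_payload)
print(ok.language)  # missing field falls back to the default

try:
    GreetInputBad(**old_payload)
except TypeError as err:
    print("deserialization failed:", err)
```

The same reasoning holds for the .NET default data converter: old payloads lack the new field, so the field must be optional or defaulted for deserialization to succeed.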

docs/develop/go/cancellation.mdx

Lines changed: 1 addition & 1 deletion
```diff
@@ -52,7 +52,7 @@ func YourWorkflow(ctx workflow.Context) error {
 		WaitForCancellation: true,
 	}
 	defer func() {
-		// This logic ensures cleanup only happens if there is a Cancellation error
+		// This logic ensures cleanup only happens if there is a Cancelation error
 		if !errors.Is(ctx.Err(), workflow.ErrCanceled) {
 			return
 		}
```
