**articles/cosmos-db/concepts-limits.md** (5 additions, 5 deletions)
@@ -117,15 +117,15 @@ Each resource scales synchronously and immediately between the minimum RU/s and
### Serverless
- [Serverless](serverless.md) lets you use your Azure Cosmos DB resources in a consumption-based fashion. The following table lists the limits for storage and throughput burstability per container/database. These limits can't be increased. Allocate extra serverless accounts for more storage needs.
+ [Serverless](serverless.md) lets you use your Azure Cosmos DB resources in a consumption-based fashion.
| Resource | Limit |
| --- | --- |
- | Maximum RU/s per container | 20,000* |
- | Maximum storage across all items per (logical) partition | 20 GB |
- | Maximum storage per container | 1 TB |
+ | Maximum storage across all items per (logical) partition | 20 GB ¹ |
+ | Maximum number of distinct (logical) partition keys | Unlimited |
+ | Maximum storage per container | Unlimited |
- *Maximum RU/s availability is dependent on data stored in the container. See [Serverless performance](serverless-performance.md).
+ ¹ If your workload reaches the logical partition limit of 20 GB in production, rearchitecting your application with a different partition key is recommended as a long-term solution. To give you time to rearchitect, you can request a temporary increase in the logical partition key limit for your existing application: [file an Azure support ticket](create-support-request-quota-increase.md) and select quota type **Temporary increase in container's logical partition key size**. The increase is intended as a temporary mitigation, not a long-term solution. To remove the configuration, file a support ticket and select quota type **Restore container's logical partition key size to default (20 GB)**. You can file this ticket after deleting data to fit within the 20-GB logical partition limit, or after rearchitecting your application with a different partition key.
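As a rough illustration of the 20-GB logical partition limit added above, the following sketch flags partition key values (for example, tenants) that are approaching the limit. This is a hypothetical helper, not part of any Azure SDK; the 80% warning threshold is an assumption, and "GB" is treated as GiB here.

```python
# Hypothetical helper: flag logical partitions approaching the serverless
# 20 GB per-logical-partition storage limit. The 0.8 warning ratio is an
# illustrative assumption, not an Azure-defined threshold.

GIB = 1024 ** 3
LOGICAL_PARTITION_LIMIT_BYTES = 20 * GIB  # 20 GB per logical partition


def partitions_near_limit(partition_sizes_bytes: dict[str, int],
                          warn_ratio: float = 0.8) -> list[str]:
    """Return partition key values whose stored size meets or exceeds
    warn_ratio of the logical partition limit, sorted alphabetically."""
    threshold = LOGICAL_PARTITION_LIMIT_BYTES * warn_ratio
    return sorted(k for k, size in partition_sizes_bytes.items()
                  if size >= threshold)


sizes = {"tenant-a": 17 * GIB, "tenant-b": 2 * GIB, "tenant-c": 19 * GIB}
print(partitions_near_limit(sizes))  # ['tenant-a', 'tenant-c']
```

Partitions surfaced this way are candidates for the rearchitecting (or temporary quota increase) described in footnote ¹.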
**articles/cosmos-db/serverless.md** (27 additions, 1 deletion)
@@ -38,11 +38,36 @@ The Azure Cosmos DB serverless option best fits scenarios in which you expect *i
For more information, see [How to choose between provisioned throughput and serverless](throughput-serverless.md).
+ ### Best practices for multi-tenant applications
+
+ When designing multi-tenant applications on Azure Cosmos DB, two isolation models are recommended:
+
+ #### Partition key per tenant
+ In this model, each tenant is represented as a logical partition key within a container. This approach:
+ - Scales efficiently as the number of tenants increases
+ - Reduces per-tenant cost by sharing throughput and storage
+ - Works well for business-to-consumer (B2C) applications with many smaller tenants
+
+ For more information, see the [partition-key-per-tenant](https://aka.ms/CosmosMultitenancy#partition-key-per-tenant-model) model.
+
+ #### Database account per tenant
+ In this model, each tenant has a dedicated Azure Cosmos DB account. This approach:
+ - Provides strong isolation boundaries
+ - Allows per-tenant settings such as regional configuration, customer-managed keys, and point-in-time restore
+ - Works well for business-to-business (B2B) applications that require differentiated configurations
+
+ For more information, see the [database-account-per-tenant](https://aka.ms/CosmosMultitenancy#database-account-per-tenant-model) model.
+
+ > [!NOTE]
+ > Avoid designing multi-tenant applications with a container-per-tenant or database-per-tenant approach. These patterns can introduce [scalability challenges](concepts-limits.md#serverless-1) as your customer base grows. Instead, use one of the recommended models to ensure predictable performance and cost efficiency.
+
+ For a detailed walkthrough, see [Multitenancy in Azure Cosmos DB](https://aka.ms/CosmosMultitenancy).
+
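The partition-key-per-tenant model recommended in this change can be sketched as follows. This is an illustrative, in-memory stand-in for an azure-cosmos SDK container; the `tenantId` property name, the `upsert_order`/`query_orders` helpers, and the document shape are all hypothetical.

```python
# Sketch of the partition-key-per-tenant model: every document carries the
# tenant's identifier in its partition key property, so a single container
# serves many tenants. The "container" dict is a hypothetical stand-in for
# a real Cosmos DB container partitioned on /tenantId.

import uuid
from collections import defaultdict

container = defaultdict(list)  # partition key value -> list of documents


def upsert_order(tenant_id: str, order: dict) -> dict:
    """Write a document into the tenant's logical partition."""
    doc = {"id": str(uuid.uuid4()), "tenantId": tenant_id, **order}
    container[tenant_id].append(doc)  # routed by the /tenantId partition key
    return doc


def query_orders(tenant_id: str) -> list[dict]:
    """A single-partition query, scoped to one tenant's logical partition."""
    return container[tenant_id]


upsert_order("contoso", {"sku": "widget", "qty": 2})
upsert_order("fabrikam", {"sku": "gadget", "qty": 1})
print(len(query_orders("contoso")))  # 1
```

Because every query is scoped to one partition key value, tenants share throughput and storage while their data stays logically separated, which is what makes this model cost-efficient for many small tenants.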
## Use serverless resources
Azure Cosmos DB serverless is a new account type in Azure Cosmos DB. When you create an Azure Cosmos DB account, you choose between *provisioned throughput* and *serverless* options.
- To get started with using the serverless model, you must create a new serverless account. Migrating an existing account to or from the serverless model currently isn't supported.
+ To get started with using the serverless model, you must create a new serverless account.
Any container that's created in a serverless account is a serverless container. Serverless containers have the same capabilities as containers that are created in a provisioned throughput account type. You read, write, and query your data exactly the same way. But a serverless account and a serverless container also have other specific characteristics:
@@ -72,3 +97,4 @@ Azure Cosmos DB serverless extends high availability support with availability z
- [Azure Cosmos DB serverless account performance](serverless-performance.md)
- [How to choose between provisioned throughput and serverless](throughput-serverless.md)
- [Pricing model in Azure Cosmos DB](how-pricing-works.md)
+ - [Multitenancy in Azure Cosmos DB](https://aka.ms/CosmosMultitenancy)
**articles/postgresql/flexible-server/concepts-monitoring.md** (80 additions, 6 deletions)
@@ -4,7 +4,7 @@ description: Review the monitoring and metrics features in an Azure Database for
author: varun-dhawan
ms.author: varundhawan
ms.reviewer: maghan
- ms.date: 4/21/2025
+ ms.date: 12/11/2025
ms.service: azure-database-postgresql
ms.subservice: flexible-server
ms.topic: concept-article
@@ -242,11 +242,85 @@ There are several options to visualize Azure Monitor metrics.
|Overview page|Most Azure services have an **Overview** page in the Azure portal that includes a **Monitor** section with charts that show recent critical metrics. This information is intended for owners of individual services to quickly assess the performance of the resource. |This page is based on platform metrics that are collected automatically. No configuration is required. |
|[Metrics Explorer](/azure/azure-monitor/essentials/metrics-getting-started)|You can use Metrics Explorer to interactively work with metric data and create metric alerts. You need minimal training to use Metrics Explorer, but you must be familiar with the metrics you want to analyze. |- Once data collection is configured, no other configuration is required.<br>- Platform metrics for Azure resources are automatically available.<br>- Guest metrics for virtual machines are available after an Azure Monitor agent is deployed to the virtual machine.<br>- Application metrics are available after Application Insights is configured. |
|[Grafana](https://grafana.com/grafana/dashboards/21177-azure-azure-postgresql-flexible-server-monitoring/)| You can use Grafana for visualizing and alerting on metrics. All versions of Grafana include the [Azure Monitor datasource plug-in](/azure/azure-monitor/visualize/grafana-plugin) to visualize your Azure Monitor metrics and logs. | To become familiar with Grafana dashboards, some training is required. However, you can simplify the process by downloading the prebuilt [Azure PostgreSQL Monitoring](https://grafana.com/grafana/dashboards/21177-azure-azure-postgresql-flexible-server-monitoring/) dashboard, which allows for easy monitoring of all Azure Database for PostgreSQL flexible server instances within your organization. |
- ## Logs
- In addition to the metrics, you can use Azure Database for PostgreSQL to configure and access Azure Database for PostgreSQL standard logs. For more information, see [Logging concepts](concepts-logging.md).
+ ## Azure Database for PostgreSQL resource logs
+
+ Resource logs are generated and collected from operations that occur at the data plane level.
+
+ They aren't collected by default. Streaming them to a supported external location requires configuration and has associated costs for ingestion, retention, and subsequent querying.
+
+ These logs are organized into categories, and those categories are grouped into category groups.
+
+ The following logs can be streamed, by using **Diagnostic settings**, to an external destination like a Log Analytics workspace, a storage account, an event hub, or a partner solution:
+
+ **Description**: PostgreSQL server logs.<br>
+ **Running frequency**: 10 seconds.<br>
+ **Category name**: PostgreSQLLogs.<br>
+ **Display name**: PostgreSQL Server Logs.<br>
+ **Included in category group**: audit and allLogs.<br>
+ **Resource specific table**: PGSQLServerLogs.<br>
+ **Value of Category column when streamed to AzureDiagnostics**: PostgreSQLLogs.<br>
+ **Function to concatenate events from AzureDiagnostics and resource specific table**: _PGSQL_GetPostgresServerLogs.<br>
+ **Additional requirements**: None.<br>
+
+ **Description**: Snapshot of active PostgreSQL sessions showing details of current database connections and their activity, including session metadata, timing, and wait states.<br>
+ **Running frequency**: 5 minutes.<br>
+ **Category name**: PostgreSQLFlexSessions.<br>
+ **Display name**: PostgreSQL Sessions data.<br>
+ **Included in category group**: audit and allLogs.<br>
+ **Resource specific table**: PGSQLPgStatActivitySessions.<br>
+ **Value of Category column when streamed to AzureDiagnostics**: PostgreSQLFlexSessions.<br>
+ **Function to concatenate events from AzureDiagnostics and resource specific table**: _PGSQL_GetPgStatActivitySessions.<br>
+ **Additional requirements**: None.<br>
+
+ **Description**: Detailed query performance statistics from PostgreSQL query store.<br>
+ **Running frequency**: 5 minutes when `pg_qs.interval_length_minutes` is between 1 and 5; otherwise, the number of minutes specified in `pg_qs.interval_length_minutes`.<br>
+ **Display name**: PostgreSQL Query Store Runtime.<br>
+ **Included in category group**: audit and allLogs.<br>
+ **Resource specific table**: PGSQLQueryStoreRuntime.<br>
+ **Value of Category column when streamed to AzureDiagnostics**: PostgreSQLFlexQueryStoreRuntime.<br>
+ **Function to concatenate events from AzureDiagnostics and resource specific table**: _PGSQL_GetQueryStoreRuntime.<br>
+ **Additional requirements**: `pg_qs.query_capture_mode` must be set to either `top` or `all`.<br>
+
+ **Description**: What queries were waiting on which wait events, and for how long.<br>
+ **Running frequency**: 5 minutes when `pg_qs.interval_length_minutes` is between 1 and 5; otherwise, the number of minutes specified in `pg_qs.interval_length_minutes`.<br>
+ **Display name**: PostgreSQL Query Store Wait Statistics.<br>
+ **Included in category group**: audit and allLogs.<br>
+ **Resource specific table**: PGSQLQueryStoreWaits.<br>
+ **Value of Category column when streamed to AzureDiagnostics**: PostgreSQLFlexQueryStoreWaitStats.<br>
+ **Function to concatenate events from AzureDiagnostics and resource specific table**: _PGSQL_GetQueryStoreWaits.<br>
+ **Additional requirements**: `pg_qs.query_capture_mode` must be set to either `top` or `all`, and `pgms_wait_sampling.query_capture_mode` must be set to `on`.<br>
+
+ **Description**: Schema-level aggregated statistics about all tables in the database, summarizing table activity and maintenance metrics.<br>
+ **Running frequency**: 30 minutes.<br>
+ **Category name**: PostgreSQLFlexTableStats.<br>
+ **Display name**: PostgreSQL Autovacuum and schema statistics.<br>
+ **Included in category group**: audit and allLogs.<br>
+ **Resource specific table**: PGSQLAutovacuumStats.<br>
+ **Value of Category column when streamed to AzureDiagnostics**: PostgreSQLFlexTableStats.<br>
+ **Function to concatenate events from AzureDiagnostics and resource specific table**: _PGSQL_GetAutovacuumStats.<br>
+ **Additional requirements**: None.<br>
+
+ **Description**: Database-level view of transaction ID (XID) and multixact ID age and wraparound risk, along with thresholds for autovacuum and emergency vacuum actions.<br>