diff --git a/src/content/docs/d1/platform/limits.mdx b/src/content/docs/d1/platform/limits.mdx
index 7170f777f33897b..942695488c0c3b0 100644
--- a/src/content/docs/d1/platform/limits.mdx
+++ b/src/content/docs/d1/platform/limits.mdx
@@ -3,18 +3,17 @@ pcx_content_type: concept
title: Limits
sidebar:
order: 2
-
---
-import { Render, Details } from "~/components";
+import { Render } from "~/components";
| Feature | Limit |
| ------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------- |
-| Databases | 50,000 (Workers Paid)[^1] / 10 (Free) |
+| Databases per account | 50,000 (Workers Paid) [^1] / 10 (Free) |
| Maximum database size | 10 GB (Workers Paid) / 500 MB (Free) |
-| Maximum storage per account | 1 TB (Workers Paid)[^2] / 5 GB (Free) |
+| Maximum storage per account | 1 TB (Workers Paid) [^2] / 5 GB (Free) |
| [Time Travel](/d1/reference/time-travel/) duration (point-in-time recovery) | 30 days (Workers Paid) / 7 days (Free) |
-| Maximum Time Travel restore operations | 10 restores per 10 minute (per database) |
+| Maximum Time Travel restore operations | 10 restores per 10 minutes (per database) |
| Queries per Worker invocation (read [subrequest limits](/workers/platform/limits/#how-many-subrequests-can-i-make)) | 1000 (Workers Paid) / 50 (Free) |
| Maximum number of columns per table | 100 |
| Maximum number of rows per table | Unlimited (excluding per-database storage limits) |
@@ -32,22 +31,14 @@ Limits for individual queries (listed above) apply to each individual statement
:::
[^1]: The maximum number of databases per account can be increased by request on Workers Paid and Enterprise plans, with support for millions to tens-of-millions of databases (or more) per account. Refer to the guidance on limit increases on this page to request an increase.
-[^2]: The maximum storage per account can be increased by request on Workers Paid and Enterprise plans. Refer to the guidance on limit increases on this page to request an increase.
-[^3]: A single Worker script can have up to 1 MB of script metadata. A binding is defined as a binding to a resource, such as a D1 database, KV namespace, [environmental variable](/workers/configuration/environment-variables/), or secret. Each resource binding is approximately 150-bytes, however environmental variables and secrets are controlled by the size of the value you provide. Excluding environmental variables, you can bind up to \~5,000 D1 databases to a single Worker script.
-[^4]: Requests to Cloudflare API must resolve in 30 seconds. Therefore, this duration limit also applies to the entire batch call.
-[^5]: The imported file is uploaded to R2. Refer to [R2 upload limit](/r2/platform/limits).
-
-1: The maximum number of databases per account can be increased by request on Workers Paid and Enterprise plans, with support for millions to tens-of-millions of databases (or more) per account. Refer to the guidance on limit increases on this page to request an increase.
-
-2: The maximum storage per account can be increased by request on Workers Paid and Enterprise plans. Refer to the guidance on limit increases on this page to request an increase.
+[^2]: The maximum storage per account can be increased by request on Workers Paid and Enterprise plans. Refer to the guidance on limit increases on this page to request an increase.
-3: A single Worker script can have up to 1 MB of script metadata. A binding is defined as a binding to a resource, such as a D1 database, KV namespace, [environmental variable](/workers/configuration/environment-variables/), or secret. Each resource binding is approximately 150 bytes, however environmental variables and secrets are controlled by the size of the value you provide. Excluding environmental variables, you can bind up to \~5,000 D1 databases to a single Worker script.
+[^3]: A single Worker script can have up to 1 MB of script metadata. A binding is defined as a binding to a resource, such as a D1 database, KV namespace, [environment variable](/workers/configuration/environment-variables/), or secret. Each resource binding is approximately 150 bytes; however, environment variables and secrets are controlled by the size of the value you provide. Excluding environment variables, you can bind up to \~5,000 D1 databases to a single Worker script.
-4: Requests to Cloudflare API must resolve in 30 seconds. Therefore, this duration limit also applies to the entire batch call.
+[^4]: Requests to the Cloudflare API must resolve within 30 seconds. Therefore, this duration limit also applies to the entire batch call.
-5: The imported file is uploaded to R2. Refer to [R2 upload limit](/r2/platform/limits).
-
+[^5]: The imported file is uploaded to R2. Refer to [R2 upload limit](/r2/platform/limits).
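+
+As an illustrative sketch (the binding name `DB` is an assumption, not a required name), a Worker uses one of its bound D1 databases like this:
+
+```js
+// Each D1 binding adds roughly 150 bytes of script metadata; queries made
+// through it count toward the per-invocation limit in the table above.
+export default {
+	async fetch(request, env) {
+		const { results } = await env.DB.prepare("SELECT 1 AS ok").all();
+		return Response.json(results);
+	},
+};
+```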
Cloudflare also offers other storage solutions such as [Workers KV](/kv/api/), [Durable Objects](/durable-objects/), and [R2](/r2/get-started/). Each product has different advantages and limits. Refer to [Choose a data or storage product](/workers/platform/storage-options/) to review which storage option is right for your use case.
@@ -57,4 +48,4 @@ Cloudflare also offers other storage solutions such as [Workers KV](/kv/api/), [
Frequently asked questions related to D1 limits:
-
+
diff --git a/src/content/docs/durable-objects/platform/limits.mdx b/src/content/docs/durable-objects/platform/limits.mdx
index 1e1e0399eaa2a72..e8e997e617669af 100644
--- a/src/content/docs/durable-objects/platform/limits.mdx
+++ b/src/content/docs/durable-objects/platform/limits.mdx
@@ -5,23 +5,23 @@ sidebar:
order: 2
---
-import { Render, GlossaryTooltip, Details, WranglerConfig } from "~/components";
+import { Render, GlossaryTooltip, WranglerConfig } from "~/components";
Durable Objects are a special kind of Worker, so [Workers Limits](/workers/platform/limits/) apply according to your Workers plan. In addition, Durable Objects have specific limits as listed in this page.
## SQLite-backed Durable Objects general limits
-| Feature | Limit |
-| ------------------------------ | -------------------------------------------------------------------------------------------------------------- |
-| Number of Objects | Unlimited (within an account or of a given class) |
-| Maximum Durable Object classes | 500 (Workers Paid) / 100 (Free) [^1] |
-| Storage per account | Unlimited (Workers Paid) / 5GB (Free) [^2] |
-| Storage per class | Unlimited [^3] |
-| Storage per Durable Object | 10 GB [^3] |
-| Key size | Key and value combined cannot exceed 2 MB |
-| Value size | Key and value combined cannot exceed 2 MB |
-| WebSocket message size | 32 MiB (only for received messages) |
-| CPU per request | 30 seconds (default) / configurable to 5 minutes of [active CPU time](/workers/platform/limits/#cpu-time) [^4] |
+| Feature | Limit |
+| -------------------------------------------- | -------------------------------------------------------------------------------------------------------------- |
+| Number of Objects | Unlimited (within an account or of a given class) |
+| Maximum Durable Object classes (per account) | 500 (Workers Paid) / 100 (Free) [^1] |
+| Storage per account                          | Unlimited (Workers Paid) / 5 GB (Free) [^2]                                                                    |
+| Storage per class | Unlimited [^3] |
+| Storage per Durable Object | 10 GB [^3] |
+| Key size | Key and value combined cannot exceed 2 MB |
+| Value size | Key and value combined cannot exceed 2 MB |
+| WebSocket message size | 32 MiB (only for received messages) |
+| CPU per request | 30 seconds (default) / configurable to 5 minutes of [active CPU time](/workers/platform/limits/#cpu-time) [^4] |
[^1]: Identical to the Workers [script limit](/workers/platform/limits/).
@@ -29,18 +29,7 @@ Durable Objects are a special kind of Worker, so [Workers Limits](/workers/platf
[^3]: Accounts on the Workers Free plan are limited to 5 GB total Durable Objects storage.
-[^4]: Each incoming HTTP request or WebSocket _message_ resets the remaining available CPU time to 30 seconds. This allows the Durable Object to consume up to 30 seconds of compute after each incoming network request, with each new network request resetting the timer. If you consume more than 30 seconds of compute between incoming network requests, there is a heightened chance that the individual Durable Object is evicted and reset. CPU time per request invocation [can be increased](/durable-objects/platform/limits/#increasing-durable-object-cpu-limits).
-
-
-1. Identical to the Workers [script limit](/workers/platform/limits/).
-
-2. Durable Objects both bills and measures storage based on a gigabyte
-(1 GB = 1,000,000,000 bytes) and not a gibibyte (GiB).
-
-3. Accounts on the Workers Free plan are limited to 5GB total Durable Objects storage.
-
-4. Each incoming HTTP request or WebSocket _message_ resets the remaining available CPU time to 30 seconds. This allows the Durable Object to consume up to 30 seconds of compute after each incoming network request, with each new network request resetting the timer. If you consume more than 30 seconds of compute between incoming network requests, there is a heightened chance that the individual Durable Object is evicted and reset. CPU time per request invocation [can be increased](/durable-objects/platform/limits/#increasing-durable-object-cpu-limits).
-
-
+[^4]: Each incoming HTTP request or WebSocket _message_ resets the remaining available CPU time to 30 seconds. This allows the Durable Object to consume up to 30 seconds of compute after each incoming network request, with each new network request resetting the timer. If you consume more than 30 seconds of compute between incoming network requests, there is a heightened chance that the individual Durable Object is evicted and reset. CPU time per request invocation [can be increased](/durable-objects/platform/limits/#can-i-increase-durable-objects-cpu-limit).
### SQL storage limits
@@ -60,32 +49,23 @@ For Durable Object classes with [SQLite storage](/durable-objects/api/sqlite-sto
-| Feature | Limit for class with key-value storage backend |
-| ------------------------------ | --------------------------------------------------- |
-| Number of Objects | Unlimited (within an account or of a given class) |
-| Maximum Durable Object classes | 500 (Workers Paid) / 100 (Free) [^5] |
-| Storage per account | 50 GB (can be raised by contacting Cloudflare) [^6] |
-| Storage per class | Unlimited |
-| Storage per Durable Object | Unlimited |
-| Key size | 2 KiB (2048 bytes) |
-| Value size | 128 KiB (131072 bytes) |
-| WebSocket message size | 32 MiB (only for received messages) |
-| CPU per request | 30s (including WebSocket messages) [^7] |
+| Feature | Limit for class with key-value storage backend |
+| -------------------------------------------- | --------------------------------------------------- |
+| Number of Objects | Unlimited (within an account or of a given class) |
+| Maximum Durable Object classes (per account) | 500 (Workers Paid) / 100 (Free) [^5] |
+| Storage per account | 50 GB (can be raised by contacting Cloudflare) [^6] |
+| Storage per class | Unlimited |
+| Storage per Durable Object | Unlimited |
+| Key size | 2 KiB (2048 bytes) |
+| Value size | 128 KiB (131072 bytes) |
+| WebSocket message size | 32 MiB (only for received messages) |
+| CPU per request                              | 30 seconds (including WebSocket messages) [^7]      |
[^5]: Identical to the Workers [script limit](/workers/platform/limits/).
[^6]: Durable Objects both bills and measures storage based on a gigabyte
(1 GB = 1,000,000,000 bytes) and not a gibibyte (GiB).
-[^7]: Each incoming HTTP request or WebSocket _message_ resets the remaining available CPU time to 30 seconds. This allows the Durable Object to consume up to 30 seconds of compute after each incoming network request, with each new network request resetting the timer. If you consume more than 30 seconds of compute between incoming network requests, there is a heightened chance that the individual Durable Object is evicted and reset. CPU time per request invocation [can be increased](/durable-objects/platform/limits/#increasing-durable-object-cpu-limits).
-
-
-5. Identical to the Workers [script limit](/workers/platform/limits/).
-
-6. Durable Objects both bills and measures storage based on a gigabyte
-(1 GB = 1,000,000,000 bytes) and not a gibibyte (GiB).
-
-7. Each incoming HTTP request or WebSocket _message_ resets the remaining available CPU time to 30 seconds. This allows the Durable Object to consume up to 30 seconds of compute after each incoming network request, with each new network request resetting the timer. If you consume more than 30 seconds of compute between incoming network requests, there is a heightened chance that the individual Durable Object is evicted and reset. CPU time per request invocation [can be increased](/durable-objects/platform/limits/#increasing-durable-object-cpu-limits).
-
-
+[^7]: Each incoming HTTP request or WebSocket _message_ resets the remaining available CPU time to 30 seconds. This allows the Durable Object to consume up to 30 seconds of compute after each incoming network request, with each new network request resetting the timer. If you consume more than 30 seconds of compute between incoming network requests, there is a heightened chance that the individual Durable Object is evicted and reset. CPU time per request invocation [can be increased](/durable-objects/platform/limits/#can-i-increase-durable-objects-cpu-limit).
diff --git a/src/content/docs/kv/platform/limits.mdx b/src/content/docs/kv/platform/limits.mdx
index 5d9145d6ba8fca2..c1a6d3e9bc1152d 100644
--- a/src/content/docs/kv/platform/limits.mdx
+++ b/src/content/docs/kv/platform/limits.mdx
@@ -3,25 +3,24 @@ pcx_content_type: concept
title: Limits
sidebar:
order: 2
-
---
-import { Render } from "~/components"
-
-| Feature | Free | Paid |
-| ------------------------------------------------------------------------------ | --------------------- | ------------ |
-| Reads | 100,000 reads per day | Unlimited |
-| Writes to different keys | 1,000 writes per day | Unlimited |
-| Writes to same key | 1 per second | 1 per second |
-| Operations/Worker invocation [^1] | 1000 | 1000 |
-| Namespaces | 1000 | 1000 |
-| Storage/account | 1 GB | Unlimited |
-| Storage/namespace | 1 GB | Unlimited |
-| Keys/namespace | Unlimited | Unlimited |
-| Key size | 512 bytes | 512 bytes |
-| Key metadata | 1024 bytes | 1024 bytes |
-| Value size | 25 MiB | 25 MiB |
-| Minimum [`cacheTtl`](/kv/api/read-key-value-pairs/#cachettl-parameter) [^2] | 30 seconds | 30 seconds |
+import { Render } from "~/components";
+
+| Feature | Free | Paid |
+| --------------------------------------------------------------------------- | --------------------- | ------------ |
+| Reads | 100,000 reads per day | Unlimited |
+| Writes to different keys | 1,000 writes per day | Unlimited |
+| Writes to same key | 1 per second | 1 per second |
+| Operations/Worker invocation [^1] | 1000 | 1000 |
+| Namespaces per account | 1,000 | 1,000 |
+| Storage/account | 1 GB | Unlimited |
+| Storage/namespace | 1 GB | Unlimited |
+| Keys/namespace | Unlimited | Unlimited |
+| Key size | 512 bytes | 512 bytes |
+| Key metadata | 1024 bytes | 1024 bytes |
+| Value size | 25 MiB | 25 MiB |
+| Minimum [`cacheTtl`](/kv/api/read-key-value-pairs/#cachettl-parameter) [^2] | 30 seconds | 30 seconds |
[^1]: Within a single invocation, a Worker can make up to 1,000 operations to external services (for example, 500 Workers KV reads and 500 R2 reads). A bulk request to Workers KV counts for 1 request to an external service.
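+
+As a sketch (the namespace binding name `KV` is illustrative), the minimum `cacheTtl` from the table above applies to reads:
+
+```js
+// cacheTtl must be at least the documented 30-second minimum.
+const value = await env.KV.get("my-key", { cacheTtl: 300 });
+```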
diff --git a/src/content/docs/r2/platform/limits.mdx b/src/content/docs/r2/platform/limits.mdx
index a8ebf0341d1c78f..6bade937a4fd574 100644
--- a/src/content/docs/r2/platform/limits.mdx
+++ b/src/content/docs/r2/platform/limits.mdx
@@ -5,31 +5,28 @@ pcx_content_type: concept
import { Render } from "~/components";
-| Feature | Limit |
-| ------------------------------------------------------------------- | ---------------------------- |
-| Data storage per bucket | Unlimited |
-| Maximum number of buckets per account | 1,000,000 |
-| Maximum rate of bucket management operations per bucket1 | 50 per second |
-| Number of custom domains per bucket | 50 |
-| Object key length | 1,024 bytes |
-| Object metadata size | 8,192 bytes |
-| Object size | 5 TiB per object2 |
-| Maximum upload size4 | 5 GiB (single-part) / 4.995TiB (multi-part) 3 |
-| Maximum upload parts | 10,000 |
-| Maximum concurrent writes to the same object name (key) | 1 per second 5 |
-
-1 Bucket management operations include creating, deleting, listing,
-and configuring buckets. This limit does _not_ apply to reading or writing objects to a bucket.
-
-2 The object size limit is 5 GiB less than 5 TiB, so 4.995
-TiB.
-
-3 The max upload size is 5 MiB less than 5 GiB, so 4.995 GiB.
-
-4 Max upload size applies to uploading a file via one request,
-uploading a part of a multipart upload, or copying into a part of a multipart
-upload. If you have a Worker, its inbound request size is constrained by
-[Workers request limits](/workers/platform/limits#request-limits). The max
-upload size limit does not apply to subrequests.
-
-5 Concurrent writes to the same object name (key) at a higher rate will cause you to see HTTP 429 (rate limited) responses, as you would with other object storage systems.
-
+| Feature | Limit |
+| ------------------------------------------------------------ | ------------------------------------------------- |
+| Data storage per bucket | Unlimited |
+| Maximum number of buckets per account | 1,000,000 |
+| Maximum rate of bucket management operations per bucket [^1] | 50 per second |
+| Number of custom domains per bucket | 50 |
+| Object key length | 1,024 bytes |
+| Object metadata size | 8,192 bytes |
+| Object size | 5 TiB per object [^2] |
+| Maximum upload size [^3] | 5 GiB (single-part) / 4.995 TiB (multi-part) [^4] |
+| Maximum upload parts | 10,000 |
+| Maximum concurrent writes to the same object name (key) | 1 per second [^5] |
+
+[^1]: Bucket management operations include creating, deleting, listing, and configuring buckets. This limit does _not_ apply to reading or writing objects to a bucket.
+
+[^2]: The object size limit is 5 GiB less than 5 TiB, so 4.995 TiB.
+
+[^3]: Max upload size applies to uploading a file via one request, uploading a part of a multipart upload, or copying into a part of a multipart upload. If you have a Worker, its inbound request size is constrained by [Workers request limits](/workers/platform/limits#request-limits). The max upload size limit does not apply to subrequests.
+
+[^4]: The max upload size is 5 MiB less than 5 GiB, so 4.995 GiB.
+
+[^5]: Concurrent writes to the same object name (key) at a higher rate return HTTP 429 (rate limited) responses.
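+
+As a sketch (the bucket binding name `BUCKET` and the chunk variable are illustrative), a multipart upload within the limits above looks like:
+
+```js
+// At most 10,000 parts; each part upload is bounded by the
+// single-request maximum upload size.
+const upload = await env.BUCKET.createMultipartUpload("large-object");
+const part1 = await upload.uploadPart(1, firstChunk);
+await upload.complete([part1]);
+```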
Limits specified in MiB (mebibyte), GiB (gibibyte), or TiB (tebibyte) are storage units of measurement based on base-2. 1 GiB (gibibyte) is equivalent to 2<sup>30</sup> bytes (or 1024<sup>3</sup> bytes). This is distinct from 1 GB (gigabyte), which is 10<sup>9</sup> bytes (or 1000<sup>3</sup> bytes).
@@ -39,7 +36,7 @@ Limits specified in MiB (mebibyte), GiB (gibibyte), or TiB (tebibyte) are storag
Managed public bucket access through an `r2.dev` subdomain is not intended for production usage and has a variable rate limit applied to it. The `r2.dev` endpoint for your bucket is designed to enable testing.
-* If you exceed the rate limit (hundreds of requests/second), requests to your `r2.dev` endpoint will be temporarily throttled and you will receive a `429 Too Many Requests` response.
-* Bandwidth (throughput) may also be throttled when using the `r2.dev` endpoint.
+- If you exceed the rate limit (hundreds of requests/second), requests to your `r2.dev` endpoint will be temporarily throttled and you will receive a `429 Too Many Requests` response.
+- Bandwidth (throughput) may also be throttled when using the `r2.dev` endpoint.
For production use cases, connect a [custom domain](/r2/buckets/public-buckets/#custom-domains) to your bucket. Custom domains allow you to serve content from a domain you control (for example, `assets.example.com`), configure fine-grained caching, set up redirect and rewrite rules, mutate content via [Cloudflare Workers](/workers/), and get detailed URL-level analytics for content served from your R2 bucket.
diff --git a/src/content/docs/vectorize/platform/limits.mdx b/src/content/docs/vectorize/platform/limits.mdx
index 55ece811a04d0de..59fa6e443b45597 100644
--- a/src/content/docs/vectorize/platform/limits.mdx
+++ b/src/content/docs/vectorize/platform/limits.mdx
@@ -5,7 +5,9 @@ sidebar:
order: 2
---
-The following limits apply to accounts, indexes and vectors (as specified):
+import { Details } from "~/components";
+
+The following limits apply to accounts, indexes, and vectors:
:::note[Need a higher limit?]
To request an adjustment to a limit, complete the [Limit Increase Request Form](https://forms.gle/nyamy2SM9zwWTXKE6). If the limit can be increased, Cloudflare will contact you with next steps.
@@ -30,19 +32,19 @@ To request an adjustment to a limit, complete the [Limit Increase Request Form](
| Maximum metadata indexes per Vectorize index | 10 |
| Maximum indexed data per metadata index per vector | 64 bytes |
-## Limits V1 (deprecated)
-
-The following limits apply to accounts, indexes and vectors (as specified):
+
+<Details header="Limits V1 (deprecated)">
+
-| Feature                               | Current Limit                    |
+| Feature                               | Limit                            |
| ------------------------------------- | -------------------------------- |
| Indexes per account                   | 100 indexes                      |
| Maximum dimensions per vector         | 1536 dimensions                  |
| Maximum vector ID length              | 64 bytes                         |
-| Metadata per vector                   | 10KiB                            |
+| Metadata per vector                   | 10 KiB                           |
| Maximum returned results (`topK`)     | 20                               |
| Maximum upsert batch size (per batch) | 1000 (Workers) / 5000 (HTTP API) |
| Maximum index name length             | 63 bytes                         |
| Maximum vectors per index             | 200,000                          |
| Maximum namespaces per index          | 1000 namespaces                  |
| Maximum namespace name length         | 63 bytes                         |
+
+</Details>
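+
+As a sketch (the index binding name `VECTORIZE` and `queryVector` are illustrative), the `topK` option of a query is capped by the maximum returned results limit:
+
+```js
+// Requesting more results than the documented topK maximum is rejected.
+const { matches } = await env.VECTORIZE.query(queryVector, { topK: 20 });
+```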