diff --git a/skills/application-development/.gitkeep b/skills/application-development/.gitkeep deleted file mode 100644 index e69de29..0000000 diff --git a/skills/application-development/benchmarking-transaction-patterns/SKILL.md b/skills/application-development/benchmarking-transaction-patterns/SKILL.md new file mode 100644 index 0000000..bc1a6f5 --- /dev/null +++ b/skills/application-development/benchmarking-transaction-patterns/SKILL.md @@ -0,0 +1,229 @@ +--- +name: benchmarking-transaction-patterns +description: Guides benchmarking and comparing explicit multi-statement transactions versus single-statement CTE transactions in CockroachDB, with fair test methodology, contention analysis, and performance interpretation. Use when comparing transaction formulations, benchmarking CockroachDB workloads under contention, investigating retry pressure, or deciding whether to rewrite multi-step application flows into single SQL statements. +compatibility: "CockroachDB >= 22.1. Requires SQL access and a test cluster for benchmark execution. Do not run benchmarks against production workloads." +metadata: + author: cockroachdb + version: "1.0" +--- + +# Benchmarking Transaction Patterns + +Guides users through benchmarking, explaining, and comparing two formulations of the same transactional business workflow in CockroachDB: explicit multi-statement transactions versus single-statement CTE transactions. Focuses on performance under contention, fair test methodology, and result interpretation. + +**Complement to design skills:** For general transaction design principles, see [designing-application-transactions](../designing-application-transactions/SKILL.md). For SQL syntax and query patterns, see [cockroachdb-sql](../../query-and-schema-design/cockroachdb-sql/SKILL.md). 
+ +## When to Use This Skill + +- Comparing explicit multi-statement transactions versus CTE-based single-statement transactions +- Benchmarking CockroachDB workloads under high concurrency or hot-key contention +- Investigating retry pressure, p95/p99 latency, or throughput differences between transaction formulations +- Deciding whether to rewrite a multi-step application flow into a single SQL statement +- Setting up a fair side-by-side benchmark with proper reset discipline +- Interpreting benchmark results (throughput, retries, tail latency, failures) +- Explaining why SQL Activity still shows waiting even with CTE transactions + +## Prerequisites + +- CockroachDB test cluster (do not benchmark on production) +- SQL client or JDBC driver for benchmark execution +- Understanding of CockroachDB SERIALIZABLE isolation and retry behavior +- Familiarity with basic concurrency testing concepts + +## Core Concept + +When two implementations perform the same business behavior, the transaction formulation itself can be a primary performance lever under contention. + +### Explicit Transaction Model + +The application orchestrates the workflow as separate SQL statements inside a transaction: read state, apply logic, write changes, commit. + +```sql +BEGIN; + +SELECT balance FROM accounts WHERE id = $1; + +-- Application decides whether transfer is allowed + +UPDATE accounts SET balance = balance - $2 WHERE id = $1; +UPDATE accounts SET balance = balance + $2 WHERE id = $3; + +INSERT INTO transfers (from_acct, to_acct, amount, created_at) +VALUES ($1, $3, $2, now()); + +COMMIT; +``` + +This keeps the transaction open across multiple statements and often includes application-side decision logic between steps. + +### CTE Transaction Model + +The same read/decision/write logic is expressed as a single SQL statement, so the database evaluates and applies the business operation atomically without intermediate client orchestration. 
+ +```sql +WITH debit AS ( + UPDATE accounts + SET balance = balance - $2 + WHERE id = $1 + AND balance >= $2 + RETURNING id +), credit AS ( + UPDATE accounts + SET balance = balance + $2 + WHERE id = $3 + AND EXISTS (SELECT 1 FROM debit) + RETURNING id +), ins AS ( + INSERT INTO transfers (from_acct, to_acct, amount, created_at) + SELECT $1, $3, $2, now() + WHERE EXISTS (SELECT 1 FROM debit) + AND EXISTS (SELECT 1 FROM credit) + RETURNING id +) +SELECT id FROM ins; +``` + +### Why CTE Tends to Win Under Contention + +The explicit version keeps the transaction open across multiple statements, increasing the time window for write conflicts, timestamp pushes, and retries. Under high concurrency, each retry repeats the read and write work and continues contending for the same hot data. + +The CTE version collapses the same business logic into a single atomic statement, reducing transaction duration and sharply narrowing the contention window. + +## Steps + +### 1. Prepare the Benchmark Environment + +Set up a dedicated test database and schema. Do not mix benchmark workloads with other traffic. + +```sql +CREATE DATABASE IF NOT EXISTS bankbench; +USE bankbench; + +CREATE TABLE accounts ( + id INT PRIMARY KEY, + balance DECIMAL(18,2) NOT NULL DEFAULT 0 +); + +CREATE TABLE transfers ( + id UUID PRIMARY KEY DEFAULT gen_random_uuid(), + from_acct INT NOT NULL, + to_acct INT NOT NULL, + amount DECIMAL(18,2) NOT NULL, + created_at TIMESTAMPTZ NOT NULL DEFAULT now() +); +``` + +### 2. Seed the Test Data + +Use multi-row UPSERT for efficient seeding. Single-row inserts distort setup cost. + +```sql +INSERT INTO accounts (id, balance) +SELECT generate_series(1, 10000), 1000.00 +ON CONFLICT (id) DO UPDATE SET balance = 1000.00; +``` + +### 3. Run the Explicit Transaction Benchmark + +Execute with realistic concurrency (e.g., 64-128 workers) and a fixed duration or iteration count. Record throughput, retries, p50/p95/p99 latency, max latency, and failures. + +### 4. 
Reset Between Runs for Fair Comparison + +For a fair benchmark, reset account balances between explicit and CTE runs so table size, index size, and account state remain comparable. + +```sql +UPDATE accounts SET balance = 1000.00; +``` + +### 5. Run the CTE Transaction Benchmark + +Execute with the same concurrency, duration, and parameters as the explicit run. + +### 6. Compare Results + +Always compare these metrics side by side: + +| Metric | What to Look For | +|--------------------|------------------------------------------------------------------| +| Throughput (txn/s) | Higher is better; CTE typically sustains better under contention | +| Total retries | CTE often reduces to near-zero | +| p50 latency | Median transaction time | +| p95 latency | Tail latency under moderate contention | +| p99 latency | Worst-case tail; explicit model often shows spikes | +| Max latency | Outlier behavior | +| Failures | Non-retryable errors | + +## Benchmark Reference Results + +In a reported high-contention run comparing the two models: + +| Metric | Explicit | CTE | Change | +|-----------------|-------------|---------------|--------| +| Throughput | 591.1 txn/s | 1,035.1 txn/s | +75.1% | +| Wall time | 216.5s | 123.7s | -42.9% | +| Average latency | 202.2 ms | 111.3 ms | -45.0% | +| Total retries | 2,270,977 | 0 | -100% | + +Extended runs preserved the same directional result at higher total volume, with the explicit model continuing to accumulate retries and occasional failures while the CTE model stayed at zero retries and zero failures. 
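A per-run summary like the table above can be reduced from raw per-transaction latency samples. A minimal sketch in plain Python (the sample data is illustrative, not from the reported run; nearest-rank percentiles are one common convention, not the only valid one):

```python
import math

def summarize(latencies_ms, wall_time_s, retries=0, failures=0):
    """Reduce per-transaction latency samples into the side-by-side
    comparison metrics: throughput, p50/p95/p99, max latency, retries,
    and failures."""
    xs = sorted(latencies_ms)

    def pct(p):
        # Nearest-rank percentile: smallest sample covering p% of the run.
        return xs[min(len(xs) - 1, math.ceil(p / 100 * len(xs)) - 1)]

    return {
        "throughput_txn_s": round(len(xs) / wall_time_s, 1),
        "p50_ms": pct(50),
        "p95_ms": pct(95),
        "p99_ms": pct(99),
        "max_ms": xs[-1],
        "retries": retries,
        "failures": failures,
    }

# Illustrative only: 1000 fake latency samples over a 10-second window.
stats = summarize([50 + (i % 100) for i in range(1000)], wall_time_s=10.0)
print(stats["throughput_txn_s"])  # 100.0
```

Compute one such summary per run (explicit and CTE) from identical workload parameters, then compare the two dictionaries field by field.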
+ +### Impact Summary + +| Dimension | Explicit Multi-Statement | Single-Statement CTE | +|------------------------------|-------------------------------------|-------------------------| +| Round trips | Multiple client/server interactions | Single request | +| Transaction lifetime | Longer | Shorter | +| Client retry complexity | Higher | Lower | +| Atomic invariant enforcement | Spread across statements/app logic | Contained in SQL | +| Expected throughput | Lower under contention | Higher under contention | +| Client-visible retries | More likely | Often reduced | + +## Decision Guidance + +### Prefer the Explicit Pattern When + +- The business workflow truly cannot be expressed cleanly in one SQL statement +- Readability or staged business logic matters more than peak throughput +- The contention level is low enough that retry amplification is not the dominant cost + +### Prefer the CTE Pattern When + +- The workflow is contention-heavy +- The operation is naturally atomic +- The application currently performs read-decide-write across multiple statements +- The main goal is higher throughput, lower retries, and more stable p95/p99 latency + +## Fair Benchmark Rules + +1. **Reset between runs** for fair comparison so balances, table size, and index size stay consistent +2. **Treat no-reset runs as a demo**, not an apples-to-apples benchmark +3. **Use `--batch-size=1`** when you want one business unit of work at a time for clean comparison +4. **Compare the right metrics** — always include throughput, retries, p50, p95, p99, max latency, and failures +5. **Use multi-row UPSERT for seeding** — single-row seeding distorts setup cost + +## Common Misconceptions + +**"CTE always wins in every workload"** — No. The claim is narrower: when the same business workflow can be expressed as a single atomic statement and the workload is contention-sensitive, collapsing the transaction shape can materially improve performance and stability. 
+ +**"SQL Activity showing waiting means CTE failed"** — Single-statement CTE execution does not eliminate contention. Statements can still wait on row conflicts, write intents, latches, or scheduling. The right comparison is overall throughput, tail latency, and retry profile. + +**"Single-statement means no contention"** — A CTE can still wait under contention. The benefit is a narrower contention window, not the elimination of contention. + +## Safety Considerations + +- Run benchmarks on dedicated test clusters, not production +- Reset data between runs for fair comparison +- Monitor cluster health during benchmark execution +- Use realistic but not destructive concurrency levels +- Validate that benchmark results transfer to your specific workload before making production changes + +## References + +- [CockroachDB Transactions Documentation](https://www.cockroachlabs.com/docs/stable/transactions) +- [Advanced Client-Side Transaction Retries](https://www.cockroachlabs.com/docs/stable/advanced-client-side-transaction-retries) +- [Performance Best Practices](https://www.cockroachlabs.com/docs/stable/performance-best-practices-overview) +- [Comparing Multi-Statement vs Single-Statement Transactions](https://andrewdeally.medium.com/comparing-multi-statement-vs-single-statement-transactions-for-account-transfers-in-sql-09190b116e64) +- [Set-Based Operations with CockroachDB](https://andrewdeally.medium.com/set-based-operations-with-cockroachdb-c9f371992dc7) +- [Deep Dive into Transaction Retry Failures](https://www.mindfulchase.com/explore/troubleshooting-tips/databases/deep-dive-into-transaction-retry-failures-in-cockroachdb-root-causes-and-fixes.html) +- [Troubleshooting CockroachDB Performance](https://www.mindfulchase.com/explore/troubleshooting-tips/databases/troubleshooting-cockroachdb-performance-in-enterprise-deployments.html) +- [CockroachDB Transaction Demo](https://github.com/cockroachdb/cockroach-transaction-demo) +- [CockroachDB Best Practices & 
Anti-Patterns Demo](https://github.com/viragtripathi/cockroachdb-best-practices-demo) -- Demos 1-2 show retry patterns and contention scaling under concurrency diff --git a/skills/application-development/designing-application-transactions/SKILL.md b/skills/application-development/designing-application-transactions/SKILL.md new file mode 100644 index 0000000..3e9e11f --- /dev/null +++ b/skills/application-development/designing-application-transactions/SKILL.md @@ -0,0 +1,588 @@ +--- +name: designing-application-transactions +description: Guides application developers in designing correct and performant transaction patterns for CockroachDB, covering transaction lifetime, implicit vs explicit transactions, retry handling with exponential backoff, pushing invariants into SQL, selective pessimistic locking, set-based operations, connection pooling, prepared statements, keyset pagination, follower reads, and separating business logic from database logic. Use when building applications on CockroachDB, designing transaction workflows, handling retries, optimizing application-layer database interactions, or configuring connection pools. +compatibility: "CockroachDB >= 22.1. Works with or without a live database connection. With connection, requires appropriate privileges on target tables." +metadata: + author: cockroachdb + version: "1.0" +--- + +# Designing Application Transactions + +Guides application developers through the design principles and implementation patterns needed to build correct, performant, and resilient applications on CockroachDB. Covers the full spectrum from transaction scoping and retry logic to connection pooling and observability. + +**Complement to SQL skills:** For SQL syntax, schema design, and query optimization, see [cockroachdb-sql](../../query-and-schema-design/cockroachdb-sql/SKILL.md). For benchmarking transaction formulations under contention, see [benchmarking-transaction-patterns](../benchmarking-transaction-patterns/SKILL.md). 
+ +## When to Use This Skill + +- Designing transaction boundaries for a CockroachDB application +- Implementing client-side retry logic with exponential backoff +- Deciding between implicit and explicit transactions +- Choosing between optimistic and pessimistic concurrency control +- Replacing read-modify-write loops with atomic SQL +- Configuring connection pools (HikariCP, pgbouncer, etc.) +- Implementing keyset pagination instead of OFFSET/LIMIT +- Using follower reads for reporting and analytics queries +- Separating business orchestration from database transactions +- Using prepared statements for performance and security +- Selecting explicit column projections instead of SELECT * +- Testing application behavior under concurrency +- Monitoring application-level database performance + +## Prerequisites + +- Familiarity with CockroachDB's SERIALIZABLE isolation level +- Understanding of ACID transaction semantics +- Access to application source code for transaction design changes +- SQL connection to a CockroachDB cluster (for testing and validation) + +## Steps + +### 1. Keep Transactions Short-Lived + +Transactions must include only the minimal set of SQL operations needed for one atomic state change. Do not place remote API calls, service-to-service requests, loops, expensive computation, or artificial waits inside a CockroachDB transaction. + +Long-lived transactions increase intent lifetime, contention, and retry probability in CockroachDB's distributed, optimistic-concurrency architecture. 
+ +**Anti-pattern:** + +```java +@Transactional +public void createOrder(Order order) { + orderRepository.save(order); + paymentGateway.charge(order); // external call inside TX +} +``` + +**Correct approach — split the logic:** + +```java +@Transactional +public void createOrderRecord(Order order) { + orderRepository.save(order); +} + +// Outside the transaction +paymentGateway.charge(order); +``` + +**Why it matters:** +- Active intents block concurrent writers, reducing cluster throughput +- Competing transactions are more likely to encounter `40001` retry errors +- External work inside a retried transaction may run twice, causing duplicate side effects +- Long transactions tie up connections and memory, reducing concurrency + +### 2. Use Implicit Transactions for Single Statements + +CockroachDB automatically wraps each individual SQL statement as a transaction in autocommit mode. For single `INSERT`, `UPDATE`, `DELETE`, or `SELECT` statements, do not wrap in explicit `BEGIN`/`COMMIT`. + +**Preferred:** + +```sql +INSERT INTO orders (id, status) +VALUES (gen_random_uuid(), 'open'); +``` + +**Avoid:** + +```sql +BEGIN; +INSERT INTO orders (id, status) +VALUES (gen_random_uuid(), 'open'); +COMMIT; +``` + +**Benefits:** Simpler code paths, lower latency (fewer round trips), less resource usage, and fewer retry concerns since single-statement transactions are easier for CockroachDB to retry automatically. + +### 3. Use Explicit Transactions for Grouped Statements and Handle Retries + +When multiple SQL operations must succeed or fail together, use explicit transactions with `BEGIN`/`COMMIT`. Because CockroachDB defaults to SERIALIZABLE isolation, transaction retries are a normal part of correct execution under contention. 
+
+```sql
+BEGIN;
+  UPDATE accounts SET balance = balance - 100 WHERE id = 1;
+  UPDATE accounts SET balance = balance + 100 WHERE id = 2;
+COMMIT;
+```
+
+**Client-side retry loop with exponential backoff** (psycopg 3 shown; adapt the exception class and transaction API to your driver):
+
+```python
+import random
+import time
+
+from psycopg.errors import SerializationFailure  # SQLSTATE 40001
+
+def execute_with_retry(conn, txn_logic):
+    backoff = 0.1
+    while True:
+        try:
+            # conn.transaction() commits on success, rolls back on error
+            with conn.transaction():
+                txn_logic(conn)
+            return
+        except SerializationFailure:
+            time.sleep(backoff + random.uniform(0, 0.1))
+            backoff = min(backoff * 2, 2.0)
+```
+
+**Advanced retry with the cockroach_restart savepoint protocol:**
+
+```sql
+BEGIN;
+SAVEPOINT cockroach_restart;
+-- transactional work
+RELEASE SAVEPOINT cockroach_restart;
+COMMIT;
+```
+
+**WARNING: Generic savepoints do NOT work as retry mechanisms.** CockroachDB aborts the entire transaction on a `40001` serialization failure. Using `ROLLBACK TO SAVEPOINT` on a regular savepoint cannot recover -- the transaction remains in an aborted state. Only the special `SAVEPOINT cockroach_restart` protocol (where the client catches the error, rolls back to the savepoint, and re-executes the work) is supported. For most applications, a full-transaction retry loop is simpler and recommended.
+
+**SQLSTATE guidance:**
+
+| Code | Meaning | Action |
+|-----------------|-----------------------------------------|-------------------------------------------------------|
+| `40001` | Serialization / retryable | Retry the entire unit of work with backoff and jitter |
+| `40003` | Ambiguous result / indeterminate commit | Do not blindly replay non-idempotent work |
+| `08xx` / `57xx` | Network or server transient issues | Retry carefully, account for ambiguous commits |
+| `23xxx` | Constraint and application errors | Usually should not be retried |
+
+### 4. Mark Read-Only Transactions Where Applicable
+
+Read-only transactions perform retrieval only and make no writes.
Marking them as read-only allows CockroachDB to avoid unnecessary write intents, reduce contention with writers, and enable follower or bounded-staleness reads. + +```sql +BEGIN; +SET TRANSACTION READ ONLY; +SELECT * FROM customers WHERE region = 'US-East'; +COMMIT; +``` + +### 5. Push Invariants into SQL — Avoid Read-Modify-Write Loops + +Do not fetch state into application code, modify it in memory, and write it back. Prefer atomic SQL, constraints, guarded UPDATEs, UPSERT, INSERT ... ON CONFLICT, and CTE-based mutations. + +**Anti-pattern:** + +```python +balance = db.fetch("SELECT balance FROM accounts WHERE id = 123") +balance += 100 +db.execute("UPDATE accounts SET balance = %s WHERE id = 123", (balance,)) +``` + +**Preferred atomic SQL:** + +```sql +UPDATE accounts +SET balance = balance + 100 +WHERE id = 123; +``` + +**Guarded write with invariant enforcement:** + +```sql +UPDATE customer_daily_limits +SET used_total = used_total + $2 +WHERE customer_id = $1 + AND day = current_date + AND used_total + $2 <= daily_limit; +``` + +**Atomic CTE pattern:** + +```sql +WITH limit_row AS ( + SELECT customer_id, day + FROM customer_daily_limits + WHERE customer_id = $1 AND day = current_date + FOR UPDATE +), spend AS ( + UPDATE customer_daily_limits AS l + SET remaining_limit = l.remaining_limit - $2, + used_total = l.used_total + $2 + FROM limit_row + WHERE l.customer_id = limit_row.customer_id + AND l.day = limit_row.day + AND l.remaining_limit >= $2 + RETURNING l.customer_id, l.day +), ins AS ( + INSERT INTO transfers (customer_id, amount, direction, created_at) + SELECT $1, $2, 'debit', now() + FROM spend + RETURNING id AS transfer_id +) +SELECT transfer_id FROM ins; +``` + +**Key approaches:** +- Use atomic updates: `UPDATE ... SET col = col + 1` +- Use version or timestamp checks in WHERE clauses for optimistic concurrency +- Enforce business rules with `UNIQUE`, `CHECK`, `NOT NULL`, and `FOREIGN KEY` constraints +- Use `UPSERT` or `INSERT ... 
ON CONFLICT` instead of read-before-write existence checks +- Use CTEs to keep multi-step logic atomic + +### 6. Use SELECT ... FOR UPDATE Selectively + +CockroachDB defaults to optimistic concurrency, which works well for most workloads. For hot rows or contention-heavy read-before-write paths, `SELECT ... FOR UPDATE` reduces retry churn by making contenders wait instead of race. + +```sql +BEGIN; +SELECT balance FROM accounts WHERE id = 1 FOR UPDATE; +UPDATE accounts SET balance = balance - 100 WHERE id = 1; +COMMIT; +``` + +**Use when:** +- The same rows are updated frequently by many concurrent transactions +- Optimistic retries are causing thrashing +- Consistency before write is required (inventory, financial transfers) + +**Counterintuitive contention insight:** Adding more application pods or threads targeting the same hot rows does NOT increase throughput -- it decreases it. With N concurrent writers on the same row, only 1 can commit per round; the other N-1 are aborted with `40001` and must retry. More concurrency on hot data means more wasted work and lower TPS. Solutions: use `SELECT ... FOR UPDATE` to serialize access, use atomic `UPDATE SET balance = balance + amount` to eliminate the read-modify-write cycle, or distribute writes across multiple rows. + +**Trade-off:** Overusing pessimistic locks can introduce waiting chains or deadlocks. Reserve for hot paths and contention-heavy workloads. + +### 7. Use Set-Based Operations Over Row-by-Row Loops + +CockroachDB performs best with set-oriented SQL rather than many small client-driven statements. This reduces round trips, shortens contention windows, and improves throughput. 
+ +**Row-by-row anti-pattern:** + +```python +for row in rows: + db.execute( + "UPDATE accounts SET balance = balance + 10 WHERE id = %s", + (row.id,) + ) +``` + +**Set-based preferred:** + +```sql +UPDATE accounts +SET balance = balance + 10 +WHERE region = 'US-East'; +``` + +**Batch INSERT:** + +```sql +INSERT INTO trades (id, symbol, price) +VALUES + (1, 'AAPL', 180), + (2, 'GOOG', 125), + (3, 'AMZN', 140); +``` + +**Batch UPDATE with UNNEST:** + +```sql +WITH incoming AS ( + SELECT * + FROM UNNEST( + ARRAY['u1', 'u2', 'u3']::STRING[], + ARRAY['active', 'inactive', 'active']::STRING[] + ) AS t(id, new_status) +) +UPDATE users AS u +SET status = incoming.new_status, + updated_at = now() +FROM incoming +WHERE u.id = incoming.id; +``` + +**Maintenance batching with LIMIT:** + +```sql +DELETE FROM sessions +WHERE expires_at < now() +LIMIT 10000; +``` + +**JDBC batching (Java):** Use `addBatch`/`executeBatch` instead of per-row `executeUpdate`. This sends all rows in a single network round trip rather than N individual round trips, eliminating idle time that can account for ~50% of transaction latency in chatty workloads. + +**Declarative TTL:** + +```sql +ALTER TABLE events +SET (ttl_expiration_expression = 'created_at + INTERVAL ''7 DAY'''); +``` + +### 8. Use Follower Reads for Non-Critical Queries + +Many analytics, dashboard, and display-oriented queries do not need the absolute latest value. CockroachDB supports follower reads and bounded-staleness reads from follower replicas with lower latency. 
+ +**Basic follower read:** + +```sql +SELECT * FROM orders +AS OF SYSTEM TIME '-5s'; +``` + +**Bounded staleness:** + +```sql +SELECT * FROM inventory +AS OF SYSTEM TIME with_max_staleness(INTERVAL '10s'); +``` + +**Read-write split pattern for heavy reads:** When a workflow reads a large payload (e.g., KYC JSON document) and then updates a status field, split it into three phases: (1) read outside the transaction with `AS OF SYSTEM TIME` for a conflict-free snapshot, (2) process in the application layer, (3) start a short write-only transaction. This avoids holding write intents during the heavy read. + +**Use when:** Dashboards, analytics, ETL, display-only reads, or large-payload workflows where the read and write can be separated. + +**Avoid when:** The workflow requires the latest transactional state for a subsequent write decision. + +### 9. Use Keyset Pagination Instead of OFFSET/LIMIT + +As the OFFSET grows, CockroachDB must scan and discard more rows. Keyset pagination uses the last row's ordered key values to jump directly to the next page. + +**OFFSET/LIMIT (inefficient at depth):** + +```sql +SELECT id, order_date, customer_id +FROM orders +ORDER BY id +LIMIT 100 OFFSET 5000; +``` + +**Keyset pagination (preferred):** + +```sql +SELECT id, order_date, customer_id +FROM orders +WHERE id > 5000 +ORDER BY id +LIMIT 100; +``` + +**Multi-column keyset:** + +```sql +SELECT id, created_at, customer_id +FROM orders +WHERE (created_at, id) > ('2025-01-01 00:00:00', 5000) +ORDER BY created_at, id +LIMIT 100; +``` + +**Trade-off:** Keyset pagination is ideal for next/previous navigation but not for arbitrary "jump to page 73" UX. + +### 10. Use Prepared Statements for Performance and Security + +Prepared statements reuse query structure and bind new values, improving performance through plan reuse and protecting against SQL injection. 
+
+**Unsafe dynamic string concatenation:**
+
+```python
+query = f"SELECT * FROM users WHERE username = '{user_input}'"
+cursor.execute(query)
+```
+
+**Prepared / parameterized execution:**
+
+```python
+cursor.execute("SELECT * FROM users WHERE username = %s;", (user_input,))
+```
+
+**Plan reuse:**
+
+```sql
+PREPARE get_balance AS
+SELECT balance FROM accounts WHERE id = $1;
+
+EXECUTE get_balance(1001);
+EXECUTE get_balance(2002);
+```
+
+### 11. Use Column Projections Instead of SELECT *
+
+Select only the columns you need. `SELECT *` increases network payload, memory usage, and CPU cost, and can rule out index-only scans that a narrower projection would allow.
+
+```sql
+-- Avoid
+SELECT * FROM users WHERE id = 101;
+
+-- Preferred
+SELECT name, email FROM users WHERE id = 101;
+```
+
+**Schema evolution impact:** If a later schema change adds `profile_picture BYTEA`, queries using `SELECT *` automatically pull that extra data. Explicit projections avoid this hidden performance regression.
+
+### 12. Design Keys and Indexes to Distribute Load
+
+Sequential or monotonically increasing primary keys create write hotspots. Keys and indexes should distribute reads and writes across ranges evenly.
+
+**Hotspot anti-pattern:**
+
+```sql
+CREATE TABLE orders (
+    id SERIAL PRIMARY KEY,
+    customer_id UUID,
+    region STRING
+);
+```
+
+**Randomized key:**
+
+```sql
+CREATE TABLE orders (
+    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
+    customer_id UUID,
+    region STRING
+);
+```
+
+**Hash-sharded index:**
+
+```sql
+CREATE INDEX orders_by_id_hash
+ON orders (id)
+USING HASH WITH (bucket_count = 16);
+```
+
+**Composite key for natural distribution:**
+
+```sql
+CREATE TABLE sales (
+    region_id STRING,
+    order_id UUID DEFAULT gen_random_uuid(),
+    PRIMARY KEY (region_id, order_id)
+);
+```
+
+**Enforce explicit PKs cluster-wide:**
+
+```sql
+SET CLUSTER SETTING sql.defaults.require_explicit_primary_keys.enabled = true;
+```
+
+### 13. Configure Connection Pooling
+
+Opening new database connections is expensive. Pooling reuses live connections to improve performance and prevent overload.
+
+**HikariCP guidance:**
+
+```yaml
+maximumPoolSize: (vCPUs * 4) / number_of_pool_instances
+minimumIdle: equal to maximumPoolSize
+maxLifetime: 30 min (add jitter +/- 5 min)
+idleTimeout: 5-10 min typical
+keepaliveTime: slightly shorter than infrastructure timeout (~5 min)
+connectionTimeout: 10-30 s typical
+autoCommit: true unless using explicit transactions only
+```
+
+**Example stable configuration:**
+
+```yaml
+maximum-pool-size: 12
+minimum-idle: 12
+max-lifetime: 1800000
+idle-timeout: 600000
+keepalive-time: 300000
+connection-timeout: 10000
+auto-commit: true
+pool-name: ingestionPool
+```
+
+### 14. Separate Business Logic from Database Logic
+
+CockroachDB should manage ACID reads, writes, and schema-level integrity. The application layer should orchestrate workflows, external services, queues, and long-running work.
+
+**Inside the transaction:**
+- Reads, writes, constraints, short guarded state transitions
+
+**Outside the transaction:**
+- HTTP calls, RPC/service calls, email, payment providers, queue publishing
+
+**Asynchronous workflow pattern:**
+
+```python
+def handle_order(order):
+    db.execute("INSERT INTO orders (id, status) VALUES (%s, %s)", (order.id, 'PENDING'))
+    publish_event('process_order', {'order_id': order.id})
+```
+
+### 15. Respect the 16MB Transaction Payload Limit
+
+CockroachDB has a practical limit of ~16MB per transaction payload. This limit applies to the TOTAL data written in a single transaction, not just individual rows.
+ +**Two ways to hit the limit:** +- One large row (e.g., a 15MB JSON document) +- Many moderate rows in one transaction (e.g., 25 INSERTs of 500KB each = 12.5MB) + +**Guidelines:** +- Keep individual rows under 1MB +- Keep total transaction payload under 4MB +- Limit transactions to <10 statements +- Chunk large documents into 64-256KB pieces +- Store blobs >1MB in object storage (S3/GCS) with a database reference +- Break multi-statement transactions into smaller batches (commit every 5-10 statements) + +**Exceeding the limit causes `split failed while applying backpressure to Put` errors:** large Raft proposals block consensus, range splits stall, and the system applies backpressure. + +### 16. Use Session Guardrails + +Set session-level guardrails to catch runaway queries and missing WHERE clauses during development and testing: + +```sql +SET transaction_rows_read_err = 10000; +SET transaction_rows_written_err = 1000; +``` + +These cause transactions that exceed the thresholds to fail with an explicit error rather than silently consuming cluster resources. + +### 17. Test and Optimize Under Concurrency + +Single-user correctness is not sufficient. Test with realistic concurrency to surface retries, hotspots, contention, and workload-specific bottlenecks. + +**Quick start:** + +```bash +cockroach workload init bank 'postgresql://root@localhost:26257?sslmode=disable' +cockroach workload run bank --concurrency=64 --duration=10m +``` + +See [monitoring-and-concurrency-testing](references/monitoring-and-concurrency-testing.md) for detailed contention queries, validation checklists, and Prometheus metrics. + +### 18. Monitor for Performance and Contention + +Actively monitor query latency, contention, retries, and data distribution using `EXPLAIN ANALYZE`, `crdb_internal.transaction_contention_events`, DB Console SQL Activity, and Key Visualizer. 
+ +See [monitoring-and-concurrency-testing](references/monitoring-and-concurrency-testing.md) for live contention queries, Prometheus metrics, and external monitoring integration. + +## Decision Guide + +| Scenario | Recommended Pattern | +|---------------------------------------------|--------------------------------------| +| Single SQL statement | Implicit transaction (autocommit) | +| Multiple statements, all-or-nothing | Explicit transaction with retry loop | +| Read current state before write on hot rows | `SELECT ... FOR UPDATE` | +| Historical, display, or reporting read | `AS OF SYSTEM TIME` / follower reads | +| Batch of records in memory | `UNNEST` / `VALUES` / batch SQL | +| Multi-step business rule in one operation | Single-statement CTE | + +## Safety Considerations + +- Always implement retry logic for `40001` serialization errors +- Make operations idempotent so retries do not cause duplicate side effects (use `INSERT ... ON CONFLICT DO NOTHING`) +- Do not use stale snapshot reads as authoritative preconditions for writes +- Do not run `EXPLAIN ANALYZE` on production queries that modify data +- Be cautious adding indexes to high-traffic tables during peak hours + +## References + +- [CockroachDB Transactions Documentation](https://www.cockroachlabs.com/docs/stable/transactions) +- [Advanced Client-Side Transaction Retries](https://www.cockroachlabs.com/docs/stable/advanced-client-side-transaction-retries) +- [SQL Performance Best Practices](https://www.cockroachlabs.com/docs/stable/performance-best-practices-overview) +- [Follower Reads and Bounded Staleness](https://www.cockroachlabs.com/docs/stable/follower-reads) +- [Optimize Statement Performance](https://www.cockroachlabs.com/docs/stable/make-queries-fast) +- [Row-Level TTL](https://www.cockroachlabs.com/docs/stable/row-level-ttl) +- [Schema Design and Indexes](https://www.cockroachlabs.com/docs/stable/schema-design-indexes) +- [SQL Injection 
Prevention](https://www.cockroachlabs.com/docs/stable/sql-injection-prevention) +- [Architecture: Transaction Layer](https://www.cockroachlabs.com/docs/stable/architecture/transaction-layer) +- [JPA Best Practices: Explicit and Implicit Transactions](https://blog.cloudneutral.se/jpa-best-practices-explicit-and-implicit-transactions) +- [Deep Dive into Transaction Retry Failures](https://www.mindfulchase.com/explore/troubleshooting-tips/databases/deep-dive-into-transaction-retry-failures-in-cockroachdb-root-causes-and-fixes.html) +- [Comparing Multi-Statement vs Single-Statement Transactions](https://andrewdeally.medium.com/comparing-multi-statement-vs-single-statement-transactions-for-account-transfers-in-sql-09190b116e64) +- [Set-Based Operations with CockroachDB](https://andrewdeally.medium.com/set-based-operations-with-cockroachdb-c9f371992dc7) +- [Bulk Rewrites with the CockroachDB JDBC Driver](https://blog.cloudneutral.se/cockroachdb-jdbc-driver-part-iii-bulk-rewrites) +- [What is a Database Hotspot?](https://www.cockroachlabs.com/blog/the-hot-content-problem-metadata-storage-for-media-streaming/) +- [CockroachDB Transaction Demo](https://github.com/cockroachdb/cockroach-transaction-demo) +- [CockroachDB Best Practices & Anti-Patterns Demo](https://github.com/viragtripathi/cockroachdb-best-practices-demo) -- 10 runnable Java demos covering retries, batching, PK hotspots, guardrails, chunking, and multi-region +- [CockroachDB JDBC Wrapper](https://github.com/viragtripathi/cockroachdb-jdbc-wrapper) -- automatic retry library for Java/JDBC applications diff --git a/skills/application-development/designing-application-transactions/references/monitoring-and-concurrency-testing.md b/skills/application-development/designing-application-transactions/references/monitoring-and-concurrency-testing.md new file mode 100644 index 0000000..0a17492 --- /dev/null +++ 
b/skills/application-development/designing-application-transactions/references/monitoring-and-concurrency-testing.md @@ -0,0 +1,112 @@ +# Monitoring and Concurrency Testing Reference + +Detailed guidance for testing application behavior under concurrency and monitoring CockroachDB for performance and contention issues. + +## Testing Under Concurrency + +Single-user correctness is not sufficient. Test with realistic concurrency to surface retries, hotspots, contention, and workload-specific bottlenecks. + +### Workload Simulation + +```bash +cockroach workload init bank 'postgresql://root@localhost:26257?sslmode=disable' +cockroach workload run bank --concurrency=64 --duration=10m +``` + +### Python Multithreading Simulation + +```python +import threading +from myapp import execute_transaction + +threads = [] +for _ in range(50): + t = threading.Thread(target=execute_transaction) + t.start() + threads.append(t) + +for t in threads: + t.join() +``` + +### Contention Inspection + +```sql +SELECT * +FROM crdb_internal.transaction_contention_events +ORDER BY contention_duration DESC +LIMIT 10; +``` + +### Minimum Validation Checklist + +- Run concurrent workload tests +- Inspect `crdb_internal.transaction_contention_events` +- Review DB Console SQL Activity Statements view +- Use `EXPLAIN` and `EXPLAIN ANALYZE` on critical queries +- Verify retry logic and idempotency in the application path + +**When to test:** Before launching high-volume workloads, after schema or key redesigns, when adding regions, and during release load testing. + +## Monitoring for Performance and Contention + +Actively monitor query latency, contention, retries, and data distribution. Regular inspection detects early warning signs such as full table scans, long-running transactions, lock waits, and hot ranges. 
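
Latency distributions from a harness like the Python simulation above can be summarized client-side before reaching for external tooling. A sketch using only the standard library; the percentile indices assume `statistics.quantiles` with `n=100`:

```python
import statistics


def latency_summary(latencies_ms):
    """Summarize per-transaction latencies (in ms) as p50/p95/p99."""
    # quantiles(n=100) returns the 1st..99th percentile cut points,
    # so index 49 is the median, 94 the p95, and 98 the p99.
    q = statistics.quantiles(sorted(latencies_ms), n=100)
    return {"p50": q[49], "p95": q[94], "p99": q[98]}
```

Comparing p99 against p50 after a run is a quick contention signal: a wide gap that appears under concurrency but not single-threaded usually means retries or lock waits, which `crdb_internal.transaction_contention_events` can confirm.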
+ +### Explain Plans + +```sql +EXPLAIN SELECT * FROM orders WHERE customer_id = 'abc123'; +EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 'abc123'; +``` + +### Live Contention Query + +```sql +WITH waits AS ( + SELECT + lh.database_name, lh.schema_name, lh.table_name, lh.index_name, + lh.lock_key_pretty, lh.lock_key, + lh.txn_id AS blocking_txn_id, + lw.txn_id AS waiting_txn_id + FROM crdb_internal.cluster_locks AS lh + JOIN crdb_internal.cluster_locks AS lw + ON lh.lock_key = lw.lock_key + WHERE lh.granted = true + AND lw.granted = false +) +SELECT + w.database_name, w.schema_name, w.table_name, w.index_name, + w.lock_key_pretty, + w.blocking_txn_id, + qh.query AS blocking_sql, + w.waiting_txn_id, + qw.query AS waiting_sql +FROM waits AS w +LEFT JOIN crdb_internal.cluster_queries AS qh + ON qh.txn_id = w.blocking_txn_id +LEFT JOIN crdb_internal.cluster_queries AS qw + ON qw.txn_id = w.waiting_txn_id +ORDER BY w.table_name, w.index_name, w.lock_key_pretty; +``` + +### Key Visualizer + +Use the DB Console heatmap to identify hot ranges, index skew, and uneven distribution. + +### Key Prometheus Metrics + +- `sql.transactions.retries` — retry frequency +- `sql.transactions.duration` — transaction duration distribution +- `sql.distsql.flows.total` — distributed SQL flow count +- `kv.range.write_bytes_per_second` — write throughput per range +- `kv.range.requests.slow.latch` — slow latch acquisitions indicating contention + +### External Monitoring Integration + +Integrate with Prometheus and Grafana to build dashboards tracking the metrics above. Set alerts on retry rate spikes, p99 latency increases, and hot range detection. 
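
As a concrete starting point for those alerts, a Prometheus rule might look like the following. This is a sketch: the metric name assumes the dotted name above is exported with dots replaced by underscores, and both the name and the thresholds should be verified against your cluster's actual `/metrics` output before use.

```yaml
groups:
  - name: cockroachdb-contention
    rules:
      - alert: HighTransactionRetryRate
        # Metric name assumed from sql.transactions.retries above; verify
        # the exported name against the cluster's Prometheus endpoint.
        expr: rate(sql_transactions_retries[5m]) > 10
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Transaction retry rate has exceeded 10/s for 10 minutes"
```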
+ +## References + +- [Performance Tuning Guide](https://www.cockroachlabs.com/docs/stable/performance-best-practices-overview) +- [Make Queries Fast](https://www.cockroachlabs.com/docs/stable/make-queries-fast) +- [Troubleshooting CockroachDB Performance](https://www.mindfulchase.com/explore/troubleshooting-tips/databases/troubleshooting-cockroachdb-performance-in-enterprise-deployments.html) diff --git a/skills/application-development/designing-multi-region-applications/SKILL.md b/skills/application-development/designing-multi-region-applications/SKILL.md new file mode 100644 index 0000000..ddac074 --- /dev/null +++ b/skills/application-development/designing-multi-region-applications/SKILL.md @@ -0,0 +1,328 @@ +--- +name: designing-multi-region-applications +description: Guides developers in selecting and implementing multi-region patterns for CockroachDB applications, covering active-passive vs active-active architectures, REGIONAL BY ROW, GLOBAL tables, manual geo-partitioning with lease preferences, and live demo setup with validation queries. Use when designing multi-region database topologies, choosing between REGIONAL BY ROW and manual partitioning, building multi-region demos, or optimizing cross-region latency. +compatibility: "CockroachDB >= 22.1 with multi-region licensed features. Requires a multi-region cluster or cockroach demo with locality flags." +metadata: + author: cockroachdb + version: "1.0" +--- + +# Designing Multi-Region Applications + +Guides developers through selecting the right multi-region pattern for their CockroachDB application and implementing it with proper validation. Covers the decision model for choosing between regular regional tables, `REGIONAL BY ROW`, `GLOBAL` tables, and manual geo-partitioning, plus a hands-on demo framework for comparing approaches. + +**Complement to other skills:** For transaction design patterns, see [designing-application-transactions](../designing-application-transactions/SKILL.md). 
For SQL syntax and schema design, see [cockroachdb-sql](../../query-and-schema-design/cockroachdb-sql/SKILL.md). + +## When to Use This Skill + +- Deciding how to model multi-region read/write behavior in CockroachDB +- Choosing between active-active and active-passive architectures +- Evaluating `REGIONAL BY ROW` vs manual geo-partitioning +- Understanding `GLOBAL` table behavior and trade-offs +- Designing for local reads and writes in multiple regions +- Building or presenting a multi-region demo or workshop +- Validating leaseholder placement and zone configurations +- Optimizing cross-region transaction latency + +**Do not use this skill** when the question is only about SQL syntax, indexing, or generic schema design with no multi-region decision involved. + +## Prerequisites + +- Understanding of CockroachDB range architecture and leaseholder concepts +- Multi-region cluster or `cockroach demo` with locality flags for testing +- Knowledge of application write patterns (single-region vs multi-region) + +## Pattern Selection + +### Step 1: Identify the Application Write Model + +Ask first: **is there one write home, or many?** + +- If the application has **one primary region for read/write**, start with a primary-region / regular regional-table model or a manually configured active-passive design. +- If the application needs **low-latency read/write in multiple regions**, evaluate manual geo-partitioning or `REGIONAL BY ROW`. +- If the table is mostly **reference data** that should read fast everywhere and the write path is not the main focus, consider `GLOBAL` tables. + +### Step 2: Choose the Pattern + +#### A. 
Regular Regional Tables (Active-Passive) + +**Use when:** +- The application has one primary region for RW +- Remote regions are secondary or read-mostly +- Simplicity matters more than region-local writes everywhere + +**Characteristics:** +- All leaseholders stay in the active region +- Replicas in other regions provide resiliency and single-region-failure survival +- Indicative latency: ~20ms writes, ~2-5ms reads (local region) + +**Recommendation:** Prefer the higher-level multi-region abstractions first unless the user explicitly needs manual control over partitions, voters, and lease preferences. + +#### B. Manual Geo-Partitioning with Region-Specific Leaseholders + +**Use when:** +- The application is active-active +- The data model is region-keyed +- The team wants explicit operational control +- Understanding internal mechanics (partitions, voters, lease preferences) is important + +**Characteristics:** +- Region-specific leaseholder pattern keeps writes around ~20ms and reads around ~2-5ms +- The application must enforce reads and writes for a key in the same region +- More DDL and operational burden +- Best for teaching internals + +**Example DDL:** + +```sql +CREATE TABLE accounts_manual ( + account_id STRING(40), + owner_id STRING(40) NOT NULL, + status STRING(20) NOT NULL, + region STRING(10) NOT NULL, + CONSTRAINT accounts_manual_pkey PRIMARY KEY (region, account_id) +); + +ALTER INDEX accounts_manual_pkey + PARTITION BY LIST (region) ( + PARTITION na_ne VALUES IN ('NA-NE'), + PARTITION na_mw VALUES IN ('NA-MW'), + PARTITION na_nw VALUES IN ('NA-NW') + ); + +ALTER PARTITION na_ne OF INDEX accounts_manual_pkey + CONFIGURE ZONE USING + num_replicas = 5, + num_voters = 5, + voter_constraints = '{+region=NA-NE: 2, +region=NA-MW: 2, +region=NA-NW: 1}', + lease_preferences = '[[+region=NA-NE]]'; +``` + +#### C. 
REGIONAL BY ROW + +**Use when:** +- The workload is active-active +- Each row naturally belongs to a region +- The team wants local RW in multiple regions without hand-managing partition zone configs +- The goal is the developer-facing multi-region abstraction + +**Characteristics:** +- All configured regions are possible home/leaseholder regions +- Indicative latency: ~20ms writes, ~2-5ms reads (local region) +- Less manual configuration than geo-partitioning +- Default recommendation for region-affine application data + +**Example DDL:** + +```sql +CREATE DATABASE IF NOT EXISTS example_service_rbr; +ALTER DATABASE example_service_rbr PRIMARY REGION 'NA-NE'; +ALTER DATABASE example_service_rbr ADD REGION 'NA-NW'; +ALTER DATABASE example_service_rbr ADD REGION 'NA-MW'; +ALTER DATABASE example_service_rbr SURVIVE REGION FAILURE; + +USE example_service_rbr; + +CREATE TABLE accounts_rbr ( + account_id STRING(40), + owner_id STRING(40) NOT NULL, + status STRING(20) NOT NULL, + region crdb_internal_region + NOT NULL + DEFAULT gateway_region()::crdb_internal_region, + CONSTRAINT accounts_rbr_pkey PRIMARY KEY (region, account_id) +) LOCALITY REGIONAL BY ROW AS region; +``` + +**Local allocation pattern:** + +```sql +WITH candidate AS ( + SELECT id, resource_code + FROM resource_pool + WHERE allocated_at IS NULL + AND region = gateway_region()::crdb_internal_region + ORDER BY random() + LIMIT 1 + FOR UPDATE +) +UPDATE resource_pool +SET allocated_at = now() +WHERE id = (SELECT id FROM candidate); +``` + +#### D. GLOBAL Tables + +**Use when:** +- The table is global/reference-style data +- The workload is primarily about broad read locality rather than region-owned writes + +**Important constraint:** `GLOBAL` tables optimize for fast reads everywhere. Do not position them as an "RW everywhere" pattern without verifying product-specific behavior in the official documentation. + +#### E. 
Survival Goals + +Choose the survival goal based on the trade-off between write latency and durability: + +```sql +-- Survive any single zone failure (default, 3+ zones required): +ALTER DATABASE mydb SURVIVE ZONE FAILURE; + +-- Survive an entire region going down (3+ regions required): +ALTER DATABASE mydb SURVIVE REGION FAILURE; +``` + +| Goal | Requirement | Write Latency | Data Safety | +|------------------------|-------------|---------------------------------|--------------------------| +| SURVIVE ZONE FAILURE | 3+ zones | Low (local consensus) | Survives 1 zone outage | +| SURVIVE REGION FAILURE | 3+ regions | Higher (cross-region consensus) | Survives 1 region outage | + +`SURVIVE REGION FAILURE` adds write latency because Raft consensus must span regions, but guarantees zero data loss even if an entire cloud region goes offline. + +### Pattern Comparison + +| Aspect | Regular Regional | Manual Geo-Partition | REGIONAL BY ROW | GLOBAL | +|--------------------|----------------------------|-----------------------------------------|-------------------------------|---------------------------| +| Write model | Single primary region | Active-active, region-keyed | Active-active, row-affine | Write from primary region | +| Read locality | Local to primary | Local to partition | Local to row region | All regions | +| Operational burden | Low | High | Medium | Low | +| Configuration | Minimal | Explicit partitions, zones, lease prefs | Database-level abstractions | Table-level declaration | +| Best for | Simple primary-region apps | Full control over mechanics | Developer-facing multi-region | Reference data | + +## Live Demo Setup + +For workshops and technical walkthroughs, use a 9-node local demo cluster to make multi-region locality observable. 
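
At any point during the demo, the region and survival-goal configuration described above can be spot-checked from SQL. These statements use the `example_service_rbr` database from the RBR example; output columns vary by version:

```sql
-- Which regions the database spans, and which is primary:
SHOW REGIONS FROM DATABASE example_service_rbr;

-- The replica/voter placement the survival goal produced:
SHOW ZONE CONFIGURATION FROM DATABASE example_service_rbr;
```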
+
+### Cluster Setup
+
+```bash
+cockroach demo \
+  --nodes 9 \
+  --no-example-database \
+  --insecure \
+  --demo-locality=\
+region=NA-NE,zone=NA-NE-1:\
+region=NA-NE,zone=NA-NE-2:\
+region=NA-NE,zone=NA-NE-3:\
+region=NA-MW,zone=NA-MW-1:\
+region=NA-MW,zone=NA-MW-2:\
+region=NA-MW,zone=NA-MW-3:\
+region=NA-NW,zone=NA-NW-1:\
+region=NA-NW,zone=NA-NW-2:\
+region=NA-NW,zone=NA-NW-3
+```
+
+### Demo Flow
+
+**Recommended presentation order:**
+
+1. Start with the manual geo-partitioning path
+2. Show explicit partitioning and zone configuration
+3. Run validation queries and confirm lease homing
+4. Switch to REGIONAL BY ROW
+5. Run RBR validations
+6. Compare operational surface area
+
+### Validation Queries
+
+**Manual partitioning validation:**
+
+```sql
+SHOW RANGES FROM INDEX accounts_manual_pkey WITH DETAILS;
+```
+
+Check that:
+- All expected partition values are present
+- Leaseholder locality matches the partition's region
+- Any mismatch counts as a FAIL for the validation step; otherwise PASS
+
+**RBR validation:**
+
+```sql
+SHOW RANGES FROM TABLE accounts_rbr WITH DETAILS;
+```
+
+Check that:
+- Leaseholder locality coverage includes the expected regions
+- There are no unexpected lease regions
+
+### Demo Talking Points
+
+**Manual path:**
+- Precise control over partitions, voters, replicas, and lease preferences
+- More DDL and operational burden
+- Best for teaching internals and understanding what the database does under the hood
+
+**RBR path:**
+- Keeps application intent front and center
+- Less manual configuration
+- Easier to explain for app teams
+- Still grounded in the same topology
+
+## Cross-Region Latency Guidance
+
+Transaction latency increases when the client is remote from the relevant leaseholder/quorum path.
+
+| Client Location            | Local RW Latency | Cross-Region RW Latency |
+|----------------------------|------------------|-------------------------|
+| Same region as leaseholder | ~10-20ms         | —                       |
+| Different region           | —                | ~50-150ms+              |
+
+**Guidance:**
+- Place latency-sensitive services close to their primary data locality
+- Use follower reads for non-critical display/reporting queries
+- Use multi-region table locality and zone configuration intentionally
+- Do not assume "distributed" means "same latency everywhere"
+
+## Output Expectations
+
+A strong answer using this skill should include:
+
+1. The recommended pattern
+2. Why it fits the workload
+3. What the application must do (routing, row affinity, primary-region assumptions)
+4. What CockroachDB manages automatically vs manually
+5. Expected latency shape or locality behavior
+6. A warning when the user is asking for something the chosen pattern does not optimize for
+
+## Guardrails
+
+- Do not claim that regular primary-region tables provide symmetric low-latency writes from all regions
+- Do not claim that `GLOBAL` is the answer for all-region low-latency writes without supporting documentation
+- When comparing manual geo-partitioning vs `REGIONAL BY ROW`, explicitly call out control vs simplicity
+- When the user wants to understand internal mechanics, bias toward explaining the manual model first
+- When the user wants the best default application pattern, bias toward `REGIONAL BY ROW` for region-affine data
+- Keep region names and locality labels consistent across all SQL
+- Do not mix manual and abstraction approaches in the same explanation unless explicitly comparing them
+- Always include validation, not just DDL
+
+## Multi-Region Migration Checklist
+
+For teams migrating from single-region PostgreSQL/Oracle to multi-region CockroachDB:
+
+1. Deploy nodes with `--locality=region=<region>,zone=<zone>`
+2. Set primary region: `ALTER DATABASE <db> PRIMARY REGION '<region>'`
+3.
Add regions: `ALTER DATABASE <db> ADD REGION '<region>'` (for each)
+4. Set survival goal: `ALTER DATABASE <db> SURVIVE ZONE|REGION FAILURE`
+5. Classify tables: GLOBAL (reference data), REGIONAL BY ROW (row-affine), REGIONAL BY TABLE (default)
+6. Set localities: `ALTER TABLE <table> SET LOCALITY <locality>`
+7. Monitor leaseholder distribution in DB Console
+8. Test failover: kill a zone/region and verify the survival goal holds
+
+## Safety Considerations
+
+- Multi-region configuration changes affect data placement across the cluster
+- Test multi-region configurations on demo or staging clusters before production
+- Validate leaseholder placement after configuration changes
+- Allow time for range rebalancing after topology changes
+
+## References
+
+- [CockroachDB Multi-Region Overview](https://www.cockroachlabs.com/docs/stable/multiregion-overview)
+- [REGIONAL BY ROW Tables](https://www.cockroachlabs.com/docs/stable/regional-tables)
+- [GLOBAL Tables](https://www.cockroachlabs.com/docs/stable/global-tables)
+- [Follower Reads Documentation](https://www.cockroachlabs.com/docs/stable/follower-reads)
+- [CockroachDB Transactions](https://www.cockroachlabs.com/docs/stable/transactions)
+- [Performance Best Practices](https://www.cockroachlabs.com/docs/stable/performance-best-practices-overview)
+- [Cross-Regional Latency Impact on Transactions](https://andrewdeally.medium.com/cross-regional-latency-impact-on-transactions-with-cockroachdb-a38e0dcb82f9)
+- [Query Parallelism with CockroachDB](https://andrewdeally.medium.com/when-and-how-to-use-query-parallelism-with-cockroachdb-df92fbe92845)
+- [CockroachDB Best Practices & Anti-Patterns Demo](https://github.com/viragtripathi/cockroachdb-best-practices-demo) -- Demo 10 covers multi-region patterns with runnable examples