4 changes: 3 additions & 1 deletion claim-db-worker/src/index.ts
@@ -30,7 +30,9 @@ function errorResponse(title: string, message: string, details?: string, status
export default {
async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
// --- Rate limiting ---
-const { success } = await env.CLAIM_DB_RATE_LIMITER.limit({ key: request.url });
+// Use client IP for consistent rate limiting across environments
+const clientIP = request.headers.get('x-agent') || request.headers.get('cf-connecting-ip');
+const { success } = await env.CLAIM_DB_RATE_LIMITER.limit({ key: clientIP! });

🛠️ Refactor suggestion

⚠️ Potential issue

Do not trust client header over CF IP; add fallbacks and avoid null keys

Using x-agent before cf-connecting-ip allows trivial rate-limit evasion by rotating the x-agent value. Also, clientIP can be null in some environments; the non-null assertion (!) only silences TypeScript; it does not prevent a null key at runtime.

Refactor to prefer CF-provided IP, then standard proxy headers, then a dev/testing override, and finally a safe default.

-    // Use client IP for consistent rate limiting across environments
-    const clientIP = request.headers.get('x-agent') || request.headers.get('cf-connecting-ip');
-    const { success } = await env.CLAIM_DB_RATE_LIMITER.limit({ key: clientIP! });
+    // Use client IP for consistent rate limiting across environments (prefer CF header).
+    const clientIP =
+      request.headers.get('cf-connecting-ip') ||
+      request.headers.get('x-forwarded-for')?.split(',')[0]?.trim() ||
+      request.headers.get('x-real-ip') ||
+      // Allow explicit test override last
+      request.headers.get('x-agent') ||
+      'anonymous';
+    let success = true;
+    try {
+      ({ success } = await env.CLAIM_DB_RATE_LIMITER.limit({ key: clientIP }));
+    } catch (err) {
+      // Fail-open to avoid blocking in case the binding is unavailable; adjust if you prefer fail-closed.
+      console.error('RateLimiter error', err);
+      success = true;
+    }
🤖 Prompt for AI Agents
In claim-db-worker/src/index.ts around lines 33 to 35, the code currently trusts
the x-agent header first and uses a non-null assertion for the rate-limit key
which can be null at runtime; instead, prefer the Cloudflare-provided IP
(cf-connecting-ip), then standard proxy headers like x-forwarded-for (taking the
left-most IP), then a controlled dev/testing override header (e.g.
x-dev-client-ip), and finally a safe default string (e.g. "unknown" or
"127.0.0.1"); remove the non-null assertion and build a validated key string
before calling env.CLAIM_DB_RATE_LIMITER.limit({ key }), ensuring you trim and
normalize the chosen header value so it cannot be empty and falls back to the
default.


if (!success) {
return errorResponse('Rate Limit Exceeded', "We're experiencing high demand. Please try again later.", '', 429);
2 changes: 1 addition & 1 deletion claim-db-worker/wrangler.jsonc
@@ -24,7 +24,7 @@
"type": "ratelimit",
"namespace_id": "1006",
"simple": {
"limit": 100,
"limit": 10,
"period": 60,
},
Comment on lines +27 to 29

💡 Verification agent

🧩 Analysis chain

CLAIM_DB limit also tightened to 10 req/min — align expectations

Ensure clients and tests reflect the new ceiling to avoid false alarms. Consider staggered rollout or per-env overrides if needed.


🏁 Script executed:

#!/bin/bash
# Show the wrangler.jsonc around the rate limit config
echo "---- claim-db-worker/wrangler.jsonc (lines 1-100) ----"
sed -n '1,100p' claim-db-worker/wrangler.jsonc || true

echo -e "\n---- All occurrences of 'limit' in JSON/JSONC files ----"
rg --type-add 'jsonc:*.jsonc' --type-add 'json:*.json' -tjson -tjsonc '"limit"' -C 3

echo -e "\n---- References to rate limiting in code/tests ----"
rg -n "limit" -e "claim" -e "db" || true
rg -n "rate" -e "limit" || true

echo -e "\n---- Test files mentioning 10 or 60 ----"
rg -n "10" --glob '*test*' || true
rg -n "60" --glob '*test*' || true

Length of output: 2323


CLAIM_DB rate limit lowered to 10 req/min—update clients & tests

We’ve tightened CLAIM_DB_RATE_LIMITER (limit: 10, period: 60). Any clients or tests assuming a higher ceiling will now hit this threshold. To avoid failures or false alarms:

• tests/test-rate-limits.sh: lower the default TEST_COUNT (currently 110) to ≤ 10 or make it configurable per-env
• Client integrations/docs that reference CLAIM_DB throughput > 10 req/min
• Consider per-environment overrides or a staggered rollout if some workloads need more headroom

🤖 Prompt for AI Agents
In claim-db-worker/wrangler.jsonc around lines 27 to 29, the CLAIM_DB rate
limiter was reduced to 10 requests per 60 seconds which will break tests and
clients that assume higher throughput; update tests/test-rate-limits.sh to lower
TEST_COUNT to 10 or make TEST_COUNT configurable via env vars and default to
≤10, update any client integrations and documentation that reference >10 req/min
to reflect the new limit, and add a per-environment override (env var) or note a
staged rollout plan for workloads that need higher headroom.
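
As a minimal sketch of the configurable TEST_COUNT idea above (the TEST_COUNT_OVERRIDE variable name is hypothetical, not something the script defines today), the default could be capped at the new ceiling while still allowing ad-hoc overrides:

#!/bin/bash
# Hypothetical sketch: prefer an env override, then the first positional argument,
# then a default that matches the new 10 req/min ceiling.
TEST_COUNT=${TEST_COUNT_OVERRIDE:-${1:-10}}
echo "Sending $TEST_COUNT requests against the rate limiter"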

},
7 changes: 6 additions & 1 deletion create-db-worker/src/index.ts
@@ -12,7 +12,12 @@ export { DeleteDbWorkflow };
export default {
async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
// --- Rate limiting ---
-const { success } = await env.CREATE_DB_RATE_LIMITER.limit({ key: request.url });
+// Use client IP for consistent rate limiting across environments
+const clientIP = request.headers.get('x-agent') || request.headers.get('cf-connecting-ip');
+
+console.log(`Client IP: ${clientIP} - Request URL: ${request.url}`);
+
+const { success } = await env.CREATE_DB_RATE_LIMITER.limit({ key: clientIP! });

if (!success) {
return new Response(`429 Failure - rate limit exceeded for ${request.url}`, { status: 429 });
2 changes: 1 addition & 1 deletion create-db-worker/wrangler.jsonc
@@ -27,7 +27,7 @@
"type": "ratelimit",
"namespace_id": "1005",
"simple": {
"limit": 100,
"limit": 10,
"period": 60,
},
},
24 changes: 17 additions & 7 deletions tests/test-rate-limits.sh
@@ -1,18 +1,21 @@
#!/bin/bash

# Test Rate Limits Script
-# Usage: ./tests/test-rate-limits.sh [test_count] [create_db_url] [claim_db_url]
+# Usage: ./tests/test-rate-limits.sh [test_count] [create_db_url] [claim_db_url] [agent_id]

🧹 Nitpick (assertive)

Add strict mode and safer loop quoting

Improve robustness and satisfy shellcheck.

 # Test Rate Limits Script
-# Usage: ./tests/test-rate-limits.sh [test_count] [create_db_url] [claim_db_url] [agent_id]
+# Usage: ./tests/test-rate-limits.sh [test_count] [create_db_url] [claim_db_url] [agent_id]
+set -Eeuo pipefail

Also change the loop and seq usage:

-    for i in $(seq 1 $TEST_COUNT); do
+    for i in $(seq 1 "${TEST_COUNT}"); do
🤖 Prompt for AI Agents
In tests/test-rate-limits.sh around line 4, the script lacks strict shell
settings and uses unsafe loop/seq and unquoted variables; enable robust shell
behavior by adding at the top: set -euo pipefail and IFS=$'\n\t', then update
the loop to avoid unquoted seq and word-splitting (for example replace
constructs like for i in $(seq 1 $test_count) with for i in $(seq 1
"$test_count") or preferably a POSIX-safe while loop using a counter), and
ensure all variable expansions in the loop and commands are quoted (e.g.,
"$create_db_url" "$claim_db_url" "$agent_id") to satisfy shellcheck and prevent
word-splitting.
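
For the counter-based loop the prompt mentions, a minimal sketch (assuming TEST_COUNT is set as in the script and the existing request logic stays inside the loop body) could look like:

# Sketch only: POSIX-safe counter loop with quoted expansions instead of seq.
i=1
while [ "$i" -le "$TEST_COUNT" ]; do
    echo -n "Request $i/$TEST_COUNT: "
    # ... existing curl call from the script goes here ...
    i=$((i + 1))
done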


# Default values
TEST_COUNT=${1:-110}
CREATE_DB_URL=${2:-"http://127.0.0.1:8787"}
CLAIM_DB_URL=${3:-"http://127.0.0.1:9999"}
CREATE_DB_URL=${2:-"https://create-db-temp.prisma.io"}
CLAIM_DB_URL=${3:-"https://create-db.prisma.io"}
AGENT_ID=${4:-"meow"}

echo "🧪 Testing Rate Limits"
echo "======================"
echo "Test Count: $TEST_COUNT"
echo "Create DB URL: $CREATE_DB_URL"
echo "Claim DB URL: $CLAIM_DB_URL"
echo "Agent Header: X-Agent: $AGENT_ID"
echo "User-Agent: prisma-rate-limit-test/$AGENT_ID"
echo ""

# Function to test a worker
@@ -32,8 +35,15 @@ test_worker() {
for i in $(seq 1 $TEST_COUNT); do
echo -n "Request $i/$TEST_COUNT: "

-# Make the request and capture both response body and status code
-response=$(curl -s -w "%{http_code}" -o /tmp/response_$i.json "$endpoint" 2>/dev/null)
+# Make the request with unique agent headers and capture body + status code
+response=$(curl -s \
+  -H "x-agent: $AGENT_ID" \
+  -H "x-Agent: $AGENT_ID" \
+  -H "User-Agent: prisma-rate-limit-test/$AGENT_ID" \
+  -w "%{http_code}" \
+  -o /tmp/response_$i.json \
+  "$endpoint" 2>/dev/null)
+

🛠️ Refactor suggestion

Quote variables (SC2086), remove the duplicate header, and use safe temp files

  • Quote expansions to avoid word splitting/globbing and satisfy shellcheck.
  • Remove duplicate x-Agent header.
  • Use mktemp and trap for cleanup.
-        # Make the request with unique agent headers and capture body + status code
-        response=$(curl -s \
-          -H "x-agent: $AGENT_ID" \
-          -H "x-Agent: $AGENT_ID" \
-          -H "User-Agent: prisma-rate-limit-test/$AGENT_ID" \
-          -w "%{http_code}" \
-          -o /tmp/response_$i.json \
-          "$endpoint" 2>/dev/null)
+        # Make the request with unique agent header and capture body + status code
+        tmp_dir="${TMP_DIR:-$(mktemp -d)}"
+        trap 'rm -rf "$tmp_dir"' EXIT
+        response="$(
+          curl -s \
+            -H "x-agent: ${AGENT_ID}" \
+            -H "User-Agent: prisma-rate-limit-test/${AGENT_ID}" \
+            -w "%{http_code}" \
+            -o "${tmp_dir}/response_${i}.json" \
+            "${endpoint}" 2>/dev/null
+        )"
🧰 Tools
🪛 Shellcheck (0.10.0)

[info] 44-44: Double quote to prevent globbing and word splitting.

(SC2086)

🤖 Prompt for AI Agents
In tests/test-rate-limits.sh around lines 38 to 46, the curl call uses unquoted
variable expansions, repeats the x-Agent header, and writes to a fixed temp
path; fix it by removing the duplicate header, quoting all variable expansions
(e.g. "$AGENT_ID" and "$endpoint") to prevent word-splitting/globbing, create a
unique temp file with mktemp (e.g. tmp=$(mktemp)) and write curl output to that
quoted path, and add a trap to remove the temp file on exit to ensure safe
cleanup.

status_code=${response: -3}

case $status_code in
@@ -52,7 +62,7 @@ test_worker() {
esac

# Small delay between requests
-sleep 0.1
+#sleep 0.05

🧹 Nitpick (assertive)

Optional: throttle to reduce noisy bursts

Consider re-introducing a small sleep (e.g., 20–50ms) to avoid overwhelming remote endpoints during ad-hoc runs.

🤖 Prompt for AI Agents
In tests/test-rate-limits.sh around line 65, the commented-out small sleep
("#sleep 0.05") should be re-enabled to throttle bursts; uncomment it and use a
portable short delay such as sleep 0.02–0.05 (e.g., sleep 0.02 or sleep 0.05) to
reduce request bursts during ad-hoc runs, or if sub-second precision is required
in the environment, replace with a portable usleep/timeout helper that sleeps
for 20–50ms.
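
A small sketch of such a helper (the throttle name is illustrative, not an existing function in the script): try the fractional sleep first and fall back to a whole second where sleep rejects fractions.

# Sketch only: fractional delay where supported, otherwise a 1s fallback.
throttle() {
    sleep 0.05 2>/dev/null || sleep 1
}
# ...called inside the request loop instead of the commented-out sleep:
throttle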

done

echo ""
@@ -76,4 +86,4 @@ echo "- Later requests should be rate limited (429)"
echo "- This confirms rate limiting is working correctly"
echo ""
echo "💡 To test with your actual deployed URLs, run:"
echo " ./tests/test-rate-limits.sh 110 https://create-db-temp.prisma.io https://create-db.prisma.io"
echo " ./tests/test-rate-limits.sh 110 https://create-db-temp.prisma.io https://create-db.prisma.io my-local-agent"

🧹 Nitpick (assertive)

Example invocation: consider demonstrating local defaults

Recommend showing a local/miniflare example first to prevent accidental production load.

🤖 Prompt for AI Agents
In tests/test-rate-limits.sh around line 89, the example invocation only shows
production endpoints which may cause accidental load; add a local/miniflare example
first so users see a safe default. Update the echo line to print a local example
(e.g., using localhost or miniflare URL and a local agent name) before the
existing production example, ensuring both examples are clearly separated and
labeled (local first, then production).
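
A possible shape for the updated footer, assuming the local URLs reuse the script's previous wrangler dev defaults (127.0.0.1:8787 and 127.0.0.1:9999):

echo "💡 Local (wrangler dev), safe default:"
echo "   ./tests/test-rate-limits.sh 10 http://127.0.0.1:8787 http://127.0.0.1:9999 my-local-agent"
echo "💡 Production:"
echo "   ./tests/test-rate-limits.sh 110 https://create-db-temp.prisma.io https://create-db.prisma.io my-local-agent"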