This repository was archived by the owner on Jan 29, 2026. It is now read-only.

Conversation

Contributor

Copilot AI commented Oct 27, 2025

Persist Rate Limit Data Across Restarts

Goal: Implement persistent rate limiting to prevent bypass attacks via server restarts

Implementation Checklist:

  • Create backend/src/api/middleware/rateLimit.js with dual persistence modes
    • File-based persistence (Option 1 - single instance)
    • Redis-based persistence (Option 2 - distributed/production)
    • In-memory fallback for both modes
  • Add persistence functions
    • loadRateLimits() - Load from storage on startup
    • persistRateLimits() - Save to storage periodically
    • startRateLimitPersistence() - Initialize file-based persistence
    • initRateLimitStore() - Initialize Redis persistence
  • Implement cleanup of expired entries
  • Add graceful shutdown handlers (SIGTERM/SIGINT)
  • Create .data directory structure (auto-created on first save)
  • Update backend/src/server.js to initialize rate limiting
  • Add Redis configuration to backend/package.json (optional dependency)
  • Update backend/.env.example with Redis environment variables
  • Add .data/ to .gitignore
  • Create comprehensive test suite (backend/tests/rateLimit.test.js)
  • Document configuration and usage (backend/RATE_LIMITING.md)
  • Address code review feedback

Technical Implementation:

File-based persistence:

  • Serializes the Map to JSON in .data/rate-limits.json (server wiring sketched after this list)
  • Persists every 30 seconds and on SIGTERM/SIGINT
  • Cleans expired entries before each save
  • Successfully tested across server restarts
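
For reference, wiring the file-based mode into backend/src/server.js might look like the sketch below. The function names come from the checklist above; the port handling, console output, and shutdown flow are illustrative assumptions, not the exact code in this PR.

// backend/src/server.js (illustrative excerpt, not the actual file)
import express from 'express';
import { rateLimit, startRateLimitPersistence, stopRateLimitPersistence } from './api/middleware/rateLimit.js';

const app = express();
app.use(rateLimit);

// Load any persisted counters and start the 30-second save interval
startRateLimitPersistence();

const server = app.listen(process.env.PORT || 3000, () => {
  console.log('Server listening with file-based rate limit persistence');
});

// Stop the save interval when the process is asked to shut down
process.on('SIGTERM', () => {
  stopRateLimitPersistence();
  server.close();
});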

Redis-based persistence:

  • Uses Redis sorted sets for efficient time-window queries (sketched after this list)
  • Graceful fallback to in-memory if Redis unavailable
  • Connection error handling with retry strategy
  • Fail-open approach (allows requests if Redis fails)
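
A minimal sketch of that sorted-set check, assuming ioredis plus WINDOW_MS and MAX_REQUESTS constants defined in the middleware (the PR's actual implementation may differ):

// Illustrative sliding-window check backed by a Redis sorted set
async function isWithinLimit(redis, clientId, now) {
  const key = `ratelimit:${clientId}`;
  const windowStart = now - WINDOW_MS;

  // Drop timestamps that fell out of the window, record this request,
  // then count how many remain inside the window
  await redis.zremrangebyscore(key, 0, windowStart);
  await redis.zadd(key, now, `${now}:${Math.random()}`);
  const count = await redis.zcard(key);

  // Let the key expire once the window has fully passed
  await redis.pexpire(key, WINDOW_MS);

  return count <= MAX_REQUESTS;
}

If any of these calls throw, the middleware catches the error and calls next() anyway, which is the fail-open behavior noted above.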

Test Results: All 5 tests passing

  • ✅ Basic rate limiting
  • ✅ Rate limit enforcement (100 requests max; sketched after this list)
  • ✅ File persistence across restart
  • ✅ Expired entries cleanup
  • ✅ Multiple client isolation
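
As an illustration of the enforcement test (a sketch, not the repository's actual backend/tests/rateLimit.test.js), assuming a 100-request limit and Node 18+ with global fetch:

// Illustrative enforcement check against an Express app using the middleware
import assert from 'node:assert';
import express from 'express';
import { rateLimit } from '../src/api/middleware/rateLimit.js';

const app = express();
app.use(rateLimit);
app.get('/ping', (req, res) => res.json({ ok: true }));

const server = app.listen(0);
const base = `http://127.0.0.1:${server.address().port}`;

// The first 100 requests succeed; the 101st should be rejected with 429
for (let i = 0; i < 100; i++) {
  const res = await fetch(`${base}/ping`);
  assert.strictEqual(res.status, 200);
}
const blocked = await fetch(`${base}/ping`);
assert.strictEqual(blocked.status, 429);

server.close();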

Live Server Test:

  • Server starts with file-based persistence
  • Rate limit headers correctly set (X-RateLimit-*; see the sketch after this list)
  • State persists across graceful restarts
  • Data auto-saves on shutdown
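
How those headers might be computed inside the middleware, as a sketch (MAX_REQUESTS, WINDOW_MS, and the reset calculation are assumptions rather than the exact code in this PR):

// Illustrative X-RateLimit-* header handling
function setRateLimitHeaders(res, requests, now) {
  const remaining = Math.max(0, MAX_REQUESTS - requests.length);
  const oldest = requests.length > 0 ? requests[0] : now;
  const resetAt = Math.ceil((oldest + WINDOW_MS) / 1000); // epoch seconds when the window frees up

  res.set('X-RateLimit-Limit', String(MAX_REQUESTS));
  res.set('X-RateLimit-Remaining', String(remaining));
  res.set('X-RateLimit-Reset', String(resetAt));
}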

Code Quality:

  • Fixed spacing in test string concatenation
  • Proper error handling for server startup
  • Prevents duplicate signal handler registration

Security Benefits:

  • Prevents rate limit bypass via server restart
  • Protects against deployment cycle exploitation
  • Maintains rate limit state across crashes and restarts

Fixes #75

Original prompt

This section details the original issue you should resolve.

<issue_title>[Infrastructure] Persist Rate Limit Data Across Restarts</issue_title>
<issue_description>🔒 Priority: MEDIUM - Production Readiness

Background

The current rate limiting implementation at backend/src/api/middleware/rateLimit.js uses an in-memory Map that is lost on server restart, allowing attackers to bypass limits by forcing restarts or exploiting deployment cycles.

Current Implementation - Volatile Rate Limits

// backend/src/api/middleware/rateLimit.js (line 9)
const requestCounts = new Map(); // Lost on restart

export function rateLimit(req, res, next) {
  const clientId = req.clientId || req.ip;
  const now = Date.now();
  const windowStart = now - WINDOW_MS;
  
  // Get or initialize count for this client
  if (!requestCounts.has(clientId)) {
    requestCounts.set(clientId, []);
  }
  
  const requests = requestCounts.get(clientId);
  // ...
}
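
The elided remainder presumably implements the sliding-window count; a sketch of how that typically continues (MAX_REQUESTS = 100 per the limit above, and the 429 response shape is an assumption):

// Presumed continuation inside rateLimit(): keep only timestamps in the window,
// record this request, and reject once the client exceeds the budget
const recent = requests.filter(timestamp => timestamp > windowStart);
recent.push(now);
requestCounts.set(clientId, recent);

if (recent.length > MAX_REQUESTS) {
  return res.status(429).json({ error: 'Too many requests' });
}

next();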

Security Concern

Attack Scenario:

  1. Attacker makes 99 requests (just under 100 limit)
  2. Attacker triggers server restart (crash, deploy, etc.)
  3. Rate limit counter resets to 0
  4. Attacker makes another 99 requests
  5. Repeat after each restart → every cycle grants a fresh quota, so ten restarts yield roughly 10x the intended limit

Recommended Solutions

Option 1: File-Based Persistence (Simple, Single Instance)

// backend/src/api/middleware/rateLimit.js
import fs from 'fs/promises';
import path from 'path';
// Assumed to exist elsewhere in this module: the shared logger and the window constant, e.g.
// import { logger } from '../../utils/logger.js'; // illustrative path
// const WINDOW_MS = 15 * 60 * 1000;

const RATE_LIMIT_FILE = path.join(process.cwd(), '.data', 'rate-limits.json');
const requestCounts = new Map();
let persistenceInterval;

// Load rate limits on startup
async function loadRateLimits() {
  try {
    const data = await fs.readFile(RATE_LIMIT_FILE, 'utf-8');
    const stored = JSON.parse(data);
    
    // Convert stored array back to Map and filter expired entries
    const now = Date.now();
    const windowStart = now - WINDOW_MS;
    
    for (const [clientId, requests] of stored) {
      const validRequests = requests.filter(timestamp => timestamp > windowStart);
      if (validRequests.length > 0) {
        requestCounts.set(clientId, validRequests);
      }
    }
    
    logger.info({ clients: requestCounts.size }, 'Rate limits loaded from disk');
  } catch (error) {
    if (error.code !== 'ENOENT') {
      logger.error({ error }, 'Failed to load rate limits');
    }
  }
}

// Persist rate limits periodically
async function persistRateLimits() {
  try {
    const now = Date.now();
    const windowStart = now - WINDOW_MS;
    
    // Clean up expired entries before persisting
    const activeClients = [];
    for (const [clientId, requests] of requestCounts.entries()) {
      const validRequests = requests.filter(timestamp => timestamp > windowStart);
      if (validRequests.length > 0) {
        activeClients.push([clientId, validRequests]);
      } else {
        requestCounts.delete(clientId);
      }
    }
    
    // Ensure the .data directory exists (auto-created on first save)
    await fs.mkdir(path.dirname(RATE_LIMIT_FILE), { recursive: true });
    await fs.writeFile(RATE_LIMIT_FILE, JSON.stringify(activeClients));
    logger.debug({ clients: activeClients.length }, 'Rate limits persisted to disk');
  } catch (error) {
    logger.error({ error }, 'Failed to persist rate limits');
  }
}

// Start persistence scheduler
export function startRateLimitPersistence() {
  loadRateLimits(); // Load on startup
  
  // Persist every 30 seconds
  persistenceInterval = setInterval(persistRateLimits, 30000);
  
  // Persist on graceful shutdown
  process.on('SIGTERM', async () => {
    await persistRateLimits();
  });
  
  process.on('SIGINT', async () => {
    await persistRateLimits();
  });
}

export function stopRateLimitPersistence() {
  if (persistenceInterval) {
    clearInterval(persistenceInterval);
    persistenceInterval = null;
  }
}

Option 2: Redis Backend (Production, Distributed)

npm install ioredis

// backend/src/api/middleware/rateLimit.js
import Redis from 'ioredis';
// logger, WINDOW_MS, and the in-memory rateLimitMemory() fallback are assumed to be
// defined elsewhere in this module (as in Option 1 above)

const redis = new Redis({
  host: process.env.REDIS_HOST || 'localhost',
  port: parseInt(process.env.REDIS_PORT) || 6379,
  password: process.env.REDIS_PASSWORD,
  db: parseInt(process.env.REDIS_DB) || 0,
  lazyConnect: true
});

let useRedis = false;

// Initialize Redis connection
export async function initRateLimitStore() {
  try {
    await redis.connect();
    useRedis = true;
    logger.info('Rate limiting using Redis backend');
  } catch (error) {
    logger.warn({ error }, 'Redis unavailable, falling back to in-memory rate limiting');
    useRedis = false;
  }
}

export async function rateLimit(req, res, next) {
  const clientId = req.clientId || req.ip;
  const now = Date.now();
  
  if (useRedis) {
    await rateLimitRedis(clientId, now, req, res, next);
  } else {
    rateLimitMemory(clientId, now, req, res, next);
  }
}

async function rateLimitRedis(clientId, now, req, res, next) {
  const key = `ratelimit:${clientId}`;
  
  try {
    // Use Redis sorted set with timestamps as scores
    const windowStart = now - WINDOW_MS;
    
    // Remove old entries
    await redis.zr...


- Fixes clduab11/gemini-flow#75


@coderabbitai
Contributor

coderabbitai bot commented Oct 27, 2025

Important

Review skipped

Bot user detected.

To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting reviews.review_status to false in the CodeRabbit configuration file.

Note

Other AI code review bot(s) detected

CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.


Comment @coderabbitai help to get the list of available commands and usage tips.

Copilot AI and others added 2 commits October 27, 2025 23:57
- Created rate limit middleware with dual persistence modes
- File-based: Auto-saves to .data/rate-limits.json every 30s
- Redis-based: Uses sorted sets for distributed deployments
- Graceful shutdown handlers persist data on SIGTERM/SIGINT
- Expired entries cleanup before each persistence
- Comprehensive test suite with 100% passing tests
- Added documentation and configuration examples

Co-authored-by: clduab11 <[email protected]>

Copilot AI left a comment


Pull Request Overview

This PR implements persistent rate limiting to prevent bypass attacks via server restarts. It introduces dual persistence modes (file-based and Redis-based) with automatic cleanup of expired entries, graceful shutdown handling, and comprehensive test coverage.

Key Changes:

  • Added persistent rate limiting middleware with file-based and Redis-based storage options
  • Implemented automatic persistence on shutdown and periodic saves every 30 seconds
  • Created comprehensive test suite verifying persistence across restarts and cleanup of expired entries

Reviewed Changes

Copilot reviewed 6 out of 8 changed files in this pull request and generated 3 comments.

Summary per file:

  • backend/src/api/middleware/rateLimit.js - New rate limiting middleware with dual persistence modes (file/Redis) and automatic cleanup
  • backend/src/server.js - Integrated rate limiting initialization with storage mode selection based on Redis configuration
  • backend/tests/rateLimit.test.js - Comprehensive test suite covering basic limiting, enforcement, persistence, cleanup, and multi-client isolation
  • backend/package.json - Added test script and ioredis as an optional dependency for Redis support
  • backend/RATE_LIMITING.md - Complete documentation covering configuration, usage, security benefits, and troubleshooting
  • backend/.env.example - Added Redis configuration variables for optional distributed rate limiting

Files not reviewed (1):

  • backend/package-lock.json: Language not supported



chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.


@clduab11
Owner

@gemini-code-assist, review and analyze all changes including resolved conversations.


gemini-code-assist bot left a comment


Code Review

This pull request introduces a robust persistent rate-limiting mechanism, which is a crucial security enhancement to prevent bypass attacks. The implementation thoughtfully includes both file-based and Redis-based persistence modes, along with comprehensive documentation and a good set of tests for the file-based path. My review focuses on improving robustness, security, and test coverage. Key suggestions include adding tests for the Redis implementation, addressing a potential DoS vulnerability with the fallback client identifier, and making minor improvements to logging and shutdown handling. Overall, this is a strong contribution that significantly improves the application's resilience.

@clduab11
Owner

@copilot, title still reflects WIP and I'm unable to squash and merge. Please rectify.

- Fixed spacing in test string concatenation
- Wrapped app.listen() in Promise for proper error handling
- Prevent duplicate signal handler registration
- All tests passing, server starts correctly

Co-authored-by: clduab11 <[email protected]>
Contributor Author

Copilot AI commented Oct 28, 2025

@copilot, title still reflects WIP and I'm unable to squash and merge. Please rectify.

The PR title has been updated to remove "WIP" and the description revised. Code review issues were addressed in commit 7d9a9b3:

  • Fixed spacing in string concatenation
  • Added proper error handling for server startup
  • Prevented duplicate signal handler registration

All tests passing and ready for merge.


Labels

  • codex - OpenAI's Codex
  • documentation - Improvements or additions to documentation
  • enhancement - New feature or request
  • gen/qol improves - General code improvements and cleanup

Development

Successfully merging this pull request may close these issues.

[Infrastructure] Persist Rate Limit Data Across Restarts
