
[Bug]: Events aren't getting consumed by subscribers, causing unprocessed events and memory leaks #14357

@mateomoreno

Description


Package.json file

{
  "name": "marketplace-backend",
  "version": "0.0.1",
  "description": "A starter for Medusa projects.",
  "author": "Medusa (https://medusajs.com)",
  "license": "MIT",
  "keywords": [
    "sqlite",
    "postgres",
    "typescript",
    "ecommerce",
    "headless",
    "medusa"
  ],
  "scripts": {
    "build": "rm -rf .medusa && medusa build",
    "seed": "medusa exec ./src/scripts/seed.ts",
    "start": "medusa start",
    "dev": "medusa develop",
    "test:integration:http": "TEST_TYPE=integration:http NODE_OPTIONS=--experimental-vm-modules jest --silent=false --runInBand --forceExit",
    "test:integration:modules": "TEST_TYPE=integration:modules NODE_OPTIONS=--experimental-vm-modules jest --silent --runInBand --forceExit",
    "test:unit": "TEST_TYPE=unit NODE_OPTIONS=--experimental-vm-modules jest --silent --runInBand --forceExit --detectOpenHandles",
    "predeploy": "medusa db:migrate",
    "uploadimage:test": "node src/scripts/run-cloudinary-upload.js"
  },
  "dependencies": {
    "@anthropic-ai/sdk": "^0.71.2",
    "@google-shopping/products": "^0.8.0",
    "@medusajs/admin-sdk": "2.11.3",
    "@medusajs/cli": "2.11.3",
    "@medusajs/framework": "2.11.3",
    "@medusajs/medusa": "2.11.3",
    "@opentelemetry/api": "^1.9.0",
    "@opentelemetry/exporter-trace-otlp-grpc": "^0.207.0",
    "@opentelemetry/sdk-node": "^0.207.0",
    "@portabletext/block-tools": "^1.1.35",
    "@sanity/block-tools": "^3.70.0",
    "@sanity/client": "^7.3.0",
    "@sanity/schema": "^4.10.2",
    "@sanity/types": "^3.97.1",
    "@sentry/node": "^10.22.0",
    "@sentry/opentelemetry-node": "^7.114.0",
    "@sentry/profiling-node": "^10.22.0",
    "@shopify/storefront-api-client": "^1.0.6",
    "@types/jsdom": "^21.1.7",
    "algoliasearch": "^5.29.0",
    "cloudinary": "^2.6.0",
    "facebook-nodejs-business-sdk": "^24.0.1",
    "googleapis": "^166.0.0",
    "groq": "^3.92.0",
    "jsdom": "^27.0.0"
  },
  "devDependencies": {
    "@medusajs/test-utils": "2.11.3",
    "@swc/core": "1.5.7",
    "@swc/jest": "^0.2.36",
    "@types/jest": "^29.5.13",
    "@types/node": "^20.0.0",
    "@types/react": "^18.3.2",
    "@types/react-dom": "^18.2.25",
    "jest": "^29.7.0",
    "prop-types": "^15.8.1",
    "react": "^18.2.0",
    "react-dom": "^18.2.0",
    "ts-node": "^10.9.2",
    "typescript": "^5.6.2",
    "vite": "^5.2.11",
    "yalc": "^1.0.0-pre.53"
  },
  "engines": {
    "node": ">=20"
  },
  "packageManager": "[email protected]+sha512.a6b2f7906b721bba3d67d4aff083df04dad64c399707841b7acf00f6b133b7ac24255f2652fa22ae3534329dc6180534e98d17432037ff6fd140556e2bb3137e"
}

Node.js version

v22.0.0

Database and its version

PostgreSQL 16.11

Operating system name and version

Ubuntu 22.04 LTS or Ubuntu 24.04 LTS

Browser name

Chrome

What happened?

Just to clarify, we've been running into this issue for a couple of months now, and there are several threads on Discord (ours and others') discussing it. Our Medusa architecture integrates several systems, including Shopify stores, Algolia indexes, and the Anthropic API. They communicate through scheduled jobs, manual workflow triggers, and, importantly, events and subscribers that keep everything in sync. Over the past couple of months we started running into memory leaks in our Redis instance because events were piling up and not getting processed. We made a significant effort to reduce the number of emitted events in order to reduce the load on the Redis instance. While this helped a little, events soon started piling up again, and in most cases events aren't getting picked up by subscribers, so the subscriber logic never executes. These are critical events like product.created, product.updated, and order.created.
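
For context, a minimal sketch of the kind of subscriber we rely on (the handler body and event payload type are illustrative placeholders, not our actual sync logic):

import type { SubscriberArgs, SubscriberConfig } from "@medusajs/framework"

// Illustrative handler: in our setup this would e.g. push the product to Algolia.
export default async function productCreatedHandler({
  event: { data },
  container,
}: SubscriberArgs<{ id: string }>) {
  const logger = container.resolve("logger")
  logger.info(`product.created received for ${data.id}`)
}

export const config: SubscriberConfig = {
  event: "product.created",
}

With the issue described above, handlers like this simply never fire even though the event is visible in Redis.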

Expected behavior

The expected behavior is for events to be added to the event bus/queue and processed, with retries and TTLs as needed. Roughly (a configuration sketch follows this list):

Events emitted → queued in Redis → processed by worker → subscribers triggered
Events processed in order (FIFO)
No events lost
Worker processes queue continuously
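
A rough sketch of the Redis event bus configuration this flow assumes in medusa-config.ts (environment variable names are placeholders, not our exact config):

import { defineConfig } from "@medusajs/framework/utils"

export default defineConfig({
  projectConfig: {
    databaseUrl: process.env.DATABASE_URL,
    redisUrl: process.env.REDIS_URL,
    // The dedicated worker instance runs with MEDUSA_WORKER_MODE=worker
    // so it continuously consumes the queue.
    workerMode: process.env.MEDUSA_WORKER_MODE as "shared" | "worker" | "server",
  },
  modules: [
    {
      // Redis event bus module so emitted events are queued in Redis.
      resolve: "@medusajs/medusa/event-bus-redis",
      options: {
        redisUrl: process.env.EVENTS_REDIS_URL,
      },
    },
  ],
})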

Actual behavior

Events are being added to Redis but are never processed, causing them to pile up and crash the Redis instance

Link to reproduction repo

n/a
