Replies: 3 comments
It seems that the app is opening too many connections or not releasing them properly; I haven't seen anything similar before. What are the server specs, and is there any constraint there? Also, was there any traffic on the system while this happened?
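If you want to rule the connection limit in or out, something like the following should show both the configured ceiling and who is holding the connections. This is only a sketch against the stock docker-compose setup; the db service name and the mediacms user/database are assumptions, so adjust them to whatever your compose file actually sets:

# Configured limit and current total
docker compose exec db psql -U mediacms -d mediacms -c "SHOW max_connections;" -c "SELECT count(*) AS open_connections FROM pg_stat_activity;"

# Break the open connections down by client and state, to see whether web or the celery containers are holding them
docker compose exec db psql -U mediacms -d mediacms -c "SELECT application_name, state, count(*) FROM pg_stat_activity GROUP BY 1, 2 ORDER BY 3 DESC;"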
Hello @mgogoulos,

Specs are (from fastfetch):

So I think there are no particular constraints. There should have been zero or almost zero traffic at that time: the DNS entry, the reverse proxy, and the whole MediaCMS service had just been set up, and it was late in the evening. Since that event, the system has apparently been working "just fine":

gfurlan@helium ~/docker/mediacms main* ❯ docker compose ps 15:28:07
WARN[0000] /home/gfurlan/docker/mediacms/docker-compose.yaml: the attribute `version` is obsolete, it will be ignored, please remove it to avoid potential confusion
NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
mediacms-celery_beat-1 mediacms/mediacms:latest "./deploy/docker/ent…" celery_beat 30 hours ago Up 9 hours 80/tcp, 9000/tcp
mediacms-celery_worker-1 mediacms/mediacms:latest "./deploy/docker/ent…" celery_worker 30 hours ago Up 9 hours 80/tcp, 9000/tcp
mediacms-db-1 postgres:17.2-alpine "docker-entrypoint.s…" db 39 hours ago Up 9 hours (healthy) 5432/tcp
mediacms-redis-1 redis:alpine "docker-entrypoint.s…" redis 39 hours ago Up 9 hours (healthy) 6379/tcp
mediacms-web-1 mediacms/mediacms:latest "./deploy/docker/ent…" web 30 hours ago Up 9 hours 9000/tcp, 0.0.0.0:12380->80/tcp, [::]:12380->80/tcp
I've eventually managed to reproduce the problem. Today I tried to follow the hints in this post: #74 (comment), and once it started trying to encode the video again, it started spamming the connection errors again:
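In case it helps anyone else hitting this while debugging: as an interim workaround (not a fix for whatever is leaking the connections), the Postgres limit can be raised above the default of 100 and the connection count watched while an encode runs. This is only a sketch; the mediacms user/database name is an assumption based on the stock compose file:

# Raise the limit (takes effect after the db container restarts)
docker compose exec db psql -U mediacms -d mediacms -c "ALTER SYSTEM SET max_connections = 300;"
docker compose restart db

# Re-trigger the encode and watch whether the count climbs toward the limit
watch -n 5 'docker compose exec db psql -U mediacms -d mediacms -t -c "SELECT count(*) FROM pg_stat_activity;"'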
Describe the issue
I installed MediaCMS yesterday evening and uploaded a couple of videos. This morning the web, celery_worker and celery_beat services were already offline. The logs showed nothing fancy (also because docker compose cycled the logs after the restart), but I noticed the "FATAL: sorry, too many clients already" error spamming the db log for roughly 5 hours and 20 minutes before the database shut down (and, I think, let the above services crash).
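For what it's worth, a rough way to quantify the spam window next time, before the logs rotate away (assuming the containers are only restarted, not recreated, so docker compose logs still has the history):

docker compose logs --timestamps db | grep -c "too many clients already"
docker compose logs --timestamps db | grep "too many clients already" | head -n 1
docker compose logs --timestamps db | grep "too many clients already" | tail -n 1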
To Reproduce
Steps to reproduce the issue:
Expected behavior
No crash.
Screenshots
N/A
Environment (please complete the following information):
Additional context
Last lines of the spam were: