seqr container in CrashLoopBackOff — "Failed to connect to localhost port 8000: Connection refused" #159

@ysayyed11

Description


Describe the issue

We deployed seqr via Helm (seqr-platform chart) on a local kind cluster, and the seqr pod keeps going into CrashLoopBackOff.
The startup probe repeatedly fails with the message:
curl: (7) Failed to connect to localhost port 8000: Connection refused

The deployment uses NFS-backed storage for its persistent volumes.
Since the seqr Helm chart expects data under /var/seqr, we created a symlink pointing that path at the mounted NFS share.
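
For reference, a minimal sketch of how to verify the mount and symlink from inside the failing pods (pod and container names are taken from the describe output below; the node-side commands assume shell access to the kind node):

```shell
# Confirm /var/seqr resolves and is writable by the container user.
kubectl exec seqr-postgresql-0 -c postgresql -- ls -ld /var/seqr /var/seqr/postgresql-data
kubectl exec seqr-postgresql-0 -c postgresql -- id

# On the node hosting the volume, confirm the symlink target and that the
# NFS share is actually mounted (paths are illustrative for our setup):
# ls -l /var/seqr
# mount | grep nfs
```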

Pod description

kubectl describe pod seqr-postgresql-0
Name: seqr-postgresql-0
Namespace: default
Priority: 0
Service Account: seqr-postgresql
Node: kind-control-plane/172.18.0.2
Start Time: Mon, 20 Oct 2025 12:53:08 +0300
Labels: app.kubernetes.io/component=primary
app.kubernetes.io/instance=sidra-seqr
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=postgresql
app.kubernetes.io/version=16.4.0
apps.kubernetes.io/pod-index=0
controller-revision-hash=seqr-postgresql-65868cbcf9
helm.sh/chart=postgresql-15.5.31
statefulset.kubernetes.io/pod-name=seqr-postgresql-0
Annotations: <none>
Status: Running
IP: 10.244.0.182
IPs:
IP: 10.244.0.182
Controlled By: StatefulSet/seqr-postgresql
Init Containers:
seqr-postgresql-init-chmod-data:
Container ID: containerd://cfb26f6cbe6fa3c1c7bdb13e8f5e9bc13f842f55f19388aae9bb84124fe25445
Image: bitnamilegacy/os-shell
Image ID: docker.io/bitnamilegacy/os-shell@sha256:8f020b42160f0a0b66d8d3f2fdc80a27563b585021267dd868263704aef2dfeb
Port: <none>
Host Port: <none>
SeccompProfile: RuntimeDefault
Command:
/bin/sh
-ec
mkdir -p /var/seqr/postgresql-data
chown -R $(id -u):$(id -G | cut -d " " -f2) /var/seqr/postgresql-data

State:          Terminated
  Reason:       Completed
  Exit Code:    0
  Started:      Mon, 20 Oct 2025 12:57:16 +0300
  Finished:     Mon, 20 Oct 2025 12:57:18 +0300
Ready:          True
Restart Count:  0
Environment:    <none>
Mounts:
  /var/seqr from data (rw)

Containers:
postgresql:
Container ID: containerd://806dba876fea271523d3587a000f1174e626413094e0ab99dc51dae9ffee8db1
Image: docker.io/bitnamilegacy/postgresql:12.19.0-debian-12-r9
Image ID: docker.io/bitnamilegacy/postgresql@sha256:dc2f9d41425f0a990fef9979f460f3f589cca48a21b53914663d4912d13dd9e4
Port: 5432/TCP
Host Port: 0/TCP
SeccompProfile: RuntimeDefault
State: Running
Started: Mon, 20 Oct 2025 13:24:05 +0300
Last State: Terminated
Reason: Error
Exit Code: 137
Started: Mon, 20 Oct 2025 13:20:45 +0300
Finished: Mon, 20 Oct 2025 13:24:05 +0300
Ready: False
Restart Count: 8
Limits:
memory: 2Gi
Requests:
memory: 2Gi
Liveness: exec [/bin/sh -c exec pg_isready -U "postgres" -h 127.0.0.1 -p 5432] delay=30s timeout=5s period=10s #success=1 #failure=6
Readiness: exec [/bin/sh -c -e exec pg_isready -U "postgres" -h 127.0.0.1 -p 5432
] delay=5s timeout=5s period=10s #success=1 #failure=6
Startup: exec [/bin/sh -c exec pg_isready -U "postgres" -h 127.0.0.1 -p 5432] delay=30s timeout=1s period=10s #success=1 #failure=15
Environment:
BITNAMI_DEBUG: false
POSTGRESQL_PORT_NUMBER: 5432
POSTGRESQL_VOLUME_DIR: /var/seqr
PGDATA: /var/seqr/postgresql-data
POSTGRES_PASSWORD: <set to the key 'password' in secret 'postgres-secrets'> Optional: false
POSTGRESQL_ENABLE_LDAP: no
POSTGRESQL_ENABLE_TLS: no
POSTGRESQL_LOG_HOSTNAME: false
POSTGRESQL_LOG_CONNECTIONS: false
POSTGRESQL_LOG_DISCONNECTIONS: false
POSTGRESQL_PGAUDIT_LOG_CATALOG: off
POSTGRESQL_CLIENT_MIN_MESSAGES: error
POSTGRESQL_SHARED_PRELOAD_LIBRARIES: pgaudit
Mounts:
/dev/shm from dshm (rw)
/docker-entrypoint-initdb.d/ from custom-init-scripts (rw)
/opt/bitnami/postgresql/conf from empty-dir (rw,path="app-conf-dir")
/opt/bitnami/postgresql/tmp from empty-dir (rw,path="app-tmp-dir")
/tmp from empty-dir (rw,path="tmp-dir")
/var/seqr from data (rw)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
empty-dir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit:
custom-init-scripts:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: seqr-postgresql-init-scripts
Optional: false
dshm:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium: Memory
SizeLimit:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: seqr-platform-pvc
ReadOnly: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason            Age     From               Message
----     ------            ----    ----               -------
Warning FailedScheduling 34m default-scheduler 0/1 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
Normal Scheduled 34m default-scheduler Successfully assigned default/seqr-postgresql-0 to kind-control-plane
Normal Pulled 30m kubelet Container image "bitnamilegacy/os-shell" already present on machine
Normal Created 30m kubelet Created container: seqr-postgresql-init-chmod-data
Normal Started 30m kubelet Started container seqr-postgresql-init-chmod-data
Normal Started 13m (x6 over 30m) kubelet Started container postgresql
Normal Pulled 10m (x7 over 30m) kubelet Container image "docker.io/bitnamilegacy/postgresql:12.19.0-debian-12-r9" already present on machine
Normal Created 10m (x7 over 30m) kubelet Created container: postgresql
Warning Unhealthy 4m20s (x117 over 29m) kubelet Startup probe failed: 127.0.0.1:5432 - rejecting connections
Normal Killing 3m50s (x8 over 27m) kubelet Container postgresql failed startup probe, will be restarted
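
The startup probe here is pg_isready, and "rejecting connections" means the server process is up but not yet accepting connections (typically still in startup or crash recovery, which can be slow on NFS-backed storage). A diagnostic sketch to run the same check by hand and inspect the server log (container name from the output above):

```shell
# Run the exact check the startup probe runs:
kubectl exec seqr-postgresql-0 -c postgresql -- pg_isready -U postgres -h 127.0.0.1 -p 5432

# Inspect the PostgreSQL log from the previous (killed) container instance
# to see why startup did not complete before the probe's failure budget:
kubectl logs seqr-postgresql-0 -c postgresql --previous
```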
[root@seqr-prod seqr]# kubectl describe pod seqr-b45cbdfc8-6phmg
Name: seqr-b45cbdfc8-6phmg
Namespace: default
Priority: 0
Service Account: seqr
Node: kind-control-plane/172.18.0.2
Start Time: Mon, 20 Oct 2025 12:53:08 +0300
Labels: app.kubernetes.io/instance=sidra-seqr
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=seqr
app.kubernetes.io/part-of=seqr-platform
app.kubernetes.io/version=b6fdd9c208f50f2102c1b650db6c41b002fbdaf5
helm.sh/chart=seqr-3.11.0
name=seqr
pod-template-hash=b45cbdfc8
Annotations: checksum/config: db35e292491b1d03ff14849eed9fef5e58a0f9e55697ac0fe0a5fa936e100d67
Status: Running
IP: 10.244.0.179
IPs:
IP: 10.244.0.179
Controlled By: ReplicaSet/seqr-b45cbdfc8
Init Containers:
mkdir-loading-datasets:
Container ID: containerd://8215abfddb50a240debddf70a2d5be9933a7be4170ab70f8fd0ca6817f2eb31d
Image: busybox:1.35
Image ID: docker.io/library/busybox@sha256:98ad9d1a2be345201bb0709b0d38655eb1b370145c7d94ca1fe9c421f76e245a
Port: <none>
Host Port: <none>
Command:
/bin/mkdir
-p
/var/seqr/seqr-loading-temp
State: Terminated
Reason: Completed
Exit Code: 0
Started: Mon, 20 Oct 2025 12:57:22 +0300
Finished: Mon, 20 Oct 2025 12:57:22 +0300
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ltgxb (ro)
/var/seqr from seqr-datasets (rw)
Containers:
seqr:
Container ID: containerd://4438717914c395e607c16e3997e5da830d0ca180e3d298200870d31ef051cd9d
Image: gcr.io/seqr-project/seqr:b6fdd9c208f50f2102c1b650db6c41b002fbdaf5
Image ID: gcr.io/seqr-project/seqr@sha256:12bbacffcf0f9671e20dc81c4bc6a9c9e15b609b63075f126e63e56770d34503
Port: 8000/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Mon, 20 Oct 2025 13:24:02 +0300
Finished: Mon, 20 Oct 2025 13:26:00 +0300
Ready: False
Restart Count: 8
Liveness: exec [/bin/bash -c /readiness_probe] delay=0s timeout=1s period=10s #success=1 #failure=3
Readiness: exec [/bin/bash -c /readiness_probe] delay=0s timeout=1s period=10s #success=1 #failure=3
Startup: exec [/bin/bash -c /readiness_probe] delay=0s timeout=1s period=15s #success=1 #failure=60
Environment Variables from:
seqr ConfigMap Optional: false
Environment:
POSTGRES_PASSWORD: <set to the key 'password' in secret 'postgres-secrets'> Optional: false
DJANGO_KEY: <set to the key 'django_key' in secret 'seqr-secrets'> Optional: false
CLICKHOUSE_READER_PASSWORD: <set to the key 'reader_password' in secret 'clickhouse-secrets'> Optional: true
CLICKHOUSE_WRITER_PASSWORD: <set to the key 'writer_password' in secret 'clickhouse-secrets'> Optional: true
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ltgxb (ro)
/var/seqr from seqr-datasets (rw)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
seqr-datasets:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: seqr-platform-pvc
ReadOnly: false
kube-api-access-ltgxb:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason            Age     From               Message
----     ------            ----    ----               -------
Normal Scheduled 34m default-scheduler Successfully assigned default/seqr-b45cbdfc8-6phmg to kind-control-plane
Warning FailedScheduling 34m default-scheduler 0/1 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
Normal Pulling 30m kubelet Pulling image "busybox:1.35"
Normal Pulled 30m kubelet Successfully pulled image "busybox:1.35" in 2.176s (5.713s including waiting). Image size: 2159953 bytes.
Normal Created 30m kubelet Created container: mkdir-loading-datasets
Normal Started 30m kubelet Started container mkdir-loading-datasets
Normal Pulled 30m kubelet Successfully pulled image "gcr.io/seqr-project/seqr:b6fdd9c208f50f2102c1b650db6c41b002fbdaf5" in 1.611s (4.349s including waiting). Image size: 417115309 bytes.
Normal Pulled 28m kubelet Successfully pulled image "gcr.io/seqr-project/seqr:b6fdd9c208f50f2102c1b650db6c41b002fbdaf5" in 1.509s (1.509s including waiting). Image size: 417115309 bytes.
Normal Pulled 25m kubelet Successfully pulled image "gcr.io/seqr-project/seqr:b6fdd9c208f50f2102c1b650db6c41b002fbdaf5" in 1.579s (1.579s including waiting). Image size: 417115309 bytes.
Normal Pulled 23m kubelet Successfully pulled image "gcr.io/seqr-project/seqr:b6fdd9c208f50f2102c1b650db6c41b002fbdaf5" in 1.745s (1.745s including waiting). Image size: 417115309 bytes.
Normal Pulled 20m kubelet Successfully pulled image "gcr.io/seqr-project/seqr:b6fdd9c208f50f2102c1b650db6c41b002fbdaf5" in 1.486s (1.486s including waiting). Image size: 417115309 bytes.
Normal Started 20m (x5 over 30m) kubelet Started container seqr
Normal Created 17m (x6 over 30m) kubelet Created container: seqr
Normal Pulled 17m kubelet Successfully pulled image "gcr.io/seqr-project/seqr:b6fdd9c208f50f2102c1b650db6c41b002fbdaf5" in 1.52s (1.52s including waiting). Image size: 417115309 bytes.
Warning BackOff 9m48s (x38 over 26m) kubelet Back-off restarting failed container seqr in pod seqr-b45cbdfc8-6phmg_default(04bd3075-e82d-4996-a412-2befd7f27c9a)
Normal Pulling 5m13s (x8 over 30m) kubelet Pulling image "gcr.io/seqr-project/seqr:b6fdd9c208f50f2102c1b650db6c41b002fbdaf5"
Normal Pulled 5m12s kubelet Successfully pulled image "gcr.io/seqr-project/seqr:b6fdd9c208f50f2102c1b650db6c41b002fbdaf5" in 1.881s (1.881s including waiting). Image size: 417115309 bytes.
Warning Unhealthy 4m46s (x57 over 30m) kubelet Startup probe failed: (curl progress meter omitted)
curl: (7) Failed to connect to localhost port 8000: Connection refused
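
The seqr probe failure looks like a symptom rather than the root cause: the container exits with code 1 before anything binds port 8000, plausibly because PostgreSQL never becomes ready (the database dependency is our assumption). A sketch of the checks we used to confirm (pod name from the output above):

```shell
# Why did the seqr container exit? (Last State shows Exit Code 1)
kubectl logs seqr-b45cbdfc8-6phmg --previous

# While the container is running, is anything listening on 8000 yet?
kubectl exec seqr-b45cbdfc8-6phmg -- curl -sf http://localhost:8000 \
  || echo "nothing listening on 8000 yet"
```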
