Description
Hi,
Great chart! I have everything working in the UI, but I noticed scavenging may be hitting the wrong port.
The scheduled job was failing, so I ran `kubectl exec` into the pod manually, and I noticed that the port it tries to use is not listening:
```
root@eventstore-0:/# lsof -i
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
eventstor 1 root 17u IPv4 128987236 0t0 TCP eventstore-0.eventstore.eventstore.svc.cluster.local:1113 (LISTEN)
eventstor 1 root 20u IPv4 128986353 0t0 TCP *:2113 (LISTEN)
eventstor 1 root 21u IPv4 129025536 0t0 TCP eventstore-0.eventstore.eventstore.svc.cluster.local:2113->10-42-1-237.eventstore-admin.eventstore.svc.cluster.local:52412 (CLOSE_WAIT)
```
Port 2113 is listening, but not 2112, which is what the scavenge job tried to use.
If I run the curl command manually inside the pod against port 2113, it works.
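For reference, the manual call that works is roughly the same command the scavenge job runs (see the ConfigMap template further down), but pointed at port 2113. This is a sketch; `admin:changeit` is the chart's default credential, and it assumes the pod name `eventstore-0` from the `lsof` output above:

```shell
# Reproduce the scavenge job's request by hand, but against the
# external HTTP port (2113) instead of 2112 -- this one succeeds.
kubectl exec eventstore-0 -- \
  curl -si -d {} -X POST \
  -u 'admin:changeit' \
  "http://localhost:2113/admin/scavenge"
```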
These are my chart values:

```yaml
scavenging:
  enabled: true
  image: lachlanevenson/k8s-kubectl
  imageTag: latest
  schedule: 0 2 * * *
```
I haven't changed any other settings, so I assume it's using these defaults:

```yaml
## Internal HTTP
port.intHttpPort: 2112
#### External HTTP
port.extHttpPort: 2113
```
Any idea what I should be doing? Should I override `intHttpPort` and set it to 2113? Isn't `intHttpPort` meant for the gossip between cluster nodes?
I do notice the ConfigMap uses `intHttpPort`:
```shell
kubectl exec ${podname##pod/} -- \
  curl \
  -si \
  -d {} \
  -X POST \
  -u 'admin:{{ .Values.admin.password | default "changeit" }}' \
  "http://localhost:{{ .Values.intHttpPort }}/admin/scavenge"
```
Any ideas? Thanks in advance.