No, one or more services are failed (please provide detail below)
Salt Status
No, there are no failures
Logs
Yes, there are additional clues in /opt/so/log/ (please provide detail below)
Detail
Running a managersearch + forward nodes + IDH.
I have the same problem as explained in discussions 15224 and 15341. Although my Fleet server is down and does not start properly, highstate reports that it started the so-elastic-fleet service; it fails immediately with the following errors:
# grep ^{ /opt/so/log/elasticfleet/elastic-agent-startup.log | grep Error | jq .message
...
"Error - failed version compatibility check with elasticsearch: x509: certificate has expired or is not yet valid: current time 2026-01-13T07:42:17Z is after 2025-12-20T13:51:22Z"
"Error - failed version compatibility check with elasticsearch: x509: certificate has expired or is not yet valid: current time 2026-01-13T07:42:17Z is after 2025-12-20T13:51:22Z"
"Unit state changed fleet-server-default (STARTING->FAILED): Error - failed version compatibility check with elasticsearch: x509: certificate has expired or is not yet valid: current time 2026-01-13T07:42:17Z is after 2025-12-20T13:51:22Z"
"Unit state changed fleet-server-default-fleet-server (STARTING->FAILED): Error - failed version compatibility check with elasticsearch: x509: certificate has expired or is not yet valid: current time 2026-01-13T07:42:17Z is after 2025-12-20T13:51:22Z"
"Unit state changed fleet-server-default (STARTING->FAILED): Error - failed version compatibility check with elasticsearch: x509: certificate has expired or is not yet valid: current time 2026-01-13T07:42:17Z is after 2025-12-20T13:51:22Z"
"Unit state changed fleet-server-default-fleet-server (STARTING->FAILED): Error - failed version compatibility check with elasticsearch: x509: certificate has expired or is not yet valid: current time 2026-01-13T07:42:17Z is after 2025-12-20T13:51:22Z"
...
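The notAfter in those messages (2025-12-20) belongs to one specific certificate, and one way to find which one is to check what each TLS endpoint actually presents. This is only a sketch: the s_client target is an assumption, and the expiry test itself is illustrated with a throwaway self-signed cert.

```shell
# Sketch: find the certificate carrying the 2025-12-20 notAfter date.
# Against a live node you would inspect what the endpoint presents, e.g.
# (host/port are assumptions, not from this report's configuration):
#   openssl s_client -connect <manager>:9200 </dev/null 2>/dev/null \
#     | openssl x509 -noout -enddate
# For illustration, generate a throwaway self-signed cert and test it:
openssl req -x509 -newkey rsa:2048 -keyout /tmp/t.key -out /tmp/t.crt \
  -days 1 -nodes -subj "/CN=expiry-test" 2>/dev/null

# -checkend N exits 0 if the cert is still valid N seconds from now
if openssl x509 -in /tmp/t.crt -noout -checkend 0 >/dev/null; then
  echo "cert still valid"
else
  echo "cert expired"
fi
# → cert still valid
```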
I have not received any ingested logs since Dec 20, 2025 @ 15:06:31.051: no endpoint logs and no Zeek logs.
so-status reports so-elastic-fleet as missing.
(As an aside, so-elastic-fleet-start, which calls so-start elastic-fleet $1, has a bug. Its case branch "elastic-fleet") if docker ps | grep -q so-$1; then printf "\n$1 is already running!\n\n"; else docker rm so-$1 >/dev/null 2>&1 ; salt-call state.apply elasticfleet queue=True; fi ;; uses an unanchored grep, so "so-$1" also matches so-elastic-fleet-package-registry, and so-elastic-fleet is reported as up even though it is not.)
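The unanchored grep can be reproduced without Docker. A minimal sketch (the container name is taken from this report; docker ps output is simulated with echo) showing why anchoring the match would avoid the false positive:

```shell
# Hypothetical reproduction of the loose match in the case branch above.
# Simulated "docker ps --format '{{.Names}}'" output from this report:
names="so-elastic-fleet-package-registry"

# Loose match (current behavior): the package-registry container matches,
# so so-elastic-fleet is wrongly reported as already running.
if echo "$names" | grep -q "so-elastic-fleet"; then
  echo "loose: match"
fi

# Anchored match (-x requires the whole line to equal the container name):
if echo "$names" | grep -qx "so-elastic-fleet"; then
  echo "anchored: match"
else
  echo "anchored: no match"
fi
# → loose: match
# → anchored: no match
```

A real fix would also need `docker ps --format '{{.Names}}'` rather than the full table, so only names are compared.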
# sudo elastic-agent status
┌─ fleet
│ └─ status: (FAILED) fail to checkin to fleet-server: all hosts failed: requester 0/2 to host https://<ip>:8220/ errored: Post "https://<ip>:8220/api/fleet/agents/71735e7c-c305-4a35-8947-427c4361a2ba/checkin?": dial tcp <ip>:8220: connect: no route to host
│ requester 1/2 to host https://<hostname>:8220/ errored: Post "https://<hostname>:8220/api/fleet/agents/71735e7c-c305-4a35-8947-427c4361a2ba/checkin?": dial tcp [::1]:8220: connect: network is unreachable
└─ elastic-agent
└─ status: (HEALTHY) Running
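Given the "no route to host" on 8220, a quick TCP probe separates a down Fleet server from a firewall or routing problem. Host and port here are assumptions taken from the status output above; substitute the real manager address:

```shell
# Probe the Fleet port from the agent's point of view.
# "no route to host" usually means firewall/routing; "connection refused"
# means the host is reachable but nothing is listening (container down).
host=127.0.0.1; port=8220   # assumed values; use the manager IP on a real node
if timeout 3 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
  echo "port $port reachable"
else
  echo "port $port NOT reachable"
fi
```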
I have modified my Logstash output, changing the Client SSL certificate and Client SSL certificate key to match the /etc/pki/elasticfleet-logstash.crt and .p8 files, but no luck: it is still not starting.
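One thing worth ruling out after that change is a cert/key mismatch on the Logstash output. The check can be sketched with a throwaway pair; on a real node you would substitute /etc/pki/elasticfleet-logstash.crt and its .p8 key (openssl pkey reads PKCS#8):

```shell
# Sketch: confirm a client cert and private key are actually a pair by
# comparing the public key each one carries. Throwaway pair for illustration.
openssl req -x509 -newkey rsa:2048 -keyout /tmp/lk.key -out /tmp/lk.crt \
  -days 30 -nodes -subj "/CN=pair-test" 2>/dev/null

c=$(openssl x509 -in /tmp/lk.crt -noout -pubkey | openssl sha256)
k=$(openssl pkey -in /tmp/lk.key -pubout | openssl sha256)
[ "$c" = "$k" ] && echo "cert/key match" || echo "cert/key MISMATCH"
# → cert/key match
```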
sudo salt -C 'G@role:so-manager or G@role:so-managersearch or G@role:so-searchnode or G@role:so-receiver' cmd.run 'for f in /etc/pki/*.crt; do echo "$f"; openssl x509 -in "$f" -noout -enddate; done'
It reports no expired certs; the next to expire is elasticfleet-kafka, in October 2026.
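Since nothing under /etc/pki matches the 2025-12-20 notAfter from the agent log, the sweep could be widened. A sketch (the extra directories are assumptions; adjust them to wherever your deployment keeps agent and Fleet certs):

```shell
# scan_certs DIR...: print each .crt under the given directories with its
# notAfter date, so an expired cert outside /etc/pki can be spotted.
scan_certs() {
  for d in "$@"; do
    [ -d "$d" ] || continue
    find "$d" -name '*.crt' 2>/dev/null | while read -r f; do
      printf '%s  ' "$f"
      openssl x509 -in "$f" -noout -enddate 2>/dev/null || echo "(unreadable)"
    done
  done
}

# On a node you might run (directories beyond /etc/pki are assumptions):
#   scan_certs /etc/pki /opt/so/conf /opt/so/saltstack
```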
# so-elasticsearch-troubleshoot
================ Elasticsearch Status ================
Shards Capacity: Cluster is close to reaching the configured maximum number of shards for data nodes.
Cause: Elasticsearch is about to reach the maximum number of shards it can host as set by [cluster.max_shards_per_node].
Action: Increase the number of nodes in your cluster or remove some non-frozen indices to reduce the number of shards in the cluster.
================ Disk Usage Check ================
LOW:80% HIGH:85% FLOOD:90%
<hostname> disk usage: 68.04%
================ Unassigned Shards Check ================
All shards are assigned
so-elasticsearch-troubleshoot reports no fatal problems.
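To see how close the cluster actually is to the shard ceiling that warning mentions, the numbers from _cluster/health can be compared against cluster.max_shards_per_node (default 1000). The JSON below is a stand-in for real output; on a node you would feed in something like `curl -s -k https://localhost:9200/_cluster/health` instead:

```shell
# Sketch: compute shard headroom. Sample values are invented for
# illustration; replace $health with live _cluster/health output.
health='{"active_shards": 2850, "number_of_data_nodes": 3}'
max_per_node=1000   # Elasticsearch default for cluster.max_shards_per_node

shards=$(echo "$health" | jq .active_shards)
nodes=$(echo "$health" | jq .number_of_data_nodes)
echo "capacity: $((nodes * max_per_node)), in use: $shards, free: $((nodes * max_per_node - shards))"
# → capacity: 3000, in use: 2850, free: 150
```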
so-logstash-health is not available in 2.4.180, but checking the command in git from the latest release gives a curl command that I ran. The Logstash logfile is being filled with log entries similar to those in the other two discussion threads.
Version
2.4.180
Installation Method
Security Onion ISO image
Description
other (please provide detail below)
Installation Type
other (please provide detail below)
Location
on-prem with Internet access
Hardware Specs
Exceeds minimum requirements
CPU
20+
RAM
192GB+
Storage for /
1TB+
Storage for /nsm
8TB+
Network Traffic Collection
span port
Network Traffic Speeds
Less than 1Gbps