Known issues are significant defects or limitations that may impact your implementation.

% Workaround description.
% :::

:::{dropdown} Elastic Agent becomes unhealthy with a host URL parsing error related to the Prometheus collector metricset
**Applies to: {{agent}} 9.2.1**
On November 13, 2025, a known issue was discovered that causes {{agent}} to become unhealthy with the error `host parsing failed for prometheus-collector: error parsing URL: parse "http://localhost:EDOT_COLLECTOR_METRICS_PORT": invalid port ":EDOT_COLLECTOR_METRICS_PORT" after host`.
This problem has no effect on the operation of {{agent}} other than incorrectly marking it as unhealthy. The affected `prometheus/metrics` input is incorrectly created when certain output types (Logstash, Kafka) or output parameters (for example, `loadbalance`) are used.
For more information, check [#11169](https://github.com/elastic/elastic-agent/issues/11169).
**Workaround**
To work around this issue, set the **Monitoring runtime** advanced policy setting in {{fleet}} to the **Process** runtime. This is the runtime mode already in use when the problem occurs. The same can be done for a standalone agent by setting `agent.monitoring._runtime_experimental: process` in its `elastic-agent.yml` file:
```yaml
agent.monitoring:
  _runtime_experimental: process
```
For more details, check [the comments](https://github.com/elastic/elastic-agent/issues/11169#issuecomment-3553232394) in the related issue.
The fix will be included in version 9.2.2.
:::
:::{dropdown} Failed upgrades leave {{agent}} stuck until restart
**Applies to: {{agent}} 8.18.7, 9.0.7**
On September 17, 2025, a known issue was discovered that can cause {{agent}} upgrades to get stuck if an upgrade attempt fails under specific conditions. This happens because the coordinator’s `overrideState` remains set, leaving the agent in a state that appears to be upgrading.
**Symptoms**
- {{fleet}} shows the upgrade action in progress, even though the upgrade remains stuck
- No further upgrade attempts succeed
- Elastic Agent status shows an override state indicating upgrade