docs/sources/monitor/monitor-linux.md
The Linux operating system generates a wide range of metrics and logs that you can use to monitor the health and performance of your hardware and operating system.
With {{< param "PRODUCT_NAME" >}}, you can collect your metrics and logs, forward them to a Grafana stack, and create dashboards to monitor your Linux servers.
This scenario demonstrates how to use {{< param "PRODUCT_NAME" >}} to monitor Linux system metrics and logs using a complete example configuration.
You'll deploy a containerized monitoring stack that includes {{< param "PRODUCT_NAME" >}}, Prometheus, Loki, and Grafana.
The [`alloy-scenarios`][scenarios] repository contains complete examples of {{< param "PRODUCT_NAME" >}} deployments.
Clone the repository and use the examples to understand how {{< param "PRODUCT_NAME" >}} collects, processes, and exports telemetry signals.
* [Docker](https://www.docker.com/) and Docker Compose installed
* [Git](https://git-scm.com/) for cloning the repository
* A Linux host or Linux running in a virtual machine
* Administrator privileges to run Docker commands
* Available ports: 3000 (Grafana), 9090 (Prometheus), 3100 (Loki), and 12345 ({{< param "PRODUCT_NAME" >}} UI)
## Clone and deploy the scenario
This scenario runs {{< param "PRODUCT_NAME" >}} in a container alongside Grafana, Prometheus, and Loki, creating a self-contained monitoring stack.
The {{< param "PRODUCT_NAME" >}} container acts as a demonstration system to show monitoring capabilities.
Follow these steps to clone the repository and deploy the monitoring scenario:
1. Clone the {{< param "PRODUCT_NAME" >}} scenarios repository:
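   The exact commands for this step aren't shown in this excerpt. A typical sequence, assuming the [`alloy-scenarios`][scenarios] repository is hosted at `github.com/grafana/alloy-scenarios` and the Linux example lives in its `linux` directory, is:

   ```shell
   # Clone the scenarios repository (URL assumed from the [scenarios] link).
   git clone https://github.com/grafana/alloy-scenarios.git

   # Change to the Linux scenario directory (path taken from the
   # configuration section below, which references alloy-scenarios/linux/).
   cd alloy-scenarios/linux

   # Start the monitoring stack in the background.
   docker compose up -d
   ```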
   ```shell
   docker ps
   ```
1. (Optional) Stop Docker to shut down the Grafana stack when you finish exploring this scenario:
   ```shell
   docker compose down
   ```
## Monitor and visualize your data
After deploying the monitoring stack, you can use the {{< param "PRODUCT_NAME" >}} UI to monitor deployment health and Grafana to visualize your collected data.
### Monitor {{% param "PRODUCT_NAME" %}}
1. Open your browser and go to [http://localhost:3000/dashboards](http://localhost:3000/dashboards).
1. Download the JSON file for the preconfigured [Linux node dashboard](https://grafana.com/api/dashboards/1860/revisions/37/download).
1. Go to **Dashboards** > **Import**.
1. Upload the JSON file.
1. Select the Prometheus data source and click **Import**.
This community dashboard provides comprehensive system metrics including CPU, memory, disk, and network usage.
## Understand the {{% param "PRODUCT_NAME" %}} configuration
This scenario uses a `config.alloy` file to configure {{< param "PRODUCT_NAME" >}} components for metrics and logging.
You can find this file in the cloned repository at `alloy-scenarios/linux/`.
The configuration demonstrates how to collect Linux system metrics and logs, then forward them to Prometheus and Loki for storage and visualization.
### Configure metrics
The metrics configuration in this scenario requires four components that work together to collect, process, and forward system metrics.
The components are configured in this order to create a data pipeline:
* `prometheus.exporter.unix` - collects system metrics
* `discovery.relabel` - adds standard labels to metrics
* `prometheus.scrape` - scrapes metrics from the exporter
* `prometheus.remote_write` - sends metrics to Prometheus for storage
#### `prometheus.exporter.unix`
The [`prometheus.exporter.unix`][prometheus.exporter.unix] component exposes hardware and Linux kernel metrics.
This component is the primary data source that collects system performance metrics from your Linux server.
The component configuration includes several important sections:
* `disable_collectors`: Disables specific collectors to reduce unnecessary overhead
* `enable_collectors`: Enables the `meminfo` collector for memory information
* `fs_types_exclude`: A regular expression of filesystem types to ignore
* `mount_points_exclude`: A regular expression of mount points to ignore
* `mount_timeout`: How long to wait for a mount to respond before marking it as stale
* `ignored_devices`: A regular expression of virtual and container network interfaces to ignore
* `device_exclude`: A regular expression of virtual and container network interfaces to exclude
This component provides the `prometheus.exporter.unix.integrations_node_exporter.targets` output that feeds into the `discovery.relabel` component.
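As a hedged sketch, the component might look like the following. The collector names and exclusion patterns shown here are illustrative, not necessarily the scenario's exact values:

```alloy
prometheus.exporter.unix "integrations_node_exporter" {
  // Illustrative collector lists; the scenario's config may differ.
  disable_collectors = ["ipvs", "btrfs", "infiniband", "xfs", "zfs"]
  enable_collectors  = ["meminfo"]

  filesystem {
    // Example regular expressions only.
    fs_types_exclude     = "^(autofs|proc|sysfs|tmpfs|overlay)$"
    mount_points_exclude = "^/(dev|proc|run|sys|var/lib/docker/.+)($|/)"
    mount_timeout        = "5s"
  }

  netclass {
    ignored_devices = "^(veth.*|cali.*|[a-f0-9]{15})$"
  }

  netdev {
    device_exclude = "^(veth.*|cali.*|[a-f0-9]{15})$"
  }
}
```

The `filesystem`, `netclass`, and `netdev` blocks are where the exclusion arguments described above live in the component's schema.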
#### `discovery.relabel` instance and job labels
The first [`discovery.relabel`][discovery.relabel] component in this configuration replaces the instance and job labels from the `node_exporter` with standardized values.
This ensures consistent labeling across all metrics for easier querying and dashboard creation.
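A minimal sketch of that relabeling follows. The job value is an assumption based on the common node_exporter integration convention:

```alloy
discovery.relabel "integrations_node_exporter" {
  targets = prometheus.exporter.unix.integrations_node_exporter.targets

  // Replace the instance label with the machine's hostname.
  rule {
    target_label = "instance"
    replacement  = constants.hostname
  }

  // Use one standard job name for all node_exporter metrics.
  rule {
    target_label = "job"
    replacement  = "integrations/node_exporter"
  }
}
```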
In this example, this component requires the following arguments:
This component provides the `discovery.relabel.logs_integrations_integrations_node_exporter_journal_scrape.rules` relabeling rules that feed into the `loki.source.journal` component.
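A sketch of that component, assuming it maps standard systemd journal fields to Loki labels (the exact rule set may differ):

```alloy
discovery.relabel "logs_integrations_integrations_node_exporter_journal_scrape" {
  targets = []

  // Promote the systemd unit name to a queryable label.
  rule {
    source_labels = ["__journal__systemd_unit"]
    target_label  = "unit"
  }

  // Keep the journal transport (for example, stdout or syslog) as a label.
  rule {
    source_labels = ["__journal__transport"]
    target_label  = "transport"
  }
}
```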
#### `prometheus.scrape`
The [`prometheus.scrape`][prometheus.scrape] component scrapes `node_exporter` metrics and forwards them to a receiver.
This component consumes the labeled targets from the `discovery.relabel.integrations_node_exporter.output`.
In this example, the component requires the following arguments:
* `targets`: The targets to scrape metrics from. Use the targets with labels from the `discovery.relabel` component.
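Together with a `prometheus.remote_write` destination, the scrape stage can be sketched as follows. The `local` label and the Prometheus URL are assumptions based on this scenario's Compose stack, which exposes Prometheus on port 9090:

```alloy
prometheus.scrape "integrations_node_exporter" {
  // Labeled targets from the discovery.relabel component.
  targets    = discovery.relabel.integrations_node_exporter.output
  forward_to = [prometheus.remote_write.local.receiver]
}

prometheus.remote_write "local" {
  endpoint {
    // URL assumed: the Prometheus container in this scenario's stack.
    url = "http://prometheus:9090/api/v1/write"
  }
}
```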
This component provides the `local.file_match.logs_integrations_integrations_node_exporter_direct_scrape.targets` file list that feeds into the `loki.source.file` component.
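A sketch of such a `local.file_match` component follows; the log paths and labels are illustrative assumptions:

```alloy
local.file_match "logs_integrations_integrations_node_exporter_direct_scrape" {
  path_targets = [{
    __address__ = "localhost",
    // Example paths; the scenario may watch a different set of log files.
    __path__    = "/var/log/{syslog,messages,*.log}",
    instance    = constants.hostname,
    job         = "integrations/node_exporter",
  }]
}
```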
#### `loki.source.file`
The [`loki.source.file`][loki.source.file] component reads log entries from files and forwards them to other Loki components.
This component consumes the file targets from `local.file_match.logs_integrations_integrations_node_exporter_direct_scrape.targets`.
In this example, the component requires the following arguments:
This component provides file-based log entries that feed into the `loki.write.local.receiver` for storage in Loki.
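Based on the target and receiver names mentioned above, a minimal sketch of this component is:

```alloy
loki.source.file "logs_integrations_integrations_node_exporter_direct_scrape" {
  // File targets discovered by local.file_match.
  targets    = local.file_match.logs_integrations_integrations_node_exporter_direct_scrape.targets
  // Send the log entries to the local Loki destination.
  forward_to = [loki.write.local.receiver]
}
```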
#### `loki.write`
The [`loki.write`][loki.write] component writes logs to a Loki destination.
```alloy
loki.write "local" {
  endpoint {
    // URL assumed: the Loki container in this scenario's Compose stack.
    url = "http://loki:3100/loki/api/v1/push"
  }
}
```
This component provides the `loki.write.local.receiver` destination that receives log entries from both `loki.source.journal` and `loki.source.file` components.

### Configure `livedebugging`
`livedebugging` streams real-time data from your components directly to the {{< param "PRODUCT_NAME" >}} UI.
Refer to the [Troubleshooting documentation][troubleshooting] for more details about this feature.
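Enabling it is a one-block addition to the configuration. This matches the standard `livedebugging` syntax; the scenario presumably enables it the same way:

```alloy
// Stream component data to the Alloy UI for live debugging.
livedebugging {
  enabled = true
}
```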