Commit 0be0d8b

Merge branch 'main' into marcsanmi/update-ebpf-profiler-upstream-2026-02-12
2 parents 4423a2c + 6087b4e commit 0be0d8b

2 files changed

Lines changed: 92 additions & 45 deletions

File tree

docs/sources/monitor/monitor-linux.md

Lines changed: 82 additions & 45 deletions
@@ -11,28 +11,32 @@ weight: 225
 The Linux operating system generates a wide range of metrics and logs that you can use to monitor the health and performance of your hardware and operating system.
 With {{< param "PRODUCT_NAME" >}}, you can collect your metrics and logs, forward them to a Grafana stack, and create dashboards to monitor your Linux servers.
 
+This scenario demonstrates how to use {{< param "PRODUCT_NAME" >}} to monitor Linux system metrics and logs using a complete example configuration.
+You'll deploy a containerized monitoring stack that includes {{< param "PRODUCT_NAME" >}}, Prometheus, Loki, and Grafana.
+
 The [`alloy-scenarios`][scenarios] repository contains complete examples of {{< param "PRODUCT_NAME" >}} deployments.
 Clone the repository and use the examples to understand how {{< param "PRODUCT_NAME" >}} collects, processes, and exports telemetry signals.
 
-In this example scenario, {{< param "PRODUCT_NAME" >}} collects Linux metrics and forwards them to a Loki destination.
-
 [scenarios]: https://github.com/grafana/alloy-scenarios/
 
 ## Before you begin
 
-Ensure you have the following:
+Before you begin, ensure you have the following:
+
+* [Docker](https://www.docker.com/) and Docker Compose installed
+* [Git](https://git-scm.com/) for cloning the repository
+* A Linux host or Linux running in a virtual machine
+* Administrator privileges to run Docker commands
+* Available ports: 3000 (Grafana), 9090 (Prometheus), 3100 (Loki), and 12345 ({{< param "PRODUCT_NAME" >}} UI)
 
-* [Docker](https://www.docker.com/)
-* [Git](https://git-scm.com/)
-* A Linux host or Linux running in a Virtual Machine
+## Clone and deploy the scenario
 
-{{< admonition type="note" >}}
-You need administrator privileges to run `docker` commands.
-{{< /admonition >}}
+This scenario runs {{< param "PRODUCT_NAME" >}} in a container alongside Grafana, Prometheus, and Loki, creating a self-contained monitoring stack.
+The {{< param "PRODUCT_NAME" >}} container acts as a demonstration system to show monitoring capabilities.
 
-## Clone and deploy the example
+In a production environment, you would typically install {{< param "PRODUCT_NAME" >}} directly on each Linux server you want to monitor.
 
-Follow these steps to clone the repository and deploy the monitoring example:
+Follow these steps to clone the repository and deploy the monitoring scenario:
 
 1. Clone the {{< param "PRODUCT_NAME" >}} scenarios repository:
 
@@ -53,15 +57,15 @@ Follow these steps to clone the repository and deploy the monitoring example:
 docker ps
 ```
 
-1. (Optional) Stop Docker to shut down the Grafana stack when you finish exploring this example:
+1. (Optional) Stop Docker to shut down the Grafana stack when you finish exploring this scenario:
 
 ```shell
 docker compose down
 ```
 
 ## Monitor and visualize your data
 
-Use Grafana to monitor deployment health and visualize data.
+After deploying the monitoring stack, you can use the {{< param "PRODUCT_NAME" >}} UI to monitor deployment health and Grafana to visualize your collected data.
 
 ### Monitor {{% param "PRODUCT_NAME" %}}
 
@@ -79,40 +83,39 @@ To create a [dashboard](https://grafana.com/docs/grafana/latest/getting-started/
 
 1. Open your browser and go to [http://localhost:3000/dashboards](http://localhost:3000/dashboards).
 1. Download the JSON file for the preconfigured [Linux node dashboard](https://grafana.com/api/dashboards/1860/revisions/37/download).
-1. Go to **Dashboards** > **Import**
+1. Go to **Dashboards** > **Import**.
 1. Upload the JSON file.
-1. Select the Prometheus data source and click **Import**
+1. Select the Prometheus data source and click **Import**.
 
 This community dashboard provides comprehensive system metrics including CPU, memory, disk, and network usage.
 
 ## Understand the {{% param "PRODUCT_NAME" %}} configuration
 
-This example uses a `config.alloy` file to configure {{< param "PRODUCT_NAME" >}} components for metrics and logging.
+This scenario uses a `config.alloy` file to configure {{< param "PRODUCT_NAME" >}} components for metrics and logging.
 You can find this file in the cloned repository at `alloy-scenarios/linux/`.
+The configuration demonstrates how to collect Linux system metrics and logs, then forward them to Prometheus and Loki for storage and visualization.
 
 ### Configure metrics
 
-The metrics configuration in this example requires four components:
+The metrics configuration in this scenario requires four components that work together to collect, process, and forward system metrics.
+The components are configured in this order to create a data pipeline:
 
-* `prometheus.exporter.unix`
-* `discovery.relabel`
-* `prometheus.scrape`
-* `prometheus.remote_write`
+* `prometheus.exporter.unix` - collects system metrics
+* `discovery.relabel` - adds standard labels to metrics
+* `prometheus.scrape` - scrapes metrics from the exporter
+* `prometheus.remote_write` - sends metrics to Prometheus for storage
 
 #### `prometheus.exporter.unix`
 
 The [`prometheus.exporter.unix`][prometheus.exporter.unix] component exposes hardware and Linux kernel metrics.
-This is the primary component that you configure to collect your Linux system metrics.
+This component is the primary data source that collects system performance metrics from your Linux server.
 
-In this example, this component requires the following arguments:
+The component configuration includes several important sections:
 
-* `disable_collectors`: Disable specific collectors to reduce unnecessary overhead.
-* `enable_collectors`: Enable the `meminfo` collector.
-* `fs_types_exclude`: A regular expression of filesystem types to ignore.
-* `mount_points_exclude`: A regular expression of mount types to ignore.
-* `mount_timeout`: How long to wait for a mount to respond before marking it as stale.
-* `ignored_devices`: Regular expression of virtual and container network interfaces to ignore.
-* `device_exclude`: Regular expression of virtual and container network interfaces to exclude.
+* `disable_collectors`: Disables specific collectors to reduce unnecessary overhead
+* `enable_collectors`: Enables the `meminfo` collector for memory information
+* `filesystem`: Configures filesystem monitoring options
+* `netclass` and `netdev`: Configure network interface monitoring
 
 ```alloy
 prometheus.exporter.unix "integrations_node_exporter" {
@@ -135,12 +138,12 @@ prometheus.exporter.unix "integrations_node_exporter" {
 }
 ```
 
-This component provides the `prometheus.exporter.unix.integrations_node_exporter.output` target for `prometheus.scrape`.
+This component provides the `prometheus.exporter.unix.integrations_node_exporter.targets` output that feeds into the `discovery.relabel` component.
 
 #### `discovery.relabel` instance and job labels
 
-There are two `discovery.relabel` components in this configuration.
-This [`discovery.relabel`][discovery.relabel] component replaces the instance and job labels that come in from the `node_exporter` with the hostname of the machine and a standard job name for all metrics.
+The first [`discovery.relabel`][discovery.relabel] component in this configuration replaces the instance and job labels from the `node_exporter` with standardized values.
+This ensures consistent labeling across all metrics for easier querying and dashboard creation.
 
 In this example, this component requires the following arguments:
 
@@ -164,6 +167,8 @@ discovery.relabel "integrations_node_exporter" {
 }
 ```
 
+This component provides the `discovery.relabel.integrations_node_exporter.output` target list that feeds into the `prometheus.scrape` component.
+
 #### `discovery.relabel` for systemd journal logs
 
 This [`discovery.relabel`][discovery.relabel] component defines the relabeling rules for the systemd journal logs.
@@ -200,9 +205,13 @@ discovery.relabel "logs_integrations_integrations_node_exporter_journal_scrape"
 }
 ```
 
+This component provides the `discovery.relabel.logs_integrations_integrations_node_exporter_journal_scrape.rules` relabeling rules that feed into the `loki.source.journal` component.
+
 #### `prometheus.scrape`
 
 The [`prometheus.scrape`][prometheus.scrape] component scrapes `node_exporter` metrics and forwards them to a receiver.
+This component consumes the labeled targets from the `discovery.relabel.integrations_node_exporter.output`.
+
 In this example, the component requires the following arguments:
 
 * `targets`: The target to scrape metrics from. Use the targets with labels from the `discovery.relabel` component.
@@ -217,15 +226,15 @@ prometheus.scrape "integrations_node_exporter" {
 }
 ```
 
+This component provides scraped metrics that feed into the `prometheus.remote_write.local.receiver` for storage in Prometheus.
+
 #### `prometheus.remote_write`
 
 The [`prometheus.remote_write`][prometheus.remote_write] component sends metrics to a Prometheus server.
 In this example, the component requires the following argument:
 
 * `url`: Defines the full URL endpoint to send metrics to.
 
-This component provides the `prometheus.remote_write.local.receiver` destination for `prometheus.scrape`.
-
 ```alloy
 prometheus.remote_write "local" {
 endpoint {
@@ -234,20 +243,24 @@ prometheus.remote_write "local" {
 }
 ```
 
-This component provides the `prometheus.remote_write.local.receiver` destination for `prometheus.scrape`.
+This component provides the `prometheus.remote_write.local.receiver` destination that receives metrics from the `prometheus.scrape` component.
 
 ### Configure logging
 
-The logging configuration in this example requires four components:
+The logging configuration in this scenario collects logs from both systemd journal and standard log files.
+This dual approach ensures comprehensive log coverage for most Linux systems.
+The configuration requires four main components that work together to discover, collect, and forward logs to Loki:
 
-* `loki.source.journal`
-* `local.file_match`
-* `loki.source.file`
-* `loki.write`
+* `loki.source.journal` - collects logs from systemd journal
+* `local.file_match` - discovers standard log files using glob patterns
+* `loki.source.file` - reads logs from discovered files
+* `loki.write` - sends all collected logs to Loki for storage
 
 #### `loki.source.journal`
 
 The [`loki.source.journal`][loki.source.journal] component collects logs from the systemd journal and forwards them to a Loki receiver.
+This component consumes the relabeling rules from `discovery.relabel.logs_integrations_integrations_node_exporter_journal_scrape.rules`.
+
 In this example, the component requires the following arguments:
 
 * `max_age`: Only collect logs from the last 24 hours.
@@ -262,6 +275,8 @@ loki.source.journal "logs_integrations_integrations_node_exporter_journal_scrape
 }
 ```
 
+This component provides systemd journal log entries that feed into the `loki.write.local.receiver` for storage in Loki.
+
 #### `local.file_match`
 
 The [`local.file_match`][local.file_match] component discovers files on the local filesystem using glob patterns.
@@ -284,9 +299,13 @@ local.file_match "logs_integrations_integrations_node_exporter_direct_scrape" {
 }
 ```
 
+This component provides the `local.file_match.logs_integrations_integrations_node_exporter_direct_scrape.targets` file list that feeds into the `loki.source.file` component.
+
 #### `loki.source.file`
 
 The [`loki.source.file`][loki.source.file] component reads log entries from files and forwards them to other Loki components.
+This component consumes the file targets from `local.file_match.logs_integrations_integrations_node_exporter_direct_scrape.targets`.
+
 In this example, the component requires the following arguments:
 
 * `targets`: The list of files to read logs from.
@@ -299,6 +318,8 @@ loki.source.file "logs_integrations_integrations_node_exporter_direct_scrape" {
 }
 ```
 
+This component provides file-based log entries that feed into the `loki.write.local.receiver` for storage in Loki.
+
 #### `loki.write`
 
 The [`loki.write`][loki.write] component writes logs to a Loki destination.
@@ -314,12 +335,12 @@ loki.write "local" {
 }
 ```
 
-### Configure `livedebugging`
+This component provides the `loki.write.local.receiver` destination that receives log entries from both `loki.source.journal` and `loki.source.file` components.
 
-`livedebugging` streams real-time data from your components directly to the {{< param "PRODUCT_NAME" >}} UI.
-Refer to the [Troubleshooting documentation][troubleshooting] for more details about this feature.
+### Configure `livedebugging`
 
-[troubleshooting]: https://grafana.com/docs/alloy/latest/troubleshoot/debug/#live-debugging-page
+The `livedebugging` feature streams real-time data from your components directly to the {{< param "PRODUCT_NAME" >}} UI.
+This capability helps you troubleshoot configuration issues and monitor component behavior in real-time.
 
 #### `livedebugging`
 
@@ -331,6 +352,22 @@ You can use an empty configuration for this block and {{< param "PRODUCT_NAME" >
 livedebugging{}
 ```
 
+For more information about using this feature for troubleshooting, refer to the [Troubleshooting documentation][troubleshooting].
+
+[troubleshooting]: https://grafana.com/docs/alloy/latest/troubleshoot/debug/#live-debugging-page
+
+## Next steps
+
+Now that you've successfully deployed and configured {{< param "PRODUCT_NAME" >}} to monitor Linux systems, you can:
+
+* [Configure {{< param "PRODUCT_NAME" >}} to collect metrics from applications](https://grafana.com/docs/alloy/latest/tutorials/)
+* [Set up alerting rules in Grafana](https://grafana.com/docs/grafana/latest/alerting/)
+* [Explore advanced {{< param "PRODUCT_NAME" >}} component configurations](https://grafana.com/docs/alloy/latest/reference/components/)
+* [Deploy {{< param "PRODUCT_NAME" >}} in production environments](https://grafana.com/docs/alloy/latest/set-up/)
+* [Monitor multiple Linux servers with a centralized configuration](https://grafana.com/docs/alloy/latest/configure/)
+
+For additional examples and configurations, refer to the [alloy-scenarios repository](https://github.com/grafana/alloy-scenarios).
+
 [prometheus.scrape]: https://grafana.com/docs/alloy/<ALLOY_VERSION>/reference/components/prometheus/prometheus.scrape/
 [prometheus.remote_write]: https://grafana.com/docs/alloy/<ALLOY_VERSION>/reference/components/prometheus/prometheus.remote_write/
 [discovery.relabel]: https://grafana.com/docs/alloy/<ALLOY_VERSION>/reference/components/discovery/discovery.relabel/
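Assembled end to end, the metrics components this docs change describes wire together roughly as follows. This is a trimmed editorial sketch, not the full `config.alloy` from `alloy-scenarios/linux/`: the exporter's collector tuning is omitted, and the single relabel rule shown stands in for the scenario's full rule set.

```alloy
// Collect hardware and kernel metrics (collector tuning omitted for brevity).
prometheus.exporter.unix "integrations_node_exporter" { }

// Standardize the job label on every target the exporter exposes.
discovery.relabel "integrations_node_exporter" {
  targets = prometheus.exporter.unix.integrations_node_exporter.targets

  rule {
    target_label = "job"
    replacement  = "integrations/node_exporter"
  }
}

// Scrape the relabeled targets and forward samples to remote_write.
prometheus.scrape "integrations_node_exporter" {
  targets    = discovery.relabel.integrations_node_exporter.output
  forward_to = [prometheus.remote_write.local.receiver]
}

// Ship samples to the local Prometheus from the Docker Compose stack.
prometheus.remote_write "local" {
  endpoint {
    url = "http://localhost:9090/api/v1/write"
  }
}
```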

syntax/internal/value/tag_cache.go

Lines changed: 10 additions & 0 deletions
@@ -2,6 +2,7 @@ package value
 
 import (
 	"reflect"
+	"sync"
 
 	"github.com/grafana/alloy/syntax/internal/syntaxtags"
 )
@@ -11,14 +12,20 @@ import (
 // of the process, this will consume a negligible amount of memory.
 var tagsCache = make(map[reflect.Type]*objectFields)
 
+// tagsCacheMutex protects concurrent reads and writes to tagsCache.
+var tagsCacheMutex sync.RWMutex
+
 func getCachedTags(t reflect.Type) *objectFields {
 	if t.Kind() != reflect.Struct {
 		panic("getCachedTags called with non-struct type")
 	}
 
+	tagsCacheMutex.RLock()
 	if entry, ok := tagsCache[t]; ok {
+		tagsCacheMutex.RUnlock()
 		return entry
 	}
+	tagsCacheMutex.RUnlock()
 
 	ff := syntaxtags.Get(t)
 
@@ -62,7 +69,10 @@ func getCachedTags(t reflect.Type) *objectFields {
 		}
 	}
 
+	tagsCacheMutex.Lock()
 	tagsCache[t] = tree
+	tagsCacheMutex.Unlock()
+
 	return tree
 }
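The change to `tag_cache.go` follows a read-lock fast path with a separate write lock for the store, which means two goroutines that miss simultaneously may both compute the entry; the duplicate work is harmless because the stored result is identical. Below is a minimal standalone sketch of the same pattern. The names `getCached` and `expensive` are hypothetical stand-ins for `getCachedTags` and the `syntaxtags.Get`-plus-tree-build step; they are not part of the real code.

```go
package main

import (
	"fmt"
	"sync"
)

// Cache guarded by an RWMutex, mirroring the pattern the commit applies
// to tagsCache: a read lock for the common hit path, a write lock only
// when storing a newly computed entry.
var (
	cache   = make(map[string]int)
	cacheMu sync.RWMutex
)

// expensive is a hypothetical stand-in for the costly tag computation.
func expensive(key string) int {
	return len(key) * 2
}

func getCached(key string) int {
	cacheMu.RLock()
	if v, ok := cache[key]; ok {
		cacheMu.RUnlock()
		return v
	}
	cacheMu.RUnlock()

	// Two goroutines can both reach this point for the same key and both
	// compute the value. That duplicate work is benign here because the
	// stored result is the same either way; the write below is atomic.
	v := expensive(key)

	cacheMu.Lock()
	cache[key] = v
	cacheMu.Unlock()
	return v
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			_ = getCached("alloy")
		}()
	}
	wg.Wait()
	fmt.Println(getCached("alloy")) // prints 10
}
```

An alternative would be holding the write lock across the computation (or using `sync.Map`/`sync.Once` per key) to avoid duplicate work entirely, at the cost of blocking readers longer; for a cache that fills once per type, the simpler scheme above is a reasonable trade-off.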
