diff --git a/solutions/observability/logs/filter-aggregate-logs.md b/solutions/observability/logs/filter-aggregate-logs.md
index 0ce47aade..27f168ca6 100644
--- a/solutions/observability/logs/filter-aggregate-logs.md
+++ b/solutions/observability/logs/filter-aggregate-logs.md
@@ -91,16 +91,16 @@ Add some logs with varying timestamps and log levels to your data stream:
 ```console
 POST logs-example-default/_bulk
 { "create": {} }
-{ "message": "2023-09-15T08:15:20.234Z WARN 192.168.1.101 Disk usage exceeds 90%." }
+{ "message": "2025-04-15T08:15:20.234Z WARN 192.168.1.101 Disk usage exceeds 90%." }
 { "create": {} }
-{ "message": "2023-09-14T10:30:45.789Z ERROR 192.168.1.102 Critical system failure detected." }
+{ "message": "2025-04-14T10:30:45.789Z ERROR 192.168.1.102 Critical system failure detected." }
 { "create": {} }
-{ "message": "2023-09-10T14:20:45.789Z ERROR 192.168.1.105 Database connection lost." }
+{ "message": "2025-04-10T14:20:45.789Z ERROR 192.168.1.105 Database connection lost." }
 { "create": {} }
-{ "message": "2023-09-20T09:40:32.345Z INFO 192.168.1.106 User logout initiated." }
+{ "message": "2025-04-20T09:40:32.345Z INFO 192.168.1.106 User logout initiated." }
 ```
 
-For this example, let’s look for logs with a `WARN` or `ERROR` log level that occurred on September 14th or 15th. From Discover:
+For this example, let’s look for logs with a `WARN` or `ERROR` log level that occurred on April 14th or 15th. From Discover:
 
 1. Make sure **All logs** is selected in the **Data views** menu.
 1. Add the following KQL query in the search bar to filter for logs with log levels of `WARN` or `ERROR`:
@@ -108,11 +108,11 @@ For this example, let’s look for logs with a `WARN` or `ERROR` log level that
 
     ```text
     log.level: ("ERROR" or "WARN")
     ```
-1. Click the current time range, select **Absolute**, and set the **Start date** to `Sep 14, 2023 @ 00:00:00.000`.
+1. Click the current time range, select **Absolute**, and set the **Start date** to `Apr 14, 2025 @ 00:00:00.000`.
 
    ![Set the time range start date](../../images/serverless-logs-start-date.png "")
 
-1. Click the end of the current time range, select **Absolute**, and set the **End date** to `Sep 15, 2023 @ 23:59:59.999`.
+1. Click the end of the current time range, select **Absolute**, and set the **End date** to `Apr 15, 2025 @ 23:59:59.999`.
 
    ![Set the time range end date](/solutions/images/serverless-logs-end-date.png "")
@@ -138,16 +138,16 @@ First, from **Developer Tools**, add some logs with varying timestamps and log l
 ```console
 POST logs-example-default/_bulk
 { "create": {} }
-{ "message": "2023-09-15T08:15:20.234Z WARN 192.168.1.101 Disk usage exceeds 90%." }
+{ "message": "2025-04-15T08:15:20.234Z WARN 192.168.1.101 Disk usage exceeds 90%." }
 { "create": {} }
-{ "message": "2023-09-14T10:30:45.789Z ERROR 192.168.1.102 Critical system failure detected." }
+{ "message": "2025-04-14T10:30:45.789Z ERROR 192.168.1.102 Critical system failure detected." }
 { "create": {} }
-{ "message": "2023-09-10T14:20:45.789Z ERROR 192.168.1.105 Database connection lost." }
+{ "message": "2025-04-10T14:20:45.789Z ERROR 192.168.1.105 Database connection lost." }
 { "create": {} }
-{ "message": "2023-09-20T09:40:32.345Z INFO 192.168.1.106 User logout initiated." }
+{ "message": "2025-04-20T09:40:32.345Z INFO 192.168.1.106 User logout initiated." }
 ```
 
-Let’s say you want to look into an event that occurred between September 14th and 15th. The following boolean query filters for logs with timestamps during those days that also have a log level of `ERROR` or `WARN`.
+Let’s say you want to look into an event that occurred between April 14th and 15th. The following boolean query filters for logs with timestamps during those days that also have a log level of `ERROR` or `WARN`.
 
 ```console
 POST /logs-example-default/_search
@@ -158,8 +158,8 @@ POST /logs-example-default/_search
         {
           "range": {
             "@timestamp": {
-              "gte": "2023-09-14T00:00:00",
-              "lte": "2023-09-15T23:59:59"
+              "gte": "2025-04-14T00:00:00",
+              "lte": "2025-04-15T23:59:59"
             }
           }
         },
@@ -183,7 +183,7 @@ The filtered results should show `WARN` and `ERROR` logs that occurred within th
 ...
     "hits": [
       {
-        "_index": ".ds-logs-example-default-2023.09.25-000001",
+        "_index": ".ds-logs-example-default-2025.04.25-000001",
         "_id": "JkwPzooBTddK4OtTQToP",
         "_score": 0,
         "_source": {
@@ -191,11 +191,11 @@ The filtered results should show `WARN` and `ERROR` logs that occurred within th
           "log": {
             "level": "WARN"
           },
-          "@timestamp": "2023-09-15T08:15:20.234Z"
+          "@timestamp": "2025-04-15T08:15:20.234Z"
         }
       },
       {
-        "_index": ".ds-logs-example-default-2023.09.25-000001",
+        "_index": ".ds-logs-example-default-2025.04.25-000001",
         "_id": "A5YSzooBMYFrNGNwH75O",
         "_score": 0,
         "_source": {
@@ -203,7 +203,7 @@ The filtered results should show `WARN` and `ERROR` logs that occurred within th
           "log": {
             "level": "ERROR"
          },
-          "@timestamp": "2023-09-14T10:30:45.789Z"
+          "@timestamp": "2025-04-14T10:30:45.789Z"
         }
       }
     ]
@@ -223,19 +223,19 @@ First, from **Developer Tools**, add some logs with varying log levels to your d
 ```console
 POST logs-example-default/_bulk
 { "create": {} }
-{ "message": "2023-09-15T08:15:20.234Z WARN 192.168.1.101 Disk usage exceeds 90%." }
+{ "message": "2025-04-15T08:15:20.234Z WARN 192.168.1.101 Disk usage exceeds 90%." }
 { "create": {} }
-{ "message": "2023-09-14T10:30:45.789Z ERROR 192.168.1.102 Critical system failure detected." }
+{ "message": "2025-04-14T10:30:45.789Z ERROR 192.168.1.102 Critical system failure detected." }
 { "create": {} }
-{ "message": "2023-09-15T12:45:55.123Z INFO 192.168.1.103 Application successfully started." }
+{ "message": "2025-04-15T12:45:55.123Z INFO 192.168.1.103 Application successfully started." }
 { "create": {} }
-{ "message": "2023-09-14T15:20:10.789Z WARN 192.168.1.104 Network latency exceeding threshold." }
+{ "message": "2025-04-14T15:20:10.789Z WARN 192.168.1.104 Network latency exceeding threshold." }
 { "create": {} }
-{ "message": "2023-09-10T14:20:45.789Z ERROR 192.168.1.105 Database connection lost." }
+{ "message": "2025-04-10T14:20:45.789Z ERROR 192.168.1.105 Database connection lost." }
 { "create": {} }
-{ "message": "2023-09-20T09:40:32.345Z INFO 192.168.1.106 User logout initiated." }
+{ "message": "2025-04-20T09:40:32.345Z INFO 192.168.1.106 User logout initiated." }
 { "create": {} }
-{ "message": "2023-09-21T15:20:55.678Z DEBUG 192.168.1.102 Database connection established." }
+{ "message": "2025-04-21T15:20:55.678Z DEBUG 192.168.1.102 Database connection established." }
 ```
 
 Next, run this command to aggregate your log data using the `log.level` field:
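The aggregation command referenced in the context line above is unchanged by this patch, so it falls outside the hunks. For reference, a minimal sketch of such a `terms` aggregation on the `log.level` field; the aggregation name `log_level_distribution` is an assumption, not taken from the source file:

```console
GET /logs-example-default/_search?size=0
{
  "aggs": {
    "log_level_distribution": {
      "terms": {
        "field": "log.level"
      }
    }
  }
}
```

Setting `size=0` returns only the per-level bucket counts rather than the matching documents.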
@@ -297,8 +297,8 @@ GET /logs-example-default/_search
     "query": {
       "range": {
         "@timestamp": {
-          "gte": "2023-09-14T00:00:00",
-          "lte": "2023-09-15T23:59:59"
+          "gte": "2025-04-14T00:00:00",
+          "lte": "2025-04-15T23:59:59"
         }
       }
     },
diff --git a/solutions/observability/logs/parse-route-logs.md b/solutions/observability/logs/parse-route-logs.md
index b51c1753d..846dc3c92 100644
--- a/solutions/observability/logs/parse-route-logs.md
+++ b/solutions/observability/logs/parse-route-logs.md
@@ -33,7 +33,7 @@ Make your logs more useful by extracting structured fields from your unstructure
 Follow the steps below to see how the following unstructured log data is indexed by default:
 
 ```txt
-2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%.
+2025-05-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%.
 ```
 
 Start by storing the document in the `logs-example-default` data stream:
@@ -44,7 +44,7 @@ Start by storing the document in the `logs-example-default` data stream:
 ```console
 POST logs-example-default/_doc
 {
-  "message": "2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%."
+  "message": "2025-05-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%."
 }
 ```
@@ -64,11 +64,11 @@ The results should look like this:
 ...
   "hits": [
     {
-      "_index": ".ds-logs-example-default-2023.08.09-000001",
+      "_index": ".ds-logs-example-default-2025.05.09-000001",
       ...
       "_source": {
-        "message": "2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%.",
-        "@timestamp": "2023-08-09T17:19:27.73312243Z"
+        "message": "2025-05-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%.",
+        "@timestamp": "2025-05-09T17:19:27.73312243Z"
       }
     }
   ]
@@ -93,7 +93,7 @@ GET logs-example-default/_search
 
 While you can search for phrases in the `message` field, you can’t use this field to filter log data. Your message, however, contains all of the following potential fields you can extract and use to filter and aggregate your log data:
 
-* **@timestamp** (`2023-08-08T13:45:12.123Z`): Extracting this field lets you sort logs by date and time. This is helpful when you want to view your logs in the order that they occurred or identify when issues happened.
+* **@timestamp** (`2025-05-08T13:45:12.123Z`): Extracting this field lets you sort logs by date and time. This is helpful when you want to view your logs in the order that they occurred or identify when issues happened.
 * **log.level** (`WARN`): Extracting this field lets you filter logs by severity. This is helpful if you want to focus on high-severity WARN or ERROR-level logs, and reduce noise by filtering out low-severity INFO-level logs.
 * **host.ip** (`192.168.1.101`): Extracting this field lets you filter logs by the host IP addresses. This is helpful if you want to focus on specific hosts that you’re having issues with or if you want to find disparities between hosts.
 * **message** (`Disk usage exceeds 90%.`): You can search for phrases or words in the message field.
@@ -112,8 +112,8 @@ When you added the log to Elastic in the previous section, the `@timestamp` fiel
 ```json
 ...
 "_source": {
-  "message": "2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%.", <1>
-  "@timestamp": "2023-08-09T17:19:27.73312243Z" <2>
+  "message": "2025-05-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%.", <1>
+  "@timestamp": "2025-05-09T17:19:27.73312243Z" <2>
 }
 ...
 ```
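The `PUT` command that the next hunk calls "the previous command" is unchanged and elided by the diff. Based on the values described in the bullets that follow, it presumably resembles this sketch; the `description` string is an assumption:

```console
PUT _ingest/pipeline/logs-example-default
{
  "description": "Extracts the timestamp from the message field",
  "processors": [
    {
      "dissect": {
        "field": "message",
        "pattern": "%{@timestamp} %{message}"
      }
    }
  ]
}
```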
@@ -157,7 +157,7 @@ The previous command sets the following values for your ingest pipeline:
 
 * `_ingest/pipeline/logs-example-default`: The name of the pipeline,`logs-example-default`, needs to match the name of your data stream. You’ll set up your data stream in the next section. For more information, refer to the [data stream naming scheme](/reference/fleet/data-streams.md#data-streams-naming-scheme).
 * `field`: The field you’re extracting data from, `message` in this case.
-* `pattern`: The pattern of the elements in your log data. The `%{@timestamp} %{{message}}` pattern extracts the timestamp, `2023-08-08T13:45:12.123Z`, to the `@timestamp` field, while the rest of the message, `WARN 192.168.1.101 Disk usage exceeds 90%.`, stays in the `message` field. The dissect processor looks for the space as a separator defined by the pattern.
+* `pattern`: The pattern of the elements in your log data. The `%{@timestamp} %{{message}}` pattern extracts the timestamp, `2025-05-08T13:45:12.123Z`, to the `@timestamp` field, while the rest of the message, `WARN 192.168.1.101 Disk usage exceeds 90%.`, stays in the `message` field. The dissect processor looks for the space as a separator defined by the pattern.
 
 
 #### Test the pipeline with the simulate pipeline API [observability-parse-log-data-test-the-pipeline-with-the-simulate-pipeline-api]
@@ -172,7 +172,7 @@ POST _ingest/pipeline/logs-example-default/_simulate
   "docs": [
     {
       "_source": {
-        "message": "2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%."
+        "message": "2025-05-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%."
       }
     }
   ]
@@ -191,7 +191,7 @@ The results should show the `@timestamp` field extracted from the `message` fiel
         "_version": "-3",
         "_source": {
           "message": "WARN 192.168.1.101 Disk usage exceeds 90%.",
-          "@timestamp": "2023-08-08T13:45:12.123Z"
+          "@timestamp": "2025-05-08T13:45:12.123Z"
         },
         ...
       }
@@ -260,7 +260,7 @@ Create your data stream using the [data stream naming scheme](/reference/fleet/d
 ```console
 POST logs-example-default/_doc
 {
-  "message": "2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%."
+  "message": "2025-05-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%."
 }
 ```
@@ -281,12 +281,12 @@ You should see the pipeline has extracted the `@timestamp` field:
 ...
   "hits": [
     {
-      "_index": ".ds-logs-example-default-2023.08.09-000001",
+      "_index": ".ds-logs-example-default-2025.05.09-000001",
       "_id": "RsWy3IkB8yCtA5VGOKLf",
       "_score": 1,
       "_source": {
         "message": "WARN 192.168.1.101 Disk usage exceeds 90%.",
-        "@timestamp": "2023-08-08T13:45:12.123Z" <1>
+        "@timestamp": "2025-05-08T13:45:12.123Z" <1>
       }
     }
   ]
@@ -315,7 +315,7 @@ Check the following common issues and solutions with timestamps:
 Extracting the `log.level` field lets you filter by severity and focus on critical issues. This section shows you how to extract the `log.level` field from this example log:
 
 ```txt
-2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%.
+2025-05-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%.
 ```
 
 To extract and use the `log.level` field:
@@ -346,7 +346,7 @@ PUT _ingest/pipeline/logs-example-default
 
 Now your pipeline will extract these fields:
 
-* The `@timestamp` field: `2023-08-08T13:45:12.123Z`
+* The `@timestamp` field: `2025-05-08T13:45:12.123Z`
 * The `log.level` field: `WARN`
 * The `message` field: `192.168.1.101 Disk usage exceeds 90%.`
 
@@ -363,7 +363,7 @@ POST _ingest/pipeline/logs-example-default/_simulate
   "docs": [
     {
      "_source": {
-        "message": "2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%."
+        "message": "2025-05-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%."
      }
    }
  ]
@@ -385,7 +385,7 @@ The results should show the `@timestamp` and the `log.level` fields extracted fr
         "log": {
           "level": "WARN"
         },
-        "@timestamp": "2023-8-08T13:45:12.123Z",
+        "@timestamp": "2025-05-08T13:45:12.123Z",
       },
       ...
     }
@@ -402,10 +402,10 @@ Once you’ve extracted the `log.level` field, you can query for high-severity l
 Let’s say you have the following logs with varying severities:
 
 ```txt
-2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%.
-2023-08-08T13:45:14.003Z ERROR 192.168.1.103 Database connection failed.
-2023-08-08T13:45:15.004Z DEBUG 192.168.1.104 Debugging connection issue.
-2023-08-08T13:45:16.005Z INFO 192.168.1.102 User changed profile picture.
+2025-05-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%.
+2025-05-08T13:45:14.003Z ERROR 192.168.1.103 Database connection failed.
+2025-05-08T13:45:15.004Z DEBUG 192.168.1.104 Debugging connection issue.
+2025-05-08T13:45:16.005Z INFO 192.168.1.102 User changed profile picture.
 ```
 
 Add them to your data stream using this command:
@@ -413,13 +413,13 @@ Add them to your data stream using this command:
 ```console
 POST logs-example-default/_bulk
 { "create": {} }
-{ "message": "2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%." }
+{ "message": "2025-05-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%." }
 { "create": {} }
-{ "message": "2023-08-08T13:45:14.003Z ERROR 192.168.1.103 Database connection failed." }
+{ "message": "2025-05-08T13:45:14.003Z ERROR 192.168.1.103 Database connection failed." }
 { "create": {} }
-{ "message": "2023-08-08T13:45:15.004Z DEBUG 192.168.1.104 Debugging connection issue." }
+{ "message": "2025-05-08T13:45:15.004Z DEBUG 192.168.1.104 Debugging connection issue." }
 { "create": {} }
-{ "message": "2023-08-08T13:45:16.005Z INFO 192.168.1.102 User changed profile picture." }
+{ "message": "2025-05-08T13:45:16.005Z INFO 192.168.1.102 User changed profile picture." }
 ```
 
 Then, query for documents with a log level of `WARN` or `ERROR` with this command:
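The query that "this command" refers to is unchanged and not shown in the hunks. A minimal sketch, assuming a `terms` query on the extracted `log.level` field:

```console
GET logs-example-default/_search
{
  "query": {
    "terms": {
      "log.level": ["WARN", "ERROR"]
    }
  }
}
```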
"hits": [ { - "_index": ".ds-logs-example-default-2023.08.14-000001", + "_index": ".ds-logs-example-default-2025.05.14-000001", "_id": "3TcZ-4kB3FafvEVY4yKx", "_score": 1, "_source": { @@ -453,11 +453,11 @@ The results should show only the high-severity logs: "log": { "level": "WARN" }, - "@timestamp": "2023-08-08T13:45:12.123Z" + "@timestamp": "2025-05-08T13:45:12.123Z" } }, { - "_index": ".ds-logs-example-default-2023.08.14-000001", + "_index": ".ds-logs-example-default-2025.05.14-000001", "_id": "3jcZ-4kB3FafvEVY4yKx", "_score": 1, "_source": { @@ -465,7 +465,7 @@ The results should show only the high-severity logs: "log": { "level": "ERROR" }, - "@timestamp": "2023-08-08T13:45:14.003Z" + "@timestamp": "2025-05-08T13:45:14.003Z" } } ] @@ -483,10 +483,10 @@ The `host.ip` field is part of the [Elastic Common Schema (ECS)](ecs://reference This section shows you how to extract the `host.ip` field from the following example logs and query based on the extracted fields: ```txt -2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%. -2023-08-08T13:45:14.003Z ERROR 192.168.1.103 Database connection failed. -2023-08-08T13:45:15.004Z DEBUG 192.168.1.104 Debugging connection issue. -2023-08-08T13:45:16.005Z INFO 192.168.1.102 User changed profile picture. +2025-05-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%. +2025-05-08T13:45:14.003Z ERROR 192.168.1.103 Database connection failed. +2025-05-08T13:45:15.004Z DEBUG 192.168.1.104 Debugging connection issue. +2025-05-08T13:45:16.005Z INFO 192.168.1.102 User changed profile picture. ``` To extract and use the `host.ip` field: @@ -517,7 +517,7 @@ PUT _ingest/pipeline/logs-example-default Your pipeline will extract these fields: -* The `@timestamp` field: `2023-08-08T13:45:12.123Z` +* The `@timestamp` field: `2025-05-08T13:45:12.123Z` * The `log.level` field: `WARN` * The `host.ip` field: `192.168.1.101` * The `message` field: `Disk usage exceeds 90%.` @@ -535,7 +535,7 @@ POST _ingest/pipeline/logs-example-default/_simulate "docs": [ { "_source": { - "message": "2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%." + "message": "2025-05-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%." } } ] @@ -554,7 +554,7 @@ The results should show the `host.ip`, `@timestamp`, and `log.level` fields extr "host": { "ip": "192.168.1.101" }, - "@timestamp": "2023-08-08T13:45:12.123Z", + "@timestamp": "2025-05-08T13:45:12.123Z", "message": "Disk usage exceeds 90%.", "log": { "level": "WARN" @@ -577,13 +577,13 @@ Before querying your logs, add them to your data stream using this command: ```console POST logs-example-default/_bulk { "create": {} } -{ "message": "2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%." } +{ "message": "2025-05-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%." } { "create": {} } -{ "message": "2023-08-08T13:45:14.003Z ERROR 192.168.1.103 Database connection failed." } +{ "message": "2025-05-08T13:45:14.003Z ERROR 192.168.1.103 Database connection failed." } { "create": {} } -{ "message": "2023-08-08T13:45:15.004Z DEBUG 192.168.1.104 Debugging connection issue." } +{ "message": "2025-05-08T13:45:15.004Z DEBUG 192.168.1.104 Debugging connection issue." } { "create": {} } -{ "message": "2023-08-08T13:45:16.005Z INFO 192.168.1.102 User changed profile picture." } +{ "message": "2025-05-08T13:45:16.005Z INFO 192.168.1.102 User changed profile picture." } ``` @@ -611,14 +611,14 @@ Because all of the example logs are in this range, you’ll get the following re "hits": { ... 
@@ -611,14 +611,14 @@ Because all of the example logs are in this range, you’ll get the following re
   "hits": {
   ...
     {
-      "_index": ".ds-logs-example-default-2023.08.16-000001",
+      "_index": ".ds-logs-example-default-2025.05.16-000001",
       "_id": "ak4oAIoBl7fe5ItIixuB",
       "_score": 1,
       "_source": {
         "host": {
           "ip": "192.168.1.101"
         },
-        "@timestamp": "2023-08-08T13:45:12.123Z",
+        "@timestamp": "2025-05-08T13:45:12.123Z",
         "message": "Disk usage exceeds 90%.",
         "log": {
           "level": "WARN"
@@ -626,14 +626,14 @@ Because all of the example logs are in this range, you’ll get the following re
       }
     },
     {
-      "_index": ".ds-logs-example-default-2023.08.16-000001",
+      "_index": ".ds-logs-example-default-2025.05.16-000001",
       "_id": "a04oAIoBl7fe5ItIixuC",
       "_score": 1,
       "_source": {
         "host": {
           "ip": "192.168.1.103"
         },
-        "@timestamp": "2023-08-08T13:45:14.003Z",
+        "@timestamp": "2025-05-08T13:45:14.003Z",
         "message": "Database connection failed.",
         "log": {
           "level": "ERROR"
@@ -641,14 +641,14 @@ Because all of the example logs are in this range, you’ll get the following re
       }
     },
     {
-      "_index": ".ds-logs-example-default-2023.08.16-000001",
+      "_index": ".ds-logs-example-default-2025.05.16-000001",
       "_id": "bE4oAIoBl7fe5ItIixuC",
       "_score": 1,
       "_source": {
         "host": {
           "ip": "192.168.1.104"
         },
-        "@timestamp": "2023-08-08T13:45:15.004Z",
+        "@timestamp": "2025-05-08T13:45:15.004Z",
         "message": "Debugging connection issue.",
         "log": {
           "level": "DEBUG"
@@ -656,14 +656,14 @@ Because all of the example logs are in this range, you’ll get the following re
       }
     },
     {
-      "_index": ".ds-logs-example-default-2023.08.16-000001",
+      "_index": ".ds-logs-example-default-2025.05.16-000001",
       "_id": "bU4oAIoBl7fe5ItIixuC",
       "_score": 1,
       "_source": {
         "host": {
           "ip": "192.168.1.102"
         },
-        "@timestamp": "2023-08-08T13:45:16.005Z",
+        "@timestamp": "2025-05-08T13:45:16.005Z",
         "message": "User changed profile picture.",
         "log": {
           "level": "INFO"
@@ -709,14 +709,14 @@ You’ll get the following results only showing logs in the range you’ve set:
   "hits": {
   ...
     {
-      "_index": ".ds-logs-example-default-2023.08.16-000001",
+      "_index": ".ds-logs-example-default-2025.05.16-000001",
       "_id": "ak4oAIoBl7fe5ItIixuB",
       "_score": 1,
       "_source": {
         "host": {
           "ip": "192.168.1.101"
         },
-        "@timestamp": "2023-08-08T13:45:12.123Z",
+        "@timestamp": "2025-05-08T13:45:12.123Z",
         "message": "Disk usage exceeds 90%.",
         "log": {
           "level": "WARN"
@@ -724,14 +724,14 @@ You’ll get the following results only showing logs in the range you’ve set:
       }
     },
     {
-      "_index": ".ds-logs-example-default-2023.08.16-000001",
+      "_index": ".ds-logs-example-default-2025.05.16-000001",
       "_id": "bU4oAIoBl7fe5ItIixuC",
       "_score": 1,
       "_source": {
         "host": {
           "ip": "192.168.1.102"
         },
-        "@timestamp": "2023-08-08T13:45:16.005Z",
+        "@timestamp": "2025-05-08T13:45:16.005Z",
         "message": "User changed profile picture.",
         "log": {
           "level": "INFO"
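The results above come from an unchanged query outside these hunks. Because only hosts `192.168.1.101` and `192.168.1.102` match, it is presumably a `range` query on the `host.ip` field, roughly like this sketch; the IP bounds are assumptions:

```console
GET logs-example-default/_search
{
  "query": {
    "range": {
      "host.ip": {
        "gte": "192.168.1.100",
        "lte": "192.168.1.102"
      }
    }
  }
}
```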
@@ -751,10 +751,10 @@ By default, an ingest pipeline sends your log data to a single data stream. To s
 This section shows you how to use a reroute processor to send the high-severity logs (`WARN` or `ERROR`) from the following example logs to a specific data stream and keep the regular logs (`DEBUG` and `INFO`) in the default data stream:
 
 ```txt
-2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%.
-2023-08-08T13:45:14.003Z ERROR 192.168.1.103 Database connection failed.
-2023-08-08T13:45:15.004Z DEBUG 192.168.1.104 Debugging connection issue.
-2023-08-08T13:45:16.005Z INFO 192.168.1.102 User changed profile picture.
+2025-05-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%.
+2025-05-08T13:45:14.003Z ERROR 192.168.1.103 Database connection failed.
+2025-05-08T13:45:15.004Z DEBUG 192.168.1.104 Debugging connection issue.
+2025-05-08T13:45:16.005Z INFO 192.168.1.102 User changed profile picture.
 ```
 
 ::::{note}
@@ -812,13 +812,13 @@ Add the example logs to your data stream with this command:
 ```console
 POST logs-example-default/_bulk
 { "create": {} }
-{ "message": "2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%." }
+{ "message": "2025-05-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%." }
 { "create": {} }
-{ "message": "2023-08-08T13:45:14.003Z ERROR 192.168.1.103 Database connection failed." }
+{ "message": "2025-05-08T13:45:14.003Z ERROR 192.168.1.103 Database connection failed." }
 { "create": {} }
-{ "message": "2023-08-08T13:45:15.004Z DEBUG 192.168.1.104 Debugging connection issue." }
+{ "message": "2025-05-08T13:45:15.004Z DEBUG 192.168.1.104 Debugging connection issue." }
 { "create": {} }
-{ "message": "2023-08-08T13:45:16.005Z INFO 192.168.1.102 User changed profile picture." }
+{ "message": "2025-05-08T13:45:16.005Z INFO 192.168.1.102 User changed profile picture." }
 ```
 
@@ -843,7 +843,7 @@ Your should see similar results to the following showing that the high-severity
         "host": {
           "ip": "192.168.1.101"
         },
-        "@timestamp": "2023-08-08T13:45:12.123Z",
+        "@timestamp": "2025-05-08T13:45:12.123Z",
         "message": "Disk usage exceeds 90%.",
         "log": {
           "level": "WARN"
@@ -859,7 +859,7 @@ Your should see similar results to the following showing that the high-severity
         "host": {
           "ip": "192.168.1.103"
         },
-        "@timestamp": "2023-08-08T13:45:14.003Z",
+        "@timestamp": "2025-05-08T13:45:14.003Z",
         "message": "Database connection failed.",
         "log": {
           "level": "ERROR"
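The reroute pipeline itself is unchanged by this patch and does not appear in the hunks above. For context, a minimal sketch of an ingest pipeline that combines the dissect processor with a reroute processor sending `WARN` and `ERROR` logs to another data stream; the `dataset` value and the Painless condition are assumptions:

```console
PUT _ingest/pipeline/logs-example-default
{
  "processors": [
    {
      "dissect": {
        "field": "message",
        "pattern": "%{@timestamp} %{log.level} %{host.ip} %{message}"
      }
    },
    {
      "reroute": {
        "tag": "high_severity_logs",
        "if": "ctx.log?.level == 'WARN' || ctx.log?.level == 'ERROR'",
        "dataset": "critical"
      }
    }
  ]
}
```

Under the data stream naming scheme, matching documents would then land in `logs-critical-default`, while `DEBUG` and `INFO` logs stay in `logs-example-default`.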