From 5f07f0bf4bea5ed389d3bb79bc34c6495af51b74 Mon Sep 17 00:00:00 2001
From: Colleen McGinnis
Date: Tue, 15 Apr 2025 18:00:51 -0500
Subject: [PATCH] fix image paths for docs-assembler

---
 docs/reference/advanced-pipeline.md             |  4 +---
 .../dashboard-monitoring-with-elastic-agent.md  | 18 ++++++------------
 docs/reference/dead-letter-queues.md            |  4 +---
 docs/reference/deploying-scaling-logstash.md    | 16 ++++------------
 docs/reference/first-event.md                   |  4 +---
 ...logstash-centralized-pipeline-management.md  |  4 +---
 docs/reference/logstash-monitoring-ui.md        |  8 ++------
 docs/reference/logstash-pipeline-viewer.md      | 12 ++++--------
 .../monitoring-internal-collection-legacy.md    |  4 +---
 docs/reference/monitoring-with-elastic-agent.md | 12 ++++--------
 ...serverless-monitoring-with-elastic-agent.md  |  6 ++----
 docs/reference/tuning-logstash.md               | 10 ++++------
 12 files changed, 31 insertions(+), 71 deletions(-)

diff --git a/docs/reference/advanced-pipeline.md b/docs/reference/advanced-pipeline.md
index 0fce51f820c..ab052976529 100644
--- a/docs/reference/advanced-pipeline.md
+++ b/docs/reference/advanced-pipeline.md
@@ -601,9 +601,7 @@ A few log entries come from Buffalo, so the query produces the following respons
 
 If you are using Kibana to visualize your data, you can also explore the Filebeat data in Kibana:
 
-:::{image} images/kibana-filebeat-data.png
-:alt: Discovering Filebeat data in Kibana
-:::
+![Discovering Filebeat data in Kibana](images/kibana-filebeat-data.png)
 
 See the [Filebeat quick start docs](beats://reference/filebeat/filebeat-installation-configuration.md) for info about loading the Kibana index pattern for Filebeat.
 
diff --git a/docs/reference/dashboard-monitoring-with-elastic-agent.md b/docs/reference/dashboard-monitoring-with-elastic-agent.md
index 3d4f9e8d47a..9b3743cdc84 100644
--- a/docs/reference/dashboard-monitoring-with-elastic-agent.md
+++ b/docs/reference/dashboard-monitoring-with-elastic-agent.md
@@ -70,10 +70,8 @@ Check out [Installing {{agent}}](docs-content://reference/fleet/install-elastic-
 
 1. Go to the {{kib}} home page, and click **Add integrations**.
 
-    :::{image} images/kibana-home.png
-    :alt: {{kib}} home page
-    :class: screenshot
-    :::
+    % TO DO: Use `:class: screenshot`
+    ![{{kib}} home page](images/kibana-home.png)
 
 2. In the query bar, search for **{{ls}}** and select the integration to see more details.
 3. Click **Add {{ls}}**.
@@ -135,10 +133,8 @@ After you have confirmed enrollment and data is coming in, click **View assets*
 For traditional Stack Monitoring UI, the dashboards marked **[Logs {{ls}}]** are used to visualize the logs produced by your {{ls}} instances, with those marked **[Metrics {{ls}}]** for metrics dashboards.
 These are populated with data only if you selected the **Metrics (Elastic Agent)** checkbox.
 
-:::{image} images/integration-assets-dashboards.png
-:alt: Integration assets
-:class: screenshot
-:::
+% TO DO: Use `:class: screenshot`
+![Integration assets](images/integration-assets-dashboards.png)
 
 A number of dashboards are included to view {{ls}} as a whole, and dashboards that allow you to drill-down into how {{ls}} is performing on a node, pipeline and plugin basis.
 
@@ -147,9 +143,7 @@ A number of dashboards are included to view {{ls}} as a whole, and dashboards th
 From the list of assets, open the **[Metrics {{ls}}] {{ls}} overview** dashboard to view overall performance.
 Then follow the navigation panel to further drill down into {{ls}} performance.
 
-:::{image} images/integration-dashboard-overview.png
-:alt: The {{ls}} Overview dashboard in {{kib}} with various metrics from your monitored {ls}
-:class: screenshot
-:::
+% TO DO: Use `:class: screenshot`
+![The {{ls}} Overview dashboard in {{kib}} with various metrics from your monitored {{ls}}](images/integration-dashboard-overview.png)
 
 You can hover over any visualization to adjust its settings, or click the **Edit** button to make changes to the dashboard. To learn more, refer to [Dashboard and visualizations](docs-content://explore-analyze/dashboards.md).
diff --git a/docs/reference/dead-letter-queues.md b/docs/reference/dead-letter-queues.md
index 141ccd3c520..d2effd827b9 100644
--- a/docs/reference/dead-letter-queues.md
+++ b/docs/reference/dead-letter-queues.md
@@ -24,9 +24,7 @@ Each event written to the dead letter queue includes the original event, metadat
 
 To process events in the dead letter queue, create a Logstash pipeline configuration that uses the [`dead_letter_queue` input plugin](logstash-docs-md://lsr/plugins-inputs-dead_letter_queue.md) to read from the queue. See [Processing events in the dead letter queue](#processing-dlq-events) for more information.
 
-:::{image} images/dead_letter_queue.png
-:alt: Diagram showing pipeline reading from the dead letter queue
-:::
+![Diagram showing pipeline reading from the dead letter queue](images/dead_letter_queue.png)
 
 ## {{es}} processing and the dead letter queue [es-proc-dlq]
 
diff --git a/docs/reference/deploying-scaling-logstash.md b/docs/reference/deploying-scaling-logstash.md
index 676a27d57cc..ea19ddaac82 100644
--- a/docs/reference/deploying-scaling-logstash.md
+++ b/docs/reference/deploying-scaling-logstash.md
@@ -14,9 +14,7 @@ The goal of this document is to highlight the most common architecture patterns
 
 For first time users, if you simply want to tail a log file to grasp the power of the Elastic Stack, we recommend trying [Filebeat Modules](beats://reference/filebeat/filebeat-modules-overview.md). Filebeat Modules enable you to quickly collect, parse, and index popular log types and view pre-built Kibana dashboards within minutes. [Metricbeat Modules](beats://reference/metricbeat/metricbeat-modules.md) provide a similar experience, but with metrics data. In this context, Beats will ship data directly to Elasticsearch where [Ingest Nodes](docs-content://manage-data/ingest/transform-enrich/ingest-pipelines.md) will process and index your data.
 
-:::{image} images/deploy1.png
-:alt: deploy1
-:::
+![deploy1](images/deploy1.png)
 
 ### Introducing Logstash [_introducing_logstash]
 
@@ -42,9 +40,7 @@ Beats and Logstash make ingest awesome. Together, they provide a comprehensive s
 
 Beats run across thousands of edge host servers, collecting, tailing, and shipping logs to Logstash. Logstash serves as the centralized streaming engine for data unification and enrichment. The [Beats input plugin](logstash-docs-md://lsr/plugins-inputs-beats.md) exposes a secure, acknowledgement-based endpoint for Beats to send data to Logstash.
 
-:::{image} images/deploy2.png
-:alt: deploy2
-:::
+![deploy2](images/deploy2.png)
 
 ::::{note}
 Enabling persistent queues is strongly recommended, and these architecture characteristics assume that they are enabled. We encourage you to review the [Persistent queues (PQ)](/reference/persistent-queues.md) documentation for feature benefits and more details on resiliency.
@@ -97,9 +93,7 @@ If external monitoring is preferred, there are [monitoring APIs](monitoring-logs
 
 Users may have other mechanisms of collecting logging data, and it’s easy to integrate and centralize them into the Elastic Stack. Let’s walk through a few scenarios:
 
-:::{image} images/deploy3.png
-:alt: deploy3
-:::
+![deploy3](images/deploy3.png)
 
 ### TCP, UDP, and HTTP Protocols [_tcp_udp_and_http_protocols]
 
@@ -145,9 +139,7 @@ If you are leveraging message queuing technologies as part of your existing infr
 
 For users who want to integrate data from existing Kafka deployments or require the underlying usage of ephemeral storage, Kafka can serve as a data hub where Beats can persist to and Logstash nodes can consume from.
 
-:::{image} images/deploy4.png
-:alt: deploy4
-:::
+![deploy4](images/deploy4.png)
 
 The other TCP, UDP, and HTTP sources can persist to Kafka with Logstash as a conduit to achieve high availability in lieu of a load balancer. A group of Logstash nodes can then consume from topics with the [Kafka input](logstash-docs-md://lsr/plugins-inputs-kafka.md) to further transform and enrich the data in transit.
 
diff --git a/docs/reference/first-event.md b/docs/reference/first-event.md
index 7c105a5bfa9..45c73d1addf 100644
--- a/docs/reference/first-event.md
+++ b/docs/reference/first-event.md
@@ -9,9 +9,7 @@ First, let’s test your Logstash installation by running the most basic *Logsta
 A Logstash pipeline has two required elements, `input` and `output`, and one optional element, `filter`.
 The input plugins consume data from a source, the filter plugins modify the data as you specify, and the output plugins write the data to a destination.
 
-:::{image} images/basic_logstash_pipeline.png
-:alt: basic logstash pipeline
-:::
+![basic logstash pipeline](images/basic_logstash_pipeline.png)
 
 To test your Logstash installation, run the most basic Logstash pipeline.
 
diff --git a/docs/reference/logstash-centralized-pipeline-management.md b/docs/reference/logstash-centralized-pipeline-management.md
index 8b2f9d3001b..524b53c6549 100644
--- a/docs/reference/logstash-centralized-pipeline-management.md
+++ b/docs/reference/logstash-centralized-pipeline-management.md
@@ -30,9 +30,7 @@ To manage Logstash pipelines in {{kib}}:
 
 1. Open {{kib}} in your browser and go to the Management tab. If you’ve set up configuration management correctly, you’ll see an area for managing Logstash.
 
-    :::{image} images/centralized_config.png
-    :alt: centralized config
-    :::
+    ![centralized config](images/centralized_config.png)
 
 2. Click the **Pipelines** link.
 3. To add a new pipeline, click **Create pipeline** and specify values.
diff --git a/docs/reference/logstash-monitoring-ui.md b/docs/reference/logstash-monitoring-ui.md
index 32fa253f09c..7df8d8ef82a 100644
--- a/docs/reference/logstash-monitoring-ui.md
+++ b/docs/reference/logstash-monitoring-ui.md
@@ -7,15 +7,11 @@ mapped_pages:
 Use the {{stack}} {{monitor-features}} to view metrics and gain insight into how your {{ls}} deployment is running.
 In the overview dashboard, you can see all events received and sent by Logstash, plus info about memory usage and uptime:
 
-:::{image} images/overviewstats.png
-:alt: Logstash monitoring overview dashboard in Kibana
-:::
+![Logstash monitoring overview dashboard in Kibana](images/overviewstats.png)
 
 Then you can drill down to see stats about a specific node:
 
-:::{image} images/nodestats.png
-:alt: Logstash monitoring node stats dashboard in Kibana
-:::
+![Logstash monitoring node stats dashboard in Kibana](images/nodestats.png)
 
 ::::{note}
 A {{ls}} node is considered unique based on its persistent UUID, which is written to the [`path.data`](/reference/logstash-settings-file.md) directory when the node starts.
diff --git a/docs/reference/logstash-pipeline-viewer.md b/docs/reference/logstash-pipeline-viewer.md
index 54ef2056d8d..57c36936eab 100644
--- a/docs/reference/logstash-pipeline-viewer.md
+++ b/docs/reference/logstash-pipeline-viewer.md
@@ -9,10 +9,8 @@ The pipeline viewer UI offers additional visibility into the behavior and perfor
 
 The pipeline viewer highlights CPU% and event latency in cases where the values are anomalous. This information helps you quickly identify processing that is disproportionately slow.
 
-:::{image} images/pipeline-tree.png
-:alt: Pipeline Viewer
-:class: screenshot
-:::
+% TO DO: Use `:class: screenshot`
+![Pipeline Viewer](images/pipeline-tree.png)
 
 ## Prerequisites [_prerequisites]
 
@@ -35,10 +33,8 @@ Each pipeline is identified by a pipeline ID (`main` by default). For each pipel
 
 Many elements in the tree are clickable. For example, you can click the plugin name to expand the detail view.
 
-:::{image} images/pipeline-input-detail.png
-:alt: Pipeline Input Detail
-:class: screenshot
-:::
+% TO DO: Use `:class: screenshot`
+![Pipeline Input Detail](images/pipeline-input-detail.png)
 
 Click the arrow beside a branch name to collapse or expand it.
 
diff --git a/docs/reference/monitoring-internal-collection-legacy.md b/docs/reference/monitoring-internal-collection-legacy.md
index a7bb1f5a6b5..3a05407dce0 100644
--- a/docs/reference/monitoring-internal-collection-legacy.md
+++ b/docs/reference/monitoring-internal-collection-legacy.md
@@ -169,9 +169,7 @@ To monitor Logstash nodes:
 5. Restart your Logstash nodes.
 6. To verify your monitoring configuration, point your web browser at your {{kib}} host, and select **Stack Monitoring** from the side navigation. If this is an initial setup, select **set up with self monitoring** and click **Turn on monitoring**. Metrics reported from your Logstash nodes should be visible in the Logstash section. When security is enabled, to view the monitoring dashboards you must log in to {{kib}} as a user who has the `kibana_user` and `monitoring_user` roles.
 
-    :::{image} images/monitoring-ui.png
-    :alt: Monitoring
-    :::
+    ![Monitoring](images/monitoring-ui.png)
 
 
 
diff --git a/docs/reference/monitoring-with-elastic-agent.md b/docs/reference/monitoring-with-elastic-agent.md
index adb896eb71f..58ab1e6ecb7 100644
--- a/docs/reference/monitoring-with-elastic-agent.md
+++ b/docs/reference/monitoring-with-elastic-agent.md
@@ -77,10 +77,8 @@ Check out [Installing {{agent}}](docs-content://reference/fleet/install-elastic-
 
 1. Go to the {{kib}} home page, and click **Add integrations**.
 
-    :::{image} images/kibana-home.png
-    :alt: {{kib}} home page
-    :class: screenshot
-    :::
+    % TO DO: Use `:class: screenshot`
+    ![{{kib}} home page](images/kibana-home.png)
 
 2. In the query bar, search for **{{ls}}** and select the integration to see more details about it.
 3. Click **Add {{ls}}**.
@@ -142,10 +140,8 @@ After you have confirmed enrollment and data is coming in, click **View assets*
 For traditional Stack Monitoring UI, the dashboards marked **[Logs {{ls}}]** are used to visualize the logs produced by your {{ls}} instances, with those marked **[Metrics {{ls}}]** for metrics dashboards.
 These are populated with data only if you selected the **Metrics (Elastic Agent)** checkbox.
 
-:::{image} images/integration-assets-dashboards.png
-:alt: Integration assets
-:class: screenshot
-:::
+% TO DO: Use `:class: screenshot`
+![Integration assets](images/integration-assets-dashboards.png)
 
 A number of dashboards are included to view {{ls}} as a whole, and dashboards that allow you to drill-down into how {{ls}} is performing on a node, pipeline and plugin basis.
 
diff --git a/docs/reference/serverless-monitoring-with-elastic-agent.md b/docs/reference/serverless-monitoring-with-elastic-agent.md
index 26eae7254a3..0b0346b6eb5 100644
--- a/docs/reference/serverless-monitoring-with-elastic-agent.md
+++ b/docs/reference/serverless-monitoring-with-elastic-agent.md
@@ -64,9 +64,7 @@ For the best experience with the Logstash dashboards, we recommend collecting al
 
 From the list of assets, open the **[Metrics {{ls}}] {{ls}} overview** dashboard to view overall performance. Then follow the navigation panel to further drill down into {{ls}} performance.
 
-:::{image} images/integration-dashboard-overview.png
-:alt: The {{ls}} Overview dashboard in {{kib}} with various metrics from your monitored {ls}
-:class: screenshot
-:::
+% TO DO: Use `:class: screenshot`
+![The {{ls}} Overview dashboard in {{kib}} with various metrics from your monitored {{ls}}](images/integration-dashboard-overview.png)
 
 You can hover over any visualization to adjust its settings, or click the **Edit** button to make changes to the dashboard. To learn more, refer to [Dashboard and visualizations](docs-content://explore-analyze/dashboards.md).
diff --git a/docs/reference/tuning-logstash.md b/docs/reference/tuning-logstash.md
index 97a36b72022..d7958a026e5 100644
--- a/docs/reference/tuning-logstash.md
+++ b/docs/reference/tuning-logstash.md
@@ -61,13 +61,11 @@ If you plan to modify the default pipeline settings, take into account the follo
 When tuning Logstash you may have to adjust the heap size.
 You can use the [VisualVM](https://visualvm.github.io/) tool to profile the heap. The **Monitor** pane in particular is useful for checking whether your heap allocation is sufficient for the current workload. The screenshots below show sample **Monitor** panes. The first pane examines a Logstash instance configured with too many inflight events. The second pane examines a Logstash instance configured with an appropriate amount of inflight events. Note that the specific batch sizes used here are most likely not applicable to your specific workload, as the memory demands of Logstash vary in large part based on the type of messages you are sending.
 
-:::{image} images/pipeline_overload.png
-:alt: pipeline overload
-:::
+% TO DO: Use `:class: screenshot`
+![pipeline overload](images/pipeline_overload.png)
 
-:::{image} images/pipeline_correct_load.png
-:alt: pipeline correct load
-:::
+% TO DO: Use `:class: screenshot`
+![pipeline correct load](images/pipeline_correct_load.png)
 
 In the first example we see that the CPU isn’t being used very efficiently. In fact, the JVM is often times having to stop the VM for “full GCs”. Full garbage collections are a common symptom of excessive memory pressure. This is visible in the spiky pattern on the CPU chart. In the more efficiently configured example, the GC graph pattern is more smooth, and the CPU is used in a more uniform manner. You can also see that there is ample headroom between the allocated heap size, and the maximum allowed, giving the JVM GC a lot of room to work with.
 
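For reference, every hunk in this patch applies the same mechanical rewrite: a MyST `:::{image}` directive becomes a Markdown image, with a `% TO DO` comment left behind wherever a `:class: screenshot` option was dropped. A rough Python sketch of that rewrite (a hypothetical helper for illustration, not part of docs-assembler or the Logstash repo) could look like:

```python
import re

# Matches a MyST image directive, optionally carrying :alt: and
# :class: screenshot options, at any fixed indentation.
IMAGE_DIRECTIVE = re.compile(
    r"^(?P<indent>[ ]*):::\{image\} (?P<path>\S+)\n"
    r"(?:(?P=indent):alt: (?P<alt>.*)\n)?"
    r"(?:(?P=indent):class: screenshot\n)?"
    r"(?P=indent):::$",
    re.MULTILINE,
)

def rewrite(text: str) -> str:
    """Rewrite MyST image directives as Markdown image syntax."""
    def repl(m: re.Match) -> str:
        indent = m.group("indent")
        path = m.group("path")
        alt = m.group("alt") or ""
        lines = []
        # Leave a breadcrumb where the screenshot class was dropped.
        if ":class: screenshot" in m.group(0):
            lines.append(f"{indent}% TO DO: Use `:class: screenshot`")
        lines.append(f"{indent}![{alt}]({path})")
        return "\n".join(lines)
    return IMAGE_DIRECTIVE.sub(repl, text)
```

The regex anchors the closing `:::` to the same indentation as the opening line, so directives nested inside numbered list items (as in the centralized-pipeline-management and Elastic Agent hunks) keep their indentation in the rewritten output.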