
Commit 9f2a999

Doc: Fix image paths for docs-assembler (#17566) (#17568)
(cherry picked from commit 47d430d)
Co-authored-by: Colleen McGinnis <[email protected]>
1 parent 71c133e

12 files changed: 31 additions, 71 deletions

docs/reference/advanced-pipeline.md

1 addition, 3 deletions

```diff
@@ -601,9 +601,7 @@ A few log entries come from Buffalo, so the query produces the following respons
 
 If you are using Kibana to visualize your data, you can also explore the Filebeat data in Kibana:
 
-:::{image} images/kibana-filebeat-data.png
-:alt: Discovering Filebeat data in Kibana
-:::
+![Discovering Filebeat data in Kibana](images/kibana-filebeat-data.png)
 
 See the [Filebeat quick start docs](beats://reference/filebeat/filebeat-installation-configuration.md) for info about loading the Kibana index pattern for Filebeat.
 
```

docs/reference/dashboard-monitoring-with-elastic-agent.md

6 additions, 12 deletions

```diff
@@ -70,10 +70,8 @@ Check out [Installing {{agent}}](docs-content://reference/fleet/install-elastic-
 
 1. Go to the {{kib}} home page, and click **Add integrations**.
 
-:::{image} images/kibana-home.png
-:alt: {{kib}} home page
-:class: screenshot
-:::
+% TO DO: Use `:class: screenshot`
+![{{kib}} home page](images/kibana-home.png)
 
 2. In the query bar, search for **{{ls}}** and select the integration to see more details.
 3. Click **Add {{ls}}**.
@@ -135,10 +133,8 @@ After you have confirmed enrollment and data is coming in, click **View assets*
 
 For traditional Stack Monitoring UI, the dashboards marked **[Logs {{ls}}]** are used to visualize the logs produced by your {{ls}} instances, with those marked **[Metrics {{ls}}]** for metrics dashboards. These are populated with data only if you selected the **Metrics (Elastic Agent)** checkbox.
 
-:::{image} images/integration-assets-dashboards.png
-:alt: Integration assets
-:class: screenshot
-:::
+% TO DO: Use `:class: screenshot`
+![Integration assets](images/integration-assets-dashboards.png)
 
 A number of dashboards are included to view {{ls}} as a whole, and dashboards that allow you to drill-down into how {{ls}} is performing on a node, pipeline and plugin basis.
 
@@ -147,9 +143,7 @@ A number of dashboards are included to view {{ls}} as a whole, and dashboards th
 
 From the list of assets, open the **[Metrics {{ls}}] {{ls}} overview** dashboard to view overall performance. Then follow the navigation panel to further drill down into {{ls}} performance.
 
-:::{image} images/integration-dashboard-overview.png
-:alt: The {{ls}} Overview dashboard in {{kib}} with various metrics from your monitored {ls}
-:class: screenshot
-:::
+% TO DO: Use `:class: screenshot`
+![The {{ls}} Overview dashboard in {{kib}} with various metrics from your monitored {ls}](images/integration-dashboard-overview.png)
 
 You can hover over any visualization to adjust its settings, or click the **Edit** button to make changes to the dashboard. To learn more, refer to [Dashboard and visualizations](docs-content://explore-analyze/dashboards.md).
```

docs/reference/dead-letter-queues.md

1 addition, 3 deletions

```diff
@@ -24,9 +24,7 @@ Each event written to the dead letter queue includes the original event, metadat
 
 To process events in the dead letter queue, create a Logstash pipeline configuration that uses the [`dead_letter_queue` input plugin](logstash-docs-md://lsr/plugins-inputs-dead_letter_queue.md) to read from the queue. See [Processing events in the dead letter queue](#processing-dlq-events) for more information.
 
-:::{image} images/dead_letter_queue.png
-:alt: Diagram showing pipeline reading from the dead letter queue
-:::
+![Diagram showing pipeline reading from the dead letter queue](images/dead_letter_queue.png)
 
 
 ## {{es}} processing and the dead letter queue [es-proc-dlq]
```
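
The `dead_letter_queue` input plugin referenced in the hunk above is what a reprocessing pipeline would use to read those failed events back in. A minimal sketch, assuming a default data directory layout and a pipeline ID of `main` (the path and the stdout output are placeholders for illustration, not values from this commit):

```
input {
  dead_letter_queue {
    path => "/path/to/data/dead_letter_queue"   # placeholder: <path.data>/dead_letter_queue on your node
    commit_offsets => true                      # remember the read position so events are not reprocessed on restart
    pipeline_id => "main"                       # ID of the pipeline whose failed events should be read
  }
}

output {
  stdout {
    codec => rubydebug { metadata => true }     # print each event along with its @metadata, including DLQ details
  }
}
```

From there, a filter block could correct whatever caused the original failure before sending the events back to {{es}}.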

docs/reference/deploying-scaling-logstash.md

4 additions, 12 deletions

```diff
@@ -14,9 +14,7 @@ The goal of this document is to highlight the most common architecture patterns
 
 For first time users, if you simply want to tail a log file to grasp the power of the Elastic Stack, we recommend trying [Filebeat Modules](beats://reference/filebeat/filebeat-modules-overview.md). Filebeat Modules enable you to quickly collect, parse, and index popular log types and view pre-built Kibana dashboards within minutes. [Metricbeat Modules](beats://reference/metricbeat/metricbeat-modules.md) provide a similar experience, but with metrics data. In this context, Beats will ship data directly to Elasticsearch where [Ingest Nodes](docs-content://manage-data/ingest/transform-enrich/ingest-pipelines.md) will process and index your data.
 
-:::{image} images/deploy1.png
-:alt: deploy1
-:::
+![deploy1](images/deploy1.png)
 
 
 ### Introducing Logstash [_introducing_logstash]
@@ -42,9 +40,7 @@ Beats and Logstash make ingest awesome. Together, they provide a comprehensive s
 
 Beats run across thousands of edge host servers, collecting, tailing, and shipping logs to Logstash. Logstash serves as the centralized streaming engine for data unification and enrichment. The [Beats input plugin](logstash-docs-md://lsr/plugins-inputs-beats.md) exposes a secure, acknowledgement-based endpoint for Beats to send data to Logstash.
 
-:::{image} images/deploy2.png
-:alt: deploy2
-:::
+![deploy2](images/deploy2.png)
 
 ::::{note}
 Enabling persistent queues is strongly recommended, and these architecture characteristics assume that they are enabled. We encourage you to review the [Persistent queues (PQ)](/reference/persistent-queues.md) documentation for feature benefits and more details on resiliency.
@@ -97,9 +93,7 @@ If external monitoring is preferred, there are [monitoring APIs](monitoring-logs
 
 Users may have other mechanisms of collecting logging data, and it’s easy to integrate and centralize them into the Elastic Stack. Let’s walk through a few scenarios:
 
-:::{image} images/deploy3.png
-:alt: deploy3
-:::
+![deploy3](images/deploy3.png)
 
 
 ### TCP, UDP, and HTTP Protocols [_tcp_udp_and_http_protocols]
@@ -145,9 +139,7 @@ If you are leveraging message queuing technologies as part of your existing infr
 
 For users who want to integrate data from existing Kafka deployments or require the underlying usage of ephemeral storage, Kafka can serve as a data hub where Beats can persist to and Logstash nodes can consume from.
 
-:::{image} images/deploy4.png
-:alt: deploy4
-:::
+![deploy4](images/deploy4.png)
 
 The other TCP, UDP, and HTTP sources can persist to Kafka with Logstash as a conduit to achieve high availability in lieu of a load balancer. A group of Logstash nodes can then consume from topics with the [Kafka input](logstash-docs-md://lsr/plugins-inputs-kafka.md) to further transform and enrich the data in transit.
 
```
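
The Beats-to-Logstash and Kafka-as-data-hub patterns described in the hunks above are typically wired up with the `beats` and `kafka` input plugins. A minimal sketch, where the port, broker address, topic, and {{es}} endpoint are assumptions for illustration rather than values from this page:

```
input {
  # Centralized, acknowledgement-based endpoint that Beats agents ship to
  beats {
    port => 5044
  }

  # Alternative for the Kafka-as-data-hub architecture: a pool of Logstash
  # nodes consuming from the same topic via a shared consumer group
  # kafka {
  #   bootstrap_servers => "kafka1:9092"
  #   topics => ["beats-logs"]
  #   group_id => "logstash"
  # }
}

output {
  elasticsearch {
    hosts => ["https://localhost:9200"]
  }
}
```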

docs/reference/first-event.md

1 addition, 3 deletions

```diff
@@ -9,9 +9,7 @@ First, let’s test your Logstash installation by running the most basic *Logsta
 
 A Logstash pipeline has two required elements, `input` and `output`, and one optional element, `filter`. The input plugins consume data from a source, the filter plugins modify the data as you specify, and the output plugins write the data to a destination.
 
-:::{image} images/basic_logstash_pipeline.png
-:alt: basic logstash pipeline
-:::
+![basic logstash pipeline](images/basic_logstash_pipeline.png)
 
 To test your Logstash installation, run the most basic Logstash pipeline.
 
```
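
The paragraph in this hunk describes the input → filter → output structure. As a sketch, the most basic pipeline with only the two required elements reads from standard input and writes to standard output, with no filter:

```
input {
  stdin { }     # consume each line you type as an event
}

output {
  stdout { }    # write each event back to the console
}
```

The same configuration can also be passed inline on the command line with the `-e` flag when experimenting.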

docs/reference/logstash-centralized-pipeline-management.md

1 addition, 3 deletions

```diff
@@ -30,9 +30,7 @@ To manage Logstash pipelines in {{kib}}:
 
 1. Open {{kib}} in your browser and go to the Management tab. If you’ve set up configuration management correctly, you’ll see an area for managing Logstash.
 
-:::{image} images/centralized_config.png
-:alt: centralized config
-:::
+![centralized config](images/centralized_config.png)
 
 2. Click the **Pipelines** link.
 3. To add a new pipeline, click **Create pipeline** and specify values.
```

docs/reference/logstash-monitoring-ui.md

2 additions, 6 deletions

```diff
@@ -7,15 +7,11 @@ mapped_pages:
 
 Use the {{stack}} {{monitor-features}} to view metrics and gain insight into how your {{ls}} deployment is running. In the overview dashboard, you can see all events received and sent by Logstash, plus info about memory usage and uptime:
 
-:::{image} images/overviewstats.png
-:alt: Logstash monitoring overview dashboard in Kibana
-:::
+![Logstash monitoring overview dashboard in Kibana](images/overviewstats.png)
 
 Then you can drill down to see stats about a specific node:
 
-:::{image} images/nodestats.png
-:alt: Logstash monitoring node stats dashboard in Kibana
-:::
+![Logstash monitoring node stats dashboard in Kibana](images/nodestats.png)
 
 ::::{note}
 A {{ls}} node is considered unique based on its persistent UUID, which is written to the [`path.data`](/reference/logstash-settings-file.md) directory when the node starts.
```

docs/reference/logstash-pipeline-viewer.md

4 additions, 8 deletions

```diff
@@ -9,10 +9,8 @@ The pipeline viewer UI offers additional visibility into the behavior and perfor
 
 The pipeline viewer highlights CPU% and event latency in cases where the values are anomalous. This information helps you quickly identify processing that is disproportionately slow.
 
-:::{image} images/pipeline-tree.png
-:alt: Pipeline Viewer
-:class: screenshot
-:::
+% TO DO: Use `:class: screenshot`
+![Pipeline Viewer](images/pipeline-tree.png)
 
 
 ## Prerequisites [_prerequisites]
@@ -35,10 +33,8 @@ Each pipeline is identified by a pipeline ID (`main` by default). For each pipel
 
 Many elements in the tree are clickable. For example, you can click the plugin name to expand the detail view.
 
-:::{image} images/pipeline-input-detail.png
-:alt: Pipeline Input Detail
-:class: screenshot
-:::
+% TO DO: Use `:class: screenshot`
+![Pipeline Input Detail](images/pipeline-input-detail.png)
 
 Click the arrow beside a branch name to collapse or expand it.
 
```

docs/reference/monitoring-internal-collection-legacy.md

1 addition, 3 deletions

```diff
@@ -169,9 +169,7 @@ To monitor Logstash nodes:
 5. Restart your Logstash nodes.
 6. To verify your monitoring configuration, point your web browser at your {{kib}} host, and select **Stack Monitoring** from the side navigation. If this is an initial setup, select **set up with self monitoring** and click **Turn on monitoring**. Metrics reported from your Logstash nodes should be visible in the Logstash section. When security is enabled, to view the monitoring dashboards you must log in to {{kib}} as a user who has the `kibana_user` and `monitoring_user` roles.
 
-:::{image} images/monitoring-ui.png
-:alt: Monitoring
-:::
+![Monitoring](images/monitoring-ui.png)
 
 
 
```
docs/reference/monitoring-with-elastic-agent.md

+4-8
Original file line numberDiff line numberDiff line change
@@ -77,10 +77,8 @@ Check out [Installing {{agent}}](docs-content://reference/fleet/install-elastic-
7777

7878
1. Go to the {{kib}} home page, and click **Add integrations**.
7979

80-
:::{image} images/kibana-home.png
81-
:alt: {{kib}} home page
82-
:class: screenshot
83-
:::
80+
% TO DO: Use `:class: screenshot`
81+
![{{kib}} home page](images/kibana-home.png)
8482

8583
2. In the query bar, search for **{{ls}}** and select the integration to see more details about it.
8684
3. Click **Add {{ls}}**.
@@ -142,10 +140,8 @@ After you have confirmed enrollment and data is coming in, click **View assets*
142140

143141
For traditional Stack Monitoring UI, the dashboards marked **[Logs {{ls}}]** are used to visualize the logs produced by your {{ls}} instances, with those marked **[Metrics {{ls}}]** for metrics dashboards. These are populated with data only if you selected the **Metrics (Elastic Agent)** checkbox.
144142

145-
:::{image} images/integration-assets-dashboards.png
146-
:alt: Integration assets
147-
:class: screenshot
148-
:::
143+
% TO DO: Use `:class: screenshot`
144+
![Integration assets](images/integration-assets-dashboards.png)
149145

150146
A number of dashboards are included to view {{ls}} as a whole, and dashboards that allow you to drill-down into how {{ls}} is performing on a node, pipeline and plugin basis.
151147

docs/reference/serverless-monitoring-with-elastic-agent.md

2 additions, 4 deletions

```diff
@@ -64,9 +64,7 @@ For the best experience with the Logstash dashboards, we recommend collecting al
 
 From the list of assets, open the **[Metrics {{ls}}] {{ls}} overview** dashboard to view overall performance. Then follow the navigation panel to further drill down into {{ls}} performance.
 
-:::{image} images/integration-dashboard-overview.png
-:alt: The {{ls}} Overview dashboard in {{kib}} with various metrics from your monitored {ls}
-:class: screenshot
-:::
+% TO DO: Use `:class: screenshot`
+![The {{ls}} Overview dashboard in {{kib}} with various metrics from your monitored {ls}](images/integration-dashboard-overview.png)
 
 You can hover over any visualization to adjust its settings, or click the **Edit** button to make changes to the dashboard. To learn more, refer to [Dashboard and visualizations](docs-content://explore-analyze/dashboards.md).
```

docs/reference/tuning-logstash.md

4 additions, 6 deletions

```diff
@@ -61,13 +61,11 @@ If you plan to modify the default pipeline settings, take into account the follo
 
 When tuning Logstash you may have to adjust the heap size. You can use the [VisualVM](https://visualvm.github.io/) tool to profile the heap. The **Monitor** pane in particular is useful for checking whether your heap allocation is sufficient for the current workload. The screenshots below show sample **Monitor** panes. The first pane examines a Logstash instance configured with too many inflight events. The second pane examines a Logstash instance configured with an appropriate amount of inflight events. Note that the specific batch sizes used here are most likely not applicable to your specific workload, as the memory demands of Logstash vary in large part based on the type of messages you are sending.
 
-:::{image} images/pipeline_overload.png
-:alt: pipeline overload
-:::
+% TO DO: Use `:class: screenshot`
+![pipeline overload](images/pipeline_overload.png)
 
-:::{image} images/pipeline_correct_load.png
-:alt: pipeline correct load
-:::
+% TO DO: Use `:class: screenshot`
+![pipeline correct load](images/pipeline_correct_load.png)
 
 In the first example we see that the CPU isn’t being used very efficiently. In fact, the JVM is often times having to stop the VM for “full GCs”. Full garbage collections are a common symptom of excessive memory pressure. This is visible in the spiky pattern on the CPU chart. In the more efficiently configured example, the GC graph pattern is more smooth, and the CPU is used in a more uniform manner. You can also see that there is ample headroom between the allocated heap size, and the maximum allowed, giving the JVM GC a lot of room to work with.
 
```
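
As a rough way to reason about the "inflight events" mentioned in this hunk: the number of events held in memory at one time is on the order of the number of pipeline workers multiplied by the batch size (the `pipeline.workers` and `pipeline.batch.size` settings), so, for example, 8 workers with a batch size of 250 keeps roughly 8 × 250 = 2,000 events in flight, and doubling the batch size roughly doubles that memory pressure. These figures are illustrative only; as the paragraph above notes, the per-event memory cost depends heavily on the messages you are sending.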
