
Commit a716de3

Start fleshing out content and areas to explore
1 parent 2af381a commit a716de3

File tree: 2 files changed (+71, -8 lines)


docs/static/mem-queue.asciidoc

Lines changed: 49 additions & 8 deletions
@@ -1,10 +1,51 @@
 [[memory-queue]]
-=== Memory queue
-
-By default, Logstash uses in-memory bounded queues between pipeline stages
-(inputs → pipeline workers) to buffer events. The size of these in-memory
-queues is fixed and not configurable. If Logstash experiences a temporary
-machine failure, the contents of the in-memory queue will be lost. Temporary machine
-failures are scenarios where Logstash or its host machine are terminated
-abnormally but are capable of being restarted.
+=== Memory queue
+
+By default, Logstash uses in-memory bounded queues between pipeline stages (inputs → pipeline workers) to buffer events.
+If Logstash experiences a temporary machine failure, the contents of the memory queue will be lost.
+Temporary machine failures are scenarios where Logstash or its host machine are terminated abnormally, but are capable of being restarted.
+
+[[mem-queue-benefits]]
+==== Benefits of memory queues
+
+The memory queue might be a good choice if you value throughput over data resiliency.
+
+* Easier configuration
+* Easier management and administration
+* Faster throughput
+
+[[mem-queue-limitations]]
+==== Limitations of memory queues
+
+* Can lose data in abnormal termination
+* Don't do well handling sudden bursts of data, where extra capacity is needed for {ls} to catch up
+* Not a good choice for data you can't afford to lose
+
+TIP: Consider using <<persistent-queues,persistent queues>> to avoid these limitations.
+
+[[sizing-mem-queue]]
+==== Memory queue size
+
+Memory queue size is not configured directly.
+Multiply the `pipeline.batch.size` and `pipeline.workers` values to get the size of the memory queue.
+This value is called the "inflight count."
+
+[[backpressure-mem-queue]]
+==== Back pressure
+
+When the queue is full, Logstash puts back pressure on the inputs to stall data
+flowing into Logstash.
+This mechanism helps Logstash control the rate of data flow at the input stage
+without overwhelming outputs like Elasticsearch.
+
+ToDo: Is the next paragraph accurate for MQ?
+
+Each input handles back pressure independently.
+For example, when the
+<<plugins-inputs-beats,beats input>> encounters back pressure, it no longer
+accepts new connections.
+It waits until the queue has space to accept more events.
+After the filter and output stages finish processing existing
+events in the queue and ACKs them, Logstash automatically starts accepting new
+events.
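To make the <<sizing-mem-queue>> arithmetic in the added text above concrete, a minimal sketch follows; the worker and batch values are assumptions chosen for illustration, not values taken from this commit.

[source,yaml]
----
# logstash.yml (illustrative values, assumed for this example)
pipeline.workers: 4        # for example, one worker per CPU core
pipeline.batch.size: 125   # events each worker collects per batch

# Memory queue capacity ("inflight count"):
#   pipeline.workers * pipeline.batch.size = 4 * 125 = 500 events in flight
----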

docs/static/resiliency.asciidoc

Lines changed: 22 additions & 0 deletions
@@ -1,6 +1,28 @@
 [[resiliency]]
 == Data resiliency
 
+
+/////
+What happens when the queue is full?
+Input plugins push data into the queue, and filters pull out. If the queue (persistent or memory) is full, then the input plugin thread blocks.
+
+See handling backpressure topic. Relocate this info for better visibility?
+/////
+
+
+/////
+Settings in logstash.yml and pipelines.yml can interact in unintuitive ways.
+
+A setting on a pipeline in pipelines.yml takes precedence, falling back to the value in logstash.yml if there is no setting present for the specific pipeline, and falling back to the default if there is no value present in logstash.yml.
+
+^^ This is true for any setting in both logstash.yml and pipelines.yml, but seems to trip people up in PQs. Other queues, too?
+/////
+
+
+//ToDo: Add MQ to discussion (for compare/contrast), even though it's not really considered a "resiliency feature". Messaging will need to be updated.
+
+
 As data flows through the event processing pipeline, Logstash may encounter
 situations that prevent it from delivering events to the configured
 output. For example, the data might contain unexpected data types, or
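The precedence described in the second commented note above (a per-pipeline value in pipelines.yml wins, then logstash.yml, then the built-in default) could be illustrated with a sketch along these lines; the pipeline IDs and config paths are hypothetical, and only the documented `queue.type` setting is assumed.

[source,yaml]
----
# logstash.yml -- applies to all pipelines unless a pipeline overrides it
queue.type: persisted

# pipelines.yml -- a per-pipeline setting takes precedence over logstash.yml
- pipeline.id: main                                  # hypothetical pipeline id
  path.config: "/etc/logstash/conf.d/main.conf"      # hypothetical path
  queue.type: memory           # overrides queue.type from logstash.yml
- pipeline.id: metrics                               # hypothetical pipeline id
  path.config: "/etc/logstash/conf.d/metrics.conf"   # hypothetical path
  # no queue.type here: this pipeline falls back to logstash.yml (persisted);
  # if neither file set it, the built-in default would apply
----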
