guides/throttle.md
## API
See `m:amoc_throttle`.
## Overview
Amoc throttle is a module that allows limiting the number of users' actions per given interval, no matter how many users there are in a test.
It works in both local and distributed environments, allows for dynamic rate changes during a test and exposes telemetry events showing the number of requests and executions.

Amoc throttle allows you to:

- Set the execution `Rate` per `Interval`, or inversely, the `Interarrival` time between actions.
- Limit the number of parallel executions when `Interval` is set to `0`.
Each throttle is identified with a `Name`.
The rate limiting mechanism allows responding to a request only when it does not exceed the given throttle.
Amoc throttle makes sure that the given throttle is maintained at a constant level.
It prevents bursts of executions which could blur the results, as they technically produce the desired rate in a given interval.
Because of that, the actual throttle rate may fall slightly below the demanded rate; however, it will never be exceeded.
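For illustration, the configuration styles described above could look like this. This is only a sketch: the exact shape of the configuration (the `rate`, `interval` and `interarrival` keys and the arity of `start`) is an assumption; refer to `m:amoc_throttle` for the authoritative API.

```erlang
%% Sketch only; the config map keys below are assumptions, see m:amoc_throttle.
%% 100 executions per minute:
amoc_throttle:start(messages_rate, #{rate => 100, interval => 60000}),
%% equivalently expressed as one execution every 600 ms (interarrival time):
amoc_throttle:start(paced_messages, #{interarrival => 600}),
%% at most 10 parallel executions (Interval set to 0):
amoc_throttle:start(parallel_requests, #{rate => 10, interval => 0}).
```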
## Examples
```erlang
user_loop(Id) ->
    amoc_throttle:wait(messages_rate),
    send_message(Id),
    user_loop(Id).
```
Here a system should be under a continuous load of 100 messages per minute.
Note that if we used something like `amoc_throttle:run(messages_rate, fun() -> send_message(Id) end)` instead of `amoc_throttle:wait/1`, the system would be flooded with requests.
A test may of course be much more complicated.
For example, it can have the load change over time.
A plan for that can be set for the whole test in `init/0`:
```erlang
init() ->
    amoc_throttle:start(messages_rate, 100),
    %% 9 steps of 100 increases in Rate, each lasting one minute
    ...
```
Normal Erlang messages can be used to schedule tasks for users, either by the users themselves or by some controller process.
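Such message-based scheduling could be sketched as below. The `send_message/1` helper and the one-second pacing are illustrative assumptions; the throttle still caps the actual execution rate regardless of how often `send` messages arrive.

```erlang
%% Sketch: a user paced by plain Erlang messages.
%% `send_message/1` is a hypothetical scenario helper.
message_driven_loop(Id) ->
    receive
        send ->
            %% run/2 executes the fun asynchronously, subject to the throttle
            amoc_throttle:run(messages_rate, fun() -> send_message(Id) end),
            %% schedule the next send for this user
            erlang:send_after(1000, self(), send),
            message_driven_loop(Id)
    end.
```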
For a more comprehensive example please refer to the `throttle_test` scenario.
- `amoc_throttle_controller.erl` - a gen_server which is responsible for reacting to requests and managing `throttle_processes`.
In a distributed environment an instance of `throttle_controller` runs on every node, and the one running on the master Amoc node stores the state for all nodes.
- `amoc_throttle_process.erl` - a gen_server module which implements the logic responsible for limiting the rate.
For every `Name`, a number of processes are created, each responsible for keeping executions at a level proportional to their part of the throttle.
### Distributed environment
#### Metrics
In a distributed environment, every Amoc node with a throttle started exposes telemetry events showing the numbers of requests and executions.
Those exposed by the master node show the aggregate of all telemetry events from all nodes.
This makes it possible to quickly see the real rates across the whole system.
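Such telemetry events could be consumed with a handler like the one below. This is a sketch: the event names are illustrative assumptions, not necessarily the exact names Amoc emits; check `m:amoc_throttle` for the actual events.

```erlang
%% Sketch: logging throttle telemetry events.
%% The event names below are assumptions for illustration only.
attach_throttle_logger() ->
    telemetry:attach_many(
      <<"throttle-logger">>,
      [[amoc, throttle, request], [amoc, throttle, execute]],
      fun(EventName, Measurements, Metadata, _Config) ->
          logger:info("~p: ~p ~p", [EventName, Measurements, Metadata])
      end,
      undefined).
```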
#### Workflow
Then a runner process is spawned on the same node.
Its task will be to execute `Fun` asynchronously.
A random throttle process assigned to the `Name` is asked for permission for the asynchronous runner to execute `Fun`.
When the request reaches the master node, where throttle processes reside, the request metric on the master node is updated and the throttle process which got the request starts monitoring the asynchronous runner process.
Then, depending on the system's load and the current rate of executions, the asynchronous runner is allowed to run the `Fun` or compelled to wait, because executing the function would exceed the calculated throttle.
When the rate finally allows it, the asynchronous runner gets the permission to run the function from the throttle process.
Both processes increase the metrics which count executions, but each metric is assigned to its own node.
Then the asynchronous runner tries to execute `Fun`.
It may succeed or fail; either way it dies, and an `'EXIT'` signal is sent to the throttle process.
This way the throttle process knows that the execution of a task has ended and can allow a different process connected to the same `Name` to run its task, if the current throttle allows it.
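The permission mechanism described above can be sketched, in a much simplified form, as a process that grants execution slots and frees them on `'EXIT'`. This is not the actual `amoc_throttle_process` implementation; it only illustrates the parallel-execution case (`Interval` set to `0`).

```erlang
%% Much simplified sketch of the permission loop; the spawning side
%% must call process_flag(trap_exit, true) is assumed to be set in
%% this process so runner deaths arrive as 'EXIT' messages.
%% Requests above the limit stay in the mailbox (selective receive)
%% until a running task exits and frees a slot.
permission_loop(MaxRunning, Running) ->
    receive
        {request, RunnerPid} when Running < MaxRunning ->
            link(RunnerPid),
            RunnerPid ! permission,
            permission_loop(MaxRunning, Running + 1);
        {'EXIT', _RunnerPid, _Reason} ->
            permission_loop(MaxRunning, Running - 1)
    end.
```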
Below is a graph showing the communication between processes on different nodes described above.