
This package simulates two different forms of processing:

  • Synchronous/blocking - each thread can handle a single task at a time.
  • Asynchronous/non-blocking - a few threads process all tasks concurrently.

Model

Each task has two attributes:

use std::time::Instant;

struct Task {
    start: Instant,
    /// Time to complete, in seconds.
    cost: u64,
}

If cost > TIMEOUT, the task is considered failed.

After each task is performed, a new TaskStats object is created:

struct TaskStats {
    success: bool,
    start_time: Instant,
    completion_time: Instant,
    /// (time_spent - cost), in seconds.
    overhead: f64,
}
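
For illustration only, converting a finished task into TaskStats could look roughly like this (a minimal sketch: the complete helper and the TIMEOUT value are assumptions, not the package's actual code):

use std::time::Instant;

/// Hypothetical timeout, in seconds (the real value is a configuration detail).
const TIMEOUT: u64 = 30;

/// Hypothetical helper: build TaskStats for a task that finished at `completion_time`.
fn complete(task: &Task, completion_time: Instant) -> TaskStats {
    // Wall-clock time spent on the task, in seconds.
    let time_spent = completion_time.duration_since(task.start).as_secs_f64();
    TaskStats {
        // A task whose cost exceeds TIMEOUT is considered failed.
        success: task.cost <= TIMEOUT,
        start_time: task.start,
        completion_time,
        // Overhead: time spent beyond the task's intrinsic cost.
        overhead: time_spent - task.cost as f64,
    }
}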

Then all stats are aggregated and throughput/latency graphs are plotted.

Synchronous model

There is a fixed number of worker threads that pull tasks from a shared queue. To emulate task duration, sleep(task.cost) is used, which simply blocks the thread for the given duration. A sketch is shown below.
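
As a rough illustration (assumptions: an mpsc channel stands in for the task queue, and run_sync / worker_count are illustrative names, not the package's API):

use std::sync::{mpsc, Arc, Mutex};
use std::thread;
use std::time::Duration;

/// Minimal sketch: `worker_count` threads drain a shared queue of tasks,
/// each blocking for task.cost seconds to emulate the work.
fn run_sync(tasks: Vec<Task>, worker_count: usize) {
    let (tx, rx) = mpsc::channel::<Task>();
    // std's Receiver is single-consumer, so share it behind a Mutex.
    let rx = Arc::new(Mutex::new(rx));

    let workers: Vec<_> = (0..worker_count)
        .map(|_| {
            let rx = Arc::clone(&rx);
            thread::spawn(move || loop {
                // Take one task off the queue; the lock is released at the end
                // of this statement, before the "work" starts.
                let task = match rx.lock().unwrap().recv() {
                    Ok(task) => task,
                    Err(_) => break, // queue closed and drained
                };
                // Each thread handles a single task at a time: it just blocks.
                thread::sleep(Duration::from_secs(task.cost));
            })
        })
        .collect();

    for task in tasks {
        tx.send(task).unwrap();
    }
    drop(tx); // close the queue so workers exit once it is empty

    for worker in workers {
        worker.join().unwrap();
    }
}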

Asynchronous model

There are a few threads that process a common task queue (similar to the synchronous model). However, instead of sleep, it uses delay_for(task.cost).await, which yields execution to other tasks that have pending events.
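
A rough sketch of this model, using the tokio 0.2-era delay_for mentioned above (run_async is an illustrative name, and instead of an explicit shared queue this sketch simply spawns every task onto the runtime's small, fixed pool of worker threads):

use std::time::Duration;
use tokio::time::delay_for; // tokio 0.2 API referenced on this page

/// Minimal sketch: all tasks run concurrently on a few runtime threads;
/// delay_for yields to other tasks instead of blocking a thread.
async fn run_async(tasks: Vec<Task>) {
    let handles: Vec<_> = tasks
        .into_iter()
        .map(|task| {
            tokio::spawn(async move {
                // Suspends this task and lets the worker thread run others.
                delay_for(Duration::from_secs(task.cost)).await;
            })
        })
        .collect();

    // Wait for every task to finish.
    for handle in handles {
        handle.await.unwrap();
    }
}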

The granularity of both sleep and delay_for is 1 ms, which is why the overhead falls in the 1 ms..2 ms range even at low task rates.
