This project uses egui for its GUI, together with
egui_dock for the docking layout and
eframe, a simple framework that
handles the setup of egui and the application lifecycle.
nalgebra is used for algebraic operations.
tokio serves as the concurrent runtime and forms the
backbone of the pipeline's execution; it enables the use of async/await
operations.
To accomplish advanced parallelism, this project uses
rayon. Many algorithms are implemented using
rayon's parallel iterators. This means a simple, serial for loop without
interdependent iterations is replaced by a parallel iterator, like so:

```rust
let mut data: Vec<f32> = vec![ ... ];
for e in data.iter_mut() {
    *e *= 2.0;
}
```

becomes:

```rust
let mut data: Vec<f32> = vec![ ... ];
data.par_iter_mut()
    .for_each(|e| {
        *e *= 2.0
    });
```

The main entry point uses eframe to run the main app
structure.
The main app consists of the high-level
Pipeline, the pipeline's execution
system, and the pipeline's editing
state, as well as a
DataViewsState, a
DataViewsManager, and a
ViewsExecutor. It also stores the
global DockState and provides a
Cache.
The main update cycle is run in the update method and
consists of the following steps:
- Render all the UI. In this step the Pipeline description also gets updated by the node graph editor.
- Update the DataViewsState using the DataViewsManager and additional information obtained from the node graph editor.
- Update the pipeline execution system using the Pipeline description.
- Update the data view execution system using the DataViewsState.
- If the user requested it, load a new pipeline.
High-level Pipeline
The Pipeline structure is only a high-level
description, which is easy to modify in an editing environment. It consists of
multiple nodes, each identified by a NodeId. Each node has a set of inputs and
outputs, identified by an InputId and OutputId respectively. Only the input
side knows if it is connected to any output, preventing redundancy and allowing
for multiple inputs connecting to the same output.
Every node is described by the PipelineNode
trait. Implementers are required to implement methods to test for changes in the
settings, query their inputs, and create a
NodeTask.
All nodes are implemented here.
```mermaid
classDiagram
Pipeline *-- "0..*" PipelineNode
PipelineNode <|-- FilterNode
PipelineNode <|-- FollowLumenNode
class Pipeline
class PipelineNode {
<<trait>>
inputs() NodeOutput[]
changed(PipelineNode) bool
create_task() NodeTask
}
class FilterNode {
input: NodeOutput
}
class FollowLumenNode {
input1: NodeOutput
input2: NodeOutput
}
FilterNode o-- NodeOutput
FollowLumenNode o-- NodeOutput
class NodeOutput {
node_id: NodeId
output_id: OutputId
}
```
The NodeGraphEditor is
completely decoupled from the pipeline. This means it does not know what the
pipeline is. It only knows about a node graph, described using the
EditNodeGraph trait and nodes described by
the EditNode trait.
EditNodeGraph defines that in a node graph, nodes can be queried, added and
removed. A node described by EditNode defines some visual attributes, as well
as how to connect and disconnect their inputs to outputs. The EditNode::ui
method describes the UI of the node in a procedural way, including all inputs,
outputs and all settings a node has.
All nodes are implemented here.
The execution system is represented by
PipelineExecutor. The execution
model is based on concurrent
tokio::tasks. Every node in the
pipeline has an associated task. This task is referred to as a node task and
runs an event loop, which listens to multiple message channels from
tokio::sync. Some
channels are connected to the PipelineExecutor, which sends notices about
configuration changes. Others are connected to other node tasks, to receive and
send data through the pipeline.
The NodeTask trait describes a specific
node's task. The implementer must provide methods to connect and disconnect
inputs and to sync the task's settings with the high-level
PipelineNode. The implementer is also responsible for
implementing the logic that listens on outputs and handles their requests, and
is free to use its inputs to request whatever data is needed to respond to a request.
All nodes are implemented here. The implementation of algorithms is always found at the bottom of the file.
Algorithm implementations:
- All filters: /src/pipeline/nodes/filter.rs (line 490)
- Follow lumen: /src/pipeline/nodes/follow_lumen.rs (line 371)
- Follow catheter: /src/pipeline/nodes/follow_catheter.rs (line 327)
- Generate mesh: /src/pipeline/nodes/generate_mesh.rs (line 249)
- Calculate diameter: /src/pipeline/nodes/diameter.rs (line 228)
- Process raw M scan
- Remove detector offset: /src/pipeline/nodes/remove_detector_defect.rs (line 156)
- Segment B scan: /src/pipeline/nodes/segment_b_scans.rs (line 241)