|
1 | | -//! # Cano: Simple & Fast Async Workflows in Rust |
| 1 | +//! # Cano: Type-Safe Async Workflow Engine |
2 | 2 | //! |
3 | | -//! Cano is an async workflow engine that makes complex data processing simple. Whether you need |
4 | | -//! to process one item or millions, Cano provides a clean API with minimal overhead for maximum performance. |
| 3 | +//! Cano is a high-performance orchestration engine designed for building resilient, self-healing systems in Rust. |
| 4 | +//! Unlike simple task queues, Cano uses **Finite State Machines (FSM)** to define strict, type-safe transitions between processing steps. |
| 5 | +//! |
| 6 | +//! It excels at managing complex lifecycles where state transitions matter: |
| 7 | +//! * **Data Pipelines**: ETL jobs with parallel processing (Split/Join) and aggregation. |
| 8 | +//! * **AI Agents**: Multi-step inference chains with shared context and memory. |
| 9 | +//! * **Background Systems**: Scheduled maintenance, periodic reporting, and distributed cron jobs. |
5 | 10 | //! |
6 | 11 | //! ## 🚀 Quick Start |
7 | 12 | //! |
|
11 | 16 | //! |
12 | 17 | //! ## 🎯 Core Concepts |
13 | 18 | //! |
| 19 | +//! ### Finite State Machines (FSM) |
| 20 | +//! |
| 21 | +//! Workflows in Cano are state machines. You define your states as an `enum`, and register |
| 22 | +//! handlers ([`Task`] or [`Node`]) for each state. The engine ensures type safety and |
| 23 | +//! manages transitions between states. |
| 24 | +//! |
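To make the FSM idea concrete, here is a minimal, self-contained sketch of an enum-driven state machine. The `State` enum, `step` handler, and `run_workflow` driver are illustrative stand-ins written for this example only, not Cano's actual API.

```rust
// Illustrative sketch only: a hand-rolled state machine, NOT Cano's API.
// States are an enum; each state's handler returns the next state.
#[derive(Debug, Clone, Copy, PartialEq)]
enum State {
    Load,
    Process,
    Done, // terminal state
}

// One handler per state; the return value is the transition.
fn step(state: State, counter: &mut u32) -> State {
    match state {
        State::Load => {
            *counter += 1; // stand-in for loading data
            State::Process
        }
        State::Process => {
            *counter *= 10; // stand-in for core processing
            State::Done
        }
        State::Done => State::Done,
    }
}

// Drive the machine until it reaches the terminal state.
fn run_workflow() -> u32 {
    let mut counter = 0;
    let mut state = State::Load;
    while state != State::Done {
        state = step(state, &mut counter);
    }
    counter
}
```

Because every transition is a value of the `enum`, the compiler rejects transitions to states that do not exist — the type-safety property the docs describe.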
14 | 25 | //! ### Tasks & Nodes - Your Processing Units |
15 | 26 | //! |
16 | 27 | //! **Two approaches for implementing processing logic:** |
|
19 | 30 | //! |
20 | 31 | //! **Every [`Node`] automatically implements [`Task`]**, providing seamless interoperability and upgrade paths. |
21 | 32 | //! |
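The Node-to-Task relationship can be sketched with a blanket impl. The trait names mirror the docs, but the signatures and the three-phase `prep`/`exec`/`post` split shown here are simplified assumptions for illustration, not Cano's real trait definitions.

```rust
// Illustrative sketch only: how a blanket impl can make every Node a Task.
// Signatures are assumptions, not Cano's actual traits.
trait Task {
    fn run(&self, input: i32) -> i32;
}

trait Node {
    fn prep(&self, input: i32) -> i32; // load/validate
    fn exec(&self, prepped: i32) -> i32; // core processing
    fn post(&self, result: i32) -> i32; // store/route
}

// Blanket impl: any Node runs its three phases in order as a Task.
impl<T: Node> Task for T {
    fn run(&self, input: i32) -> i32 {
        self.post(self.exec(self.prep(input)))
    }
}

struct Doubler;
impl Node for Doubler {
    fn prep(&self, input: i32) -> i32 {
        input + 1
    }
    fn exec(&self, prepped: i32) -> i32 {
        prepped * 2
    }
    fn post(&self, result: i32) -> i32 {
        result
    }
}
```

A blanket impl like this is the usual mechanism behind "every `Node` automatically implements `Task`": anything usable as a structured `Node` is also accepted wherever a plain `Task` is expected.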
| 33 | +//! ### Parallel Execution (Split/Join) |
| 34 | +//! |
| 35 | +//! Run tasks concurrently and join their results with strategies such as `All`, `Any`, `Quorum`, or `PartialResults`.
| 36 | +//! This enables patterns like scatter-gather, redundant execution, and racing tasks to reduce latency.
| 37 | +//! |
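As an illustration of one join strategy, here is a standalone `Quorum`-style join over already-collected task results: it succeeds only if at least `quorum` tasks succeeded. The function and its signature are hypothetical, not Cano's Split/Join API.

```rust
// Illustrative sketch only: a quorum-style join, NOT Cano's Split/Join API.
// Succeeds with the successful values if at least `quorum` tasks succeeded.
fn quorum_join(results: &[Result<i32, String>], quorum: usize) -> Option<Vec<i32>> {
    let oks: Vec<i32> = results
        .iter()
        .filter_map(|r| r.as_ref().ok().copied())
        .collect();
    if oks.len() >= quorum { Some(oks) } else { None }
}
```

The other strategies named above fall out of the same shape: `All` is a quorum equal to the task count, `Any` is a quorum of one, and `PartialResults` returns whatever succeeded without a threshold.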
22 | 38 | //! ### Store - Share Data Between Processing Units |
23 | 39 | //! |
24 | 40 | //! Use [`MemoryStore`] to pass data around your workflow. Store different types of data |
25 | 41 | //! using key-value pairs, and retrieve them later with type safety. All values are |
26 | 42 | //! wrapped in `std::borrow::Cow` for memory efficiency. |
27 | 43 | //! |
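A type-safe key-value store of this kind can be sketched with `std::any::Any` downcasting. This standalone `Store` is an illustration under assumptions — it omits the `std::borrow::Cow` wrapping that [`MemoryStore`] uses — and is not Cano's implementation.

```rust
// Illustrative sketch only: type-safe key-value storage via `Any` downcasting.
// Cano's MemoryStore additionally wraps values in `Cow`; omitted here for brevity.
use std::any::Any;
use std::collections::HashMap;

struct Store {
    map: HashMap<String, Box<dyn Any>>,
}

impl Store {
    fn new() -> Self {
        Store { map: HashMap::new() }
    }

    // Store any 'static value under a string key.
    fn put<T: 'static>(&mut self, key: &str, value: T) {
        self.map.insert(key.to_string(), Box::new(value));
    }

    // Returns None if the key is missing or the stored type is not T.
    fn get<T: 'static>(&self, key: &str) -> Option<&T> {
        self.map.get(key)?.downcast_ref::<T>()
    }
}
```

The `downcast_ref::<T>()` check is what gives the "retrieve them later with type safety" behavior: asking for the wrong type yields `None` instead of a corrupted value.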
28 | | -//! ### Custom Logic - Your Business Implementation |
29 | | -//! |
30 | | -//! **Choose the right approach for your needs:** |
31 | | -//! - Implement the [`Task`] trait for simple, single-method processing |
32 | | -//! - Implement the [`Node`] trait for structured processing with three phases: |
33 | | -//! Prep (load data, validate inputs), Exec (core processing), and Post (store results, determine next action) |
34 | | -//! |
35 | 44 | //! ## 🏗️ Processing Lifecycle |
36 | 45 | //! |
37 | 46 | //! **Task**: Single `run()` method with full control over execution flow |
|
56 | 65 | //! - Fluent configuration API via [`TaskConfig`] |
57 | 66 | //! |
58 | 67 | //! - **[`workflow`]**: Core workflow orchestration |
59 | | -//! - [`Workflow`] for state machine-based workflows |
| 68 | +//! - [`Workflow`] for state machine-based workflows with Split/Join support |
60 | 69 | //! |
61 | 70 | //! - **[`scheduler`]** (optional `scheduler` feature): Advanced workflow scheduling |
62 | 71 | //! - [`Scheduler`] for managing multiple flows with cron support |
|