Cano Logo

# Cano: Type-Safe Async Workflow Engine


Orchestrate complex async processes with finite state machines, parallel execution, and built-in scheduling.

> ⚠️ Cano is still far from a 1.0 release. The API is subject to change and may include breaking changes.

## Overview

Cano is a high-performance orchestration engine designed for building resilient, self-healing systems in Rust. Unlike simple task queues, Cano uses finite state machines (FSMs) to define strict, type-safe transitions between processing steps.

It excels at managing complex lifecycles where state transitions matter:

- **Data Pipelines:** ETL jobs with parallel processing (Split/Join) and aggregation.
- **AI Agents:** Multi-step inference chains with shared context and memory.
- **Background Systems:** Scheduled maintenance, periodic reporting, and distributed cron jobs.

The engine is built on three core concepts: Tasks/Nodes for logic, Workflows for state transitions, and Schedulers for timing.
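In spirit, the enum-driven FSM idea can be sketched in plain Rust (this is an illustration, not Cano's actual API): every state a workflow can enter is an enum variant, so the compiler forces the transition logic to be exhaustive and illegal transitions cannot be expressed.

```rust
// Minimal sketch of an enum-driven state machine (plain Rust, not Cano's API).
// The `State` enum enumerates every reachable state; `next` is a total
// transition function that the compiler checks for exhaustiveness.
#[derive(Debug)]
enum State {
    Fetch,
    Process,
    Complete,
    Failed,
}

fn next(state: State, success: bool) -> State {
    match state {
        State::Fetch if success => State::Process,
        State::Process if success => State::Complete,
        State::Fetch | State::Process => State::Failed,
        terminal => terminal, // Complete and Failed are exit states.
    }
}

fn main() {
    let mut state = State::Fetch;
    // Drive the machine until it reaches an exit state.
    while !matches!(state, State::Complete | State::Failed) {
        state = next(state, true);
    }
    println!("{state:?}"); // → Complete
}
```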

## Features

- **Type-Safe State Machines:** Enum-driven transitions with compile-time guarantees.
- **Flexible Processing Units:** Choose between simple Tasks or structured Nodes (Prep/Exec/Post lifecycle).
- **Parallel Execution (Split/Join):** Run tasks concurrently and join results with strategies like `All`, `Any`, `Quorum`, or `PartialResults`.
- **Robust Retry Logic:** Configurable strategies including exponential backoff with jitter and per-attempt timeouts.
- **Circuit Breaker:** A shared `CircuitBreaker` short-circuits calls to failing dependencies before the retry loop, with configurable failure threshold, cool-down, and half-open probing.
- **Built-in Scheduling:** Cron-based, interval, and manual triggers for background jobs.
- **Observability:** Integrated `tracing` support for deep insights into workflow execution.
- **Performance-Focused:** Minimizes heap allocations by leveraging stack-based objects wherever possible, giving you control over where allocations occur.
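To make the retry math concrete, here is a generic sketch of exponential backoff with jitter (not Cano's internal code): the delay cap doubles per attempt up to a maximum, and the actual delay is a random fraction of that cap.

```rust
use std::time::Duration;

// Generic exponential-backoff-with-jitter sketch (not Cano's internals).
// The exponential cap grows as base * 2^attempt, bounded by `max`; the
// actual delay is `jitter` (a fraction in [0, 1], normally drawn from an
// RNG) of that cap — "full jitter".
fn backoff_delay(attempt: u32, base: Duration, max: Duration, jitter: f64) -> Duration {
    let cap = base.saturating_mul(2u32.saturating_pow(attempt)).min(max);
    cap.mul_f64(jitter.clamp(0.0, 1.0))
}

fn main() {
    let base = Duration::from_millis(100);
    let max = Duration::from_secs(5);
    for attempt in 0..6 {
        // Using jitter = 1.0 here to show the cap; real code would randomize.
        println!("attempt {attempt}: up to {:?}", backoff_delay(attempt, base, max, 1.0));
    }
}
```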

## Simple Example: Parallel Processing

Here is a real-world example showing how to split execution into parallel tasks and join them back together.

```mermaid
graph TD
    Start([Start]) --> Split{Split}
    Split -->|Source 1| T1[FetchSourceTask 1]
    Split -->|Source 2| T2[FetchSourceTask 2]
    Split -->|Source 3| T3[FetchSourceTask 3]
    T1 --> Join{Join All}
    T2 --> Join
    T3 --> Join
    Join --> Complete([Complete])
```
```rust
use cano::prelude::*;
use std::time::Duration;

#[derive(Debug, Clone, PartialEq, Eq, Hash)]
enum FlowState {
    Start,
    Complete,
}

// A task that simulates fetching data from a source.
#[derive(Clone)]
struct FetchSourceTask {
    source_id: u32,
}

#[task]
impl Task<FlowState> for FetchSourceTask {
    async fn run(&self, res: &Resources) -> Result<TaskResult<FlowState>, CanoError> {
        // Look up the shared store from the workflow's resources.
        let store = res.get::<MemoryStore, str>("store")?;

        // Simulate async work.
        tokio::time::sleep(Duration::from_millis(100)).await;

        // Store the per-source result for downstream aggregation.
        let key = format!("source_{}", self.source_id);
        store.put(&key, format!("data_from_{}", self.source_id))?;

        Ok(TaskResult::Single(FlowState::Complete))
    }
}

#[tokio::main]
async fn main() -> Result<(), CanoError> {
    // 1. Register shared resources (the store is one resource among many).
    let resources = Resources::new().insert("store", MemoryStore::new());

    // 2. Define parallel tasks.
    let sources = vec![
        FetchSourceTask { source_id: 1 },
        FetchSourceTask { source_id: 2 },
        FetchSourceTask { source_id: 3 },
    ];

    // 3. Configure the join strategy.
    // Wait for ALL tasks to complete successfully before moving to Complete.
    let join_config = JoinConfig::new(JoinStrategy::All, FlowState::Complete)
        .with_timeout(Duration::from_secs(5));

    // 4. Build the workflow: Start -> Split into parallel tasks -> Complete.
    let workflow = Workflow::new(resources)
        .register_split(FlowState::Start, sources, join_config)
        .add_exit_state(FlowState::Complete);

    // 5. Run.
    let result = workflow.orchestrate(FlowState::Start).await?;
    println!("Workflow finished: {:?}", result);

    Ok(())
}
```
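The example above joins with `All`, but the other strategies differ only in when the join counts as a success. The following standalone sketch (an illustration of the semantics, not Cano's implementation — the names merely mirror the ones above) shows how `All`, `Any`, and `Quorum(n)` could decide over a set of parallel results:

```rust
// Generic sketch of join-strategy semantics (not Cano's implementation):
// given the success/failure of each parallel branch, decide whether the
// join as a whole succeeds.
#[derive(Debug)]
enum JoinStrategy {
    All,           // every branch must succeed
    Any,           // at least one branch must succeed
    Quorum(usize), // at least n branches must succeed
}

fn join_succeeds(strategy: &JoinStrategy, results: &[bool]) -> bool {
    let ok = results.iter().filter(|&&r| r).count();
    match strategy {
        JoinStrategy::All => ok == results.len(),
        JoinStrategy::Any => ok >= 1,
        JoinStrategy::Quorum(n) => ok >= *n,
    }
}

fn main() {
    let results = [true, true, false];
    println!("All:       {}", join_succeeds(&JoinStrategy::All, &results)); // false
    println!("Any:       {}", join_succeeds(&JoinStrategy::Any, &results)); // true
    println!("Quorum(2): {}", join_succeeds(&JoinStrategy::Quorum(2), &results)); // true
}
```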

## Documentation

For complete documentation, examples, and guides, please visit our website:

👉 https://nassor.github.io/cano/


## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

## License

Licensed under either of

- Apache License, Version 2.0 ([LICENSE-APACHE](LICENSE-APACHE))
- MIT license ([LICENSE-MIT](LICENSE-MIT))

at your option.

### Contribution

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.
