An embedded Raft consensus library backed by sled. Skiff lets you add replicated, persistent key-value storage directly to your Rust application without running a separate consensus service.
- Leader election and re-election
- Log replication with majority-commit semantics
- Hard state persistence (survives restarts)
- Dynamic cluster membership (`add_server`/`remove_server`)
- Follower request forwarding (connect to any node)
- Change subscriptions (`watch` a prefix for streaming updates)
Not yet implemented: log compaction / snapshotting.
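Majority commit follows the standard Raft rule: the leader advances its commit index to the highest log index replicated on a strict majority of nodes. A dependency-free sketch of that calculation (illustrative only — not Skiff's internals):

```rust
/// Given each node's highest replicated log index (including the leader's own),
/// return the highest index present on a strict majority of nodes.
/// Illustrative sketch of Raft's commit rule; not Skiff's actual code.
fn quorum_commit_index(mut match_indexes: Vec<u64>) -> u64 {
    // Sort descending: the value at position n/2 is replicated on
    // at least floor(n/2) + 1 nodes, i.e. a majority.
    match_indexes.sort_unstable_by(|a, b| b.cmp(a));
    match_indexes[match_indexes.len() / 2]
}

fn main() {
    // 5-node cluster: index 7 is on 3 of 5 nodes, so it is committed.
    assert_eq!(quorum_commit_index(vec![9, 7, 7, 4, 3]), 7);
    // 3-node cluster: index 5 is on 2 of 3 nodes.
    assert_eq!(quorum_commit_index(vec![5, 5, 1]), 5);
    println!("ok");
}
```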
```toml
[dependencies]
skiff-rs = "0.1"
tokio = { version = "1", features = ["full"] }
```

Note: skiff-rs requires `protoc` to be installed at build time. On Debian/Ubuntu: `sudo apt install protobuf-compiler`. On macOS: `brew install protobuf`.
```rust
use skiff_rs::{Builder, Client};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Build and start a single-node cluster.
    let node = Builder::new()
        .set_dir("/tmp/my-skiff-node")
        .bind("127.0.0.1".parse()?)
        .build()?;

    let node_ref = node.clone();
    tokio::spawn(async move { node_ref.start().await });

    // Block until a leader is elected before connecting a client.
    node.wait_for_leader(std::time::Duration::from_secs(2)).await?;

    // Connect a client and perform some operations.
    let mut client = Client::new(vec!["127.0.0.1".parse()?]);
    client.connect().await?;

    client.insert("greeting", "hello world").await?;
    let value: Option<String> = client.get("greeting").await?;
    println!("{:?}", value); // Some("hello world")

    Ok(())
}
```

Pass existing node addresses to `join_cluster` when constructing additional nodes. The new node contacts a peer and registers itself via the Raft `add_server` RPC.
```rust
use skiff_rs::Builder;
use std::time::Duration;

// Node 1 — bootstraps a new single-node cluster.
let node1 = Builder::new()
    .set_dir("/tmp/node1")
    .bind("127.0.0.1".parse()?)
    .build()?;

// Node 2 — joins the cluster through node 1.
let node2 = Builder::new()
    .set_dir("/tmp/node2")
    .bind("127.0.0.2".parse()?)
    .join_cluster(vec!["127.0.0.1".parse()?])
    .build()?;

let node1_ref = node1.clone();
tokio::spawn(async move { node1_ref.start().await });

// Wait for node1 to elect itself leader before node2 tries to join.
node1.wait_for_leader(Duration::from_secs(2)).await?;

tokio::spawn(async move { node2.start().await });
```

Keys use `/` as a path separator, providing a simple namespace hierarchy:
```rust
client.insert("users/alice", alice_data).await?;
client.insert("users/bob", bob_data).await?;

// List all keys under "users/"
let keys = client.list_keys("users/").await?;

// Get all top-level prefixes
let prefixes = client.get_prefixes().await?; // ["users"]
```

Subscribe to changes under a prefix with `watch`:

```rust
let mut sub = client.watch("users/").await?;
loop {
    let (key, value): (String, MyType) = sub.recv().await?;
    println!("updated: {} = {:?}", key, value);
}
```

Call `shutdown()` before dropping a node so that background tasks wind down and the sled database lock is released cleanly:
```rust
node.shutdown();
drop(node);
```

Each Skiff node runs two background tasks:

- Cluster-join task — on startup, contacts a peer and calls `add_server` to register the node if the cluster has more than one known member.
- Election manager — drives the Raft state machine: sends heartbeats as leader (every 75 ms) or fires a randomised election timeout as follower (150–300 ms).
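The randomised timeout is the standard Raft defence against split votes: each follower draws a fresh value from the range on every reset, so two followers rarely time out together. A dependency-free sketch of drawing a timeout from Skiff's stated 150–300 ms window (the clock-seeded mixing step is illustrative; a real implementation would use a proper RNG):

```rust
use std::time::{Duration, SystemTime, UNIX_EPOCH};

/// Pick a pseudo-random election timeout in [150 ms, 300 ms).
/// Sketch only: seeds a single LCG step from the clock to stay dependency-free.
fn election_timeout() -> Duration {
    let seed = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .unwrap()
        .subsec_nanos() as u64;
    // One linear-congruential step to spread the seed across the range.
    let r = seed
        .wrapping_mul(6364136223846793005)
        .wrapping_add(1442695040888963407);
    Duration::from_millis(150 + r % 150)
}

fn main() {
    let t = election_timeout();
    assert!(t >= Duration::from_millis(150) && t < Duration::from_millis(300));
    println!("timeout: {:?}", t);
}
```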
All inter-node communication uses gRPC (via tonic). Persistent state is stored in sled named trees:
| Tree | Contents |
|---|---|
| `__raft_meta` | current term, `voted_for`, `last_applied`, node ID |
| `__raft_log` | Raft log entries (keyed by big-endian u32 index) |
| `base` | root-level key-value pairs |
| `base_<prefix>` | key-value pairs under a namespace prefix |
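The big-endian encoding in `__raft_log` matters because sled orders keys as raw bytes: big-endian u32 keys sort the same way as the integers themselves, so scanning the tree yields log entries in index order. A small standalone illustration of that property (plain byte-sorting, no sled required):

```rust
fn main() {
    // Big-endian keys: byte order matches numeric order,
    // so a sorted scan returns log indices 1, 2, 256, 257 in order.
    let mut keys: Vec<[u8; 4]> = [1u32, 256, 2, 257]
        .iter()
        .map(|i| i.to_be_bytes())
        .collect();
    keys.sort();
    let decoded: Vec<u32> = keys.iter().map(|b| u32::from_be_bytes(*b)).collect();
    assert_eq!(decoded, vec![1, 2, 256, 257]);

    // Contrast: little-endian breaks numeric ordering —
    // 256 = [0,1,0,0] sorts before 1 = [1,0,0,0].
    let mut le: Vec<[u8; 4]> = [1u32, 256].iter().map(|i| i.to_le_bytes()).collect();
    le.sort();
    assert_eq!(u32::from_le_bytes(le[0]), 256);
    println!("ok");
}
```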
v0.1 — suitable for experimentation and projects that can tolerate the missing features noted above. The on-disk format is not yet considered stable.