03.1 Trait Driven Design
Relevant source files
The following files were used as context for generating this wiki page:
- Cargo.lock
- Cargo.toml
- README.md
- src/channels/mod.rs
- src/config/schema.rs
- src/onboard/wizard.rs
- src/providers/anthropic.rs
- src/providers/compatible.rs
- src/providers/gemini.rs
- src/providers/mod.rs
- src/providers/ollama.rs
- src/providers/openai.rs
- src/providers/openrouter.rs
- src/providers/reliable.rs
- src/providers/traits.rs
This document explains ZeroClaw's trait-based architecture, which enables zero-code configuration changes for all major subsystems. Every pluggable component (providers, channels, tools, memory, runtime) implements a trait that defines its contract. Configuration changes select implementations without modifying code.
For specific provider implementations, see Built-in Providers. For channel implementations, see Channel Implementations. For the overall system architecture, see Overview.
ZeroClaw's architecture follows a single principle: every major subsystem is a trait. This means:
- Zero-code swapping - Change provider from OpenAI to Ollama by editing `config.toml`
- Uniform contracts - All providers implement `Provider`, all channels implement `Channel`
- Type-safe composition - The agent core depends on traits, not concrete types
- Runtime polymorphism - `Arc<dyn Provider>` enables dynamic dispatch without generics
Configuration drives everything. The Config struct (src/config/schema.rs:48-144) orchestrates initialization of trait implementations based on TOML settings.
Sources: src/config/schema.rs:48-144, README.md:302-322, src/channels/mod.rs:1-31
```mermaid
graph TB
    subgraph "Configuration Layer"
        ConfigToml["config.toml"]
        ConfigStruct["Config struct"]
    end
    subgraph "Trait Layer"
        ProviderTrait["trait Provider"]
        ChannelTrait["trait Channel"]
        ToolTrait["trait Tool"]
        MemoryTrait["trait Memory"]
        RuntimeTrait["trait RuntimeAdapter"]
        ObserverTrait["trait Observer"]
    end
    subgraph "Implementation Layer"
        OpenAI["OpenAiProvider"]
        Anthropic["AnthropicProvider"]
        OpenRouter["OpenRouterProvider"]
        Ollama["OllamaProvider"]
        Compatible["OpenAiCompatibleProvider"]
        Telegram["TelegramChannel"]
        Discord["DiscordChannel"]
        CLI["CliChannel"]
        SQLite["SqliteMemory"]
        Postgres["PostgresMemory"]
        Native["NativeRuntime"]
        Docker["DockerRuntime"]
    end
    subgraph "Agent Core"
        AgentLoop["run_tool_call_loop"]
        ChannelDispatch["process_channel_message"]
    end
    ConfigToml --> ConfigStruct
    ConfigStruct --> ProviderTrait
    ConfigStruct --> ChannelTrait
    ConfigStruct --> MemoryTrait
    ConfigStruct --> RuntimeTrait
    ProviderTrait --> OpenAI
    ProviderTrait --> Anthropic
    ProviderTrait --> OpenRouter
    ProviderTrait --> Ollama
    ProviderTrait --> Compatible
    ChannelTrait --> Telegram
    ChannelTrait --> Discord
    ChannelTrait --> CLI
    MemoryTrait --> SQLite
    MemoryTrait --> Postgres
    RuntimeTrait --> Native
    RuntimeTrait --> Docker
    ProviderTrait --> AgentLoop
    ChannelTrait --> ChannelDispatch
    MemoryTrait --> AgentLoop
    ToolTrait --> AgentLoop
```
Sources: src/config/schema.rs:48-144, src/providers/traits.rs:1-195, src/channels/traits.rs, src/memory/mod.rs
The Provider trait defines the contract for all LLM providers:
| Method | Purpose | Return Type |
|---|---|---|
| `name()` | Provider identifier | `&str` |
| `chat()` | Send messages, get response | `ChatResponse` |
| `chat_native()` | Native tool calling | `ChatResponse` |
| `stream_chat()` | Streaming responses | `BoxStream<StreamChunk>` |
| `warmup()` | Pre-initialize connections | `Result<()>` |
| `supports_streaming()` | Capability query | `bool` |
| `supports_native_tools()` | Native tool support | `bool` |
Sources: src/providers/traits.rs:195-354
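The table above maps onto a trait definition. The following is a simplified, synchronous sketch of that contract (the real trait in src/providers/traits.rs is async and uses richer request/response types); `EchoProvider` is a hypothetical stub, not a real implementation:

```rust
// Simplified sketch of the Provider contract; the real trait is async
// and returns richer types. Capability queries default to "no".
pub struct ChatResponse {
    pub text: String,
}

pub trait Provider: Send + Sync {
    fn name(&self) -> &str;
    fn chat(&self, messages: &[String]) -> Result<ChatResponse, String>;
    fn supports_streaming(&self) -> bool {
        false // conservative default; implementations opt in
    }
    fn supports_native_tools(&self) -> bool {
        false
    }
}

// Hypothetical stub showing how little is needed to satisfy the contract.
pub struct EchoProvider;

impl Provider for EchoProvider {
    fn name(&self) -> &str {
        "echo"
    }
    fn chat(&self, messages: &[String]) -> Result<ChatResponse, String> {
        // Echo the last message back instead of calling an LLM API.
        Ok(ChatResponse {
            text: messages.last().cloned().unwrap_or_default(),
        })
    }
}
```

Because the agent core only sees the trait, a stub like this is interchangeable with any real provider.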
```mermaid
graph LR
    subgraph "Trait Contract"
        ProviderTrait["trait Provider {<br/> fn name(&self) -> &str<br/> async fn chat(...) -> ChatResponse<br/> async fn chat_native(...) -> ChatResponse<br/>}"]
    end
    subgraph "Factory Function"
        CreateProvider["create_provider(name, key, url)"]
        CreateResilient["create_resilient_provider(...)"]
    end
    subgraph "Implementations"
        OpenAI["OpenAiProvider<br/>src/providers/openai.rs"]
        Anthropic["AnthropicProvider<br/>src/providers/anthropic.rs"]
        OpenRouter["OpenRouterProvider<br/>src/providers/openrouter.rs"]
        Ollama["OllamaProvider<br/>src/providers/ollama.rs"]
        Gemini["GeminiProvider<br/>src/providers/gemini.rs"]
        Compatible["OpenAiCompatibleProvider<br/>28+ providers via one impl"]
    end
    subgraph "Wrapper"
        Reliable["ReliableProvider<br/>retry + fallback + rotation"]
    end
    ProviderTrait -.implements.-> OpenAI
    ProviderTrait -.implements.-> Anthropic
    ProviderTrait -.implements.-> OpenRouter
    ProviderTrait -.implements.-> Ollama
    ProviderTrait -.implements.-> Gemini
    ProviderTrait -.implements.-> Compatible
    CreateProvider --> OpenAI
    CreateProvider --> Anthropic
    CreateProvider --> OpenRouter
    CreateProvider --> Ollama
    CreateProvider --> Gemini
    CreateProvider --> Compatible
    CreateResilient --> Reliable
    Reliable --> CreateProvider
    ProviderTrait -.implements.-> Reliable
```
Sources: src/providers/traits.rs:195-354, src/providers/mod.rs:1-11, src/providers/reliable.rs:183-209
Providers support tool calling in two modes:

Native Tool Calling (`chat_native`):
- Provider API natively understands tool definitions
- Returns structured `ToolCall` objects
- Used by: OpenAI, Anthropic, OpenRouter, Gemini, most compatible providers
- Implementation: src/providers/anthropic.rs:209-226

Prompt-Guided Tool Calling (`chat`):
- Tool definitions injected into system prompt
- Agent parses `<tool_call>` tags from text response
- Used by: Ollama (some models), custom endpoints without tool support
- Fallback when `supports_native_tools()` returns `false`
The agent loop (src/agent/loop_.rs) automatically selects the appropriate mode based on capability queries.
Sources: src/providers/traits.rs:195-354, src/agent/loop_.rs:33-75
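The selection reduces to a branch on the capability query. A minimal sketch, assuming the agent only needs the boolean (the real logic in src/agent/loop_.rs handles streaming and richer state):

```rust
// Hedged sketch of capability-driven mode selection; the real agent
// loop dispatches to different code paths rather than returning a label.
fn select_tool_mode(supports_native_tools: bool) -> &'static str {
    if supports_native_tools {
        "native" // provider returns structured ToolCall objects
    } else {
        "prompt-guided" // agent parses <tool_call> tags from plain text
    }
}
```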
The Channel trait defines the contract for messaging platforms:
| Method | Purpose | Return Type |
|---|---|---|
| `name()` | Channel identifier | `&str` |
| `listen()` | Receive messages | `Result<()>` |
| `send()` | Send message | `Result<()>` |
| `send_draft()` | Send draft message | `Result<Option<String>>` |
| `update_draft()` | Update draft | `Result<()>` |
| `finalize_draft()` | Finalize draft | `Result<()>` |
| `start_typing()` | Show typing indicator | `Result<()>` |
| `stop_typing()` | Hide typing indicator | `Result<()>` |
| `supports_draft_updates()` | Streaming support | `bool` |
Sources: src/channels/traits.rs:8-64
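As with `Provider`, the table maps onto a trait with default methods for optional capabilities. A simplified synchronous sketch (the real trait in src/channels/traits.rs is async and uses richer message types); `CliChannel` here is a toy stand-in:

```rust
// Simplified sketch of the Channel contract. Draft support is opt-in:
// channels without a streaming UI keep the default.
pub trait Channel: Send + Sync {
    fn name(&self) -> &str;
    fn send(&self, text: &str) -> Result<(), String>;
    fn supports_draft_updates(&self) -> bool {
        false
    }
}

pub struct CliChannel;

impl Channel for CliChannel {
    fn name(&self) -> &str {
        "cli"
    }
    fn send(&self, text: &str) -> Result<(), String> {
        // Platform-specific delivery; for the CLI, just print.
        println!("{text}");
        Ok(())
    }
}
```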
```mermaid
sequenceDiagram
    participant Impl as "TelegramChannel<br/>DiscordChannel<br/>etc."
    participant Trait as "trait Channel"
    participant Dispatcher as "spawn_supervised_listener"
    participant Queue as "mpsc::channel<br/>ChannelMessage"
    participant Worker as "process_channel_message"
    participant Agent as "run_tool_call_loop"
    participant Provider as "Arc<dyn Provider>"
    Impl->>Trait: listen(tx: Sender<ChannelMessage>)
    Trait->>Dispatcher: Wrapped in supervised restart
    loop Message polling/websocket
        Impl->>Queue: tx.send(ChannelMessage)
    end
    Queue->>Worker: Dispatch to worker pool
    Worker->>Agent: Forward to agent core
    Agent->>Provider: chat_native(messages, tools)
    Provider-->>Agent: ChatResponse
    Agent-->>Worker: Response text
    Worker->>Trait: send(SendMessage)
    Trait->>Impl: Platform-specific send
```
Sources: src/channels/mod.rs:471-509, src/channels/mod.rs:556-814, src/channels/traits.rs:8-64
Channels are instantiated in `create_all_channels()` (src/channels/mod.rs). Each implementation:

1. Checks configuration - `config.channels_config.telegram.is_some()`
2. Creates instance - `TelegramChannel::new(&telegram_config)?`
3. Wraps in `Arc<dyn Channel>` - Type erasure for uniform handling
4. Registers in map - `channels_by_name.insert("telegram", channel)`
The dispatch loop spawns supervised listeners for each registered channel (src/channels/mod.rs:471-509), which forward messages to a shared worker pool.
Sources: src/channels/mod.rs:1-31, src/channels/mod.rs:471-509
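The registration steps above can be sketched as follows; the trait, config flag, and types are simplified stand-ins for the real ones:

```rust
use std::collections::HashMap;
use std::sync::Arc;

// Simplified stand-ins for the real Channel trait and implementations.
trait Channel: Send + Sync {
    fn name(&self) -> &'static str;
}

struct TelegramChannel;
impl Channel for TelegramChannel {
    fn name(&self) -> &'static str {
        "telegram"
    }
}

// Hedged sketch of create_all_channels(): check config, build the
// instance, erase the type behind Arc<dyn Channel>, register by name.
fn create_all_channels(telegram_enabled: bool) -> HashMap<String, Arc<dyn Channel>> {
    let mut channels: HashMap<String, Arc<dyn Channel>> = HashMap::new();
    if telegram_enabled {
        let ch: Arc<dyn Channel> = Arc::new(TelegramChannel);
        channels.insert(ch.name().to_string(), ch);
    }
    channels
}
```

The map of trait objects is what lets the dispatch loop treat every platform uniformly.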
Tools are simpler than channels and providers: they're synchronous functions behind a trait:

```rust
pub trait Tool: Send + Sync {
    fn name(&self) -> &str;
    fn description(&self) -> &str;
    fn parameters(&self) -> serde_json::Value;
    fn execute(&self, args: &str, context: &ToolContext) -> Result<String>;
}
```

Sources: src/tools/mod.rs
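A hypothetical tool shows how little is needed to satisfy the contract. This sketch drops the `serde_json::Value` parameters and `ToolContext` so it stands alone; `UppercaseTool` is illustrative, not part of the registry:

```rust
// Simplified Tool contract: parameters and context omitted for brevity.
pub trait Tool: Send + Sync {
    fn name(&self) -> &str;
    fn description(&self) -> &str;
    fn execute(&self, args: &str) -> Result<String, String>;
}

// Hypothetical example tool.
pub struct UppercaseTool;

impl Tool for UppercaseTool {
    fn name(&self) -> &str {
        "uppercase"
    }
    fn description(&self) -> &str {
        "Uppercases its input"
    }
    fn execute(&self, args: &str) -> Result<String, String> {
        Ok(args.to_uppercase())
    }
}

// The registry stores boxed trait objects and looks them up by name.
fn find<'a>(tools: &'a [Box<dyn Tool>], name: &str) -> Option<&'a dyn Tool> {
    tools.iter().find(|t| t.name() == name).map(|t| t.as_ref())
}
```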
```mermaid
graph TB
    subgraph "Configuration"
        ConfigToml["[autonomy]<br/>level = supervised<br/>allowed_commands = [...]"]
        RuntimeConfig["[runtime]<br/>kind = docker"]
    end
    subgraph "Tool Registry"
        BuildRegistry["build_tool_registry()"]
        CoreTools["Core Tools<br/>shell, file_read, file_write"]
        MemoryTools["Memory Tools<br/>store, recall, forget"]
        CronTools["Cron Tools<br/>cron_add, cron_list"]
        BrowserTools["Browser Tools<br/>browser_open, screenshot"]
        ComposioTools["Composio Tools<br/>1000+ integrations"]
        HardwareTools["Hardware Tools<br/>gpio_read, gpio_write"]
    end
    subgraph "Execution Context"
        ToolContext["ToolContext {<br/> security_policy<br/> runtime_adapter<br/> memory<br/> workspace_dir<br/>}"]
        SecurityPolicy["SecurityPolicy"]
        RuntimeAdapter["Arc<dyn RuntimeAdapter>"]
    end
    subgraph "Agent Loop"
        ExecuteTool["execute_tool(name, args)"]
    end
    ConfigToml --> BuildRegistry
    RuntimeConfig --> RuntimeAdapter
    BuildRegistry --> CoreTools
    BuildRegistry --> MemoryTools
    BuildRegistry --> CronTools
    BuildRegistry --> BrowserTools
    BuildRegistry --> ComposioTools
    BuildRegistry --> HardwareTools
    CoreTools --> ToolContext
    MemoryTools --> ToolContext
    CronTools --> ToolContext
    ToolContext --> SecurityPolicy
    ToolContext --> RuntimeAdapter
    ExecuteTool --> CoreTools
    ExecuteTool --> ToolContext
```
Sources: src/tools/mod.rs, src/agent/loop_.rs:700-800
Tools execute through a security-gated pipeline:

1. Security check - `security_policy.can_act()` validates tool + args
2. Rate limiting - `security_policy.record_action()` enforces quotas
3. Argument parsing - JSON arguments deserialized to tool-specific types
4. Execution - `tool.execute(args, context)` runs in selected runtime
5. Result formatting - Success/error wrapped in `<tool_result>` tags
For commands (the shell tool), execution goes through `RuntimeAdapter`:

```rust
pub trait RuntimeAdapter: Send + Sync {
    fn execute_command(&self, command: &str, context: &ExecutionContext) -> Result<CommandOutput>;
}
```

The native runtime (src/runtime/native.rs) uses `std::process::Command`. The Docker runtime (src/runtime/docker.rs) wraps execution in `docker run --network=none --read-only`.

Sources: src/tools/mod.rs, src/runtime/mod.rs, src/agent/loop_.rs:700-800
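A minimal sketch of the gate-then-wrap shape of that pipeline (steps 1 and 5), with the policy decision and tool output reduced to plain values; the real code threads `SecurityPolicy` and typed results through instead:

```rust
// Hedged sketch: the security check gates every call, and success or
// error is wrapped uniformly in <tool_result> tags for the model.
fn run_tool(allowed: bool, output: Result<String, String>) -> String {
    if !allowed {
        return "<tool_result>denied by security policy</tool_result>".to_string();
    }
    match output {
        Ok(out) => format!("<tool_result>{out}</tool_result>"),
        Err(e) => format!("<tool_result>error: {e}</tool_result>"),
    }
}
```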
The Memory trait provides vector + keyword hybrid search:
| Method | Purpose | Return Type |
|---|---|---|
| `store()` | Save entry with embedding | `Result<()>` |
| `recall()` | Hybrid search (vector + FTS5) | `Result<Vec<MemoryEntry>>` |
| `forget()` | Delete entry | `Result<()>` |
| `list()` | List all entries | `Result<Vec<MemoryEntry>>` |
| `snapshot()` | Export all data | `Result<MemorySnapshot>` |
| `hydrate()` | Import snapshot | `Result<()>` |
Sources: src/memory/mod.rs
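A toy in-memory backend illustrates the shape of the contract; this sketch is synchronous, skips embeddings, and substitutes substring matching for the real hybrid scoring:

```rust
use std::collections::HashMap;
use std::sync::Mutex;

// Simplified Memory contract covering the core three methods.
pub trait Memory: Send + Sync {
    fn store(&self, key: &str, content: &str) -> Result<(), String>;
    fn recall(&self, query: &str, limit: usize) -> Result<Vec<String>, String>;
    fn forget(&self, key: &str) -> Result<(), String>;
}

// Hypothetical backend: a mutex-guarded map instead of SQLite/Postgres.
pub struct InMemory {
    entries: Mutex<HashMap<String, String>>,
}

impl InMemory {
    pub fn new() -> Self {
        Self { entries: Mutex::new(HashMap::new()) }
    }
}

impl Memory for InMemory {
    fn store(&self, key: &str, content: &str) -> Result<(), String> {
        self.entries.lock().unwrap().insert(key.into(), content.into());
        Ok(())
    }
    fn recall(&self, query: &str, limit: usize) -> Result<Vec<String>, String> {
        // Keyword-only "search"; real backends merge vector + FTS5 scores.
        let entries = self.entries.lock().unwrap();
        Ok(entries.values().filter(|v| v.contains(query)).take(limit).cloned().collect())
    }
    fn forget(&self, key: &str) -> Result<(), String> {
        self.entries.lock().unwrap().remove(key);
        Ok(())
    }
}
```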
```mermaid
graph TB
    subgraph "Config"
        MemConfig["[memory]<br/>backend = sqlite<br/>embedding_provider = none"]
    end
    subgraph "Factory"
        CreateMemory["create_memory(config)"]
    end
    subgraph "Backends"
        SQLite["SqliteMemory<br/>FTS5 + vector BLOB<br/>full-stack search"]
        Postgres["PostgresMemory<br/>pg_trgm + vector<br/>remote backend"]
        Lucid["LucidMemory<br/>Bridge to lucid CLI<br/>subprocess exec"]
        Markdown["MarkdownMemory<br/>File-based persistence<br/>~/.zeroclaw/memory/"]
        NoOp["NoOpMemory<br/>Explicit no-op<br/>no persistence"]
    end
    subgraph "Agent Usage"
        StoreCall["memory.store(key, content)"]
        RecallCall["memory.recall(query, limit)"]
    end
    MemConfig --> CreateMemory
    CreateMemory --> SQLite
    CreateMemory --> Postgres
    CreateMemory --> Lucid
    CreateMemory --> Markdown
    CreateMemory --> NoOp
    SQLite --> StoreCall
    SQLite --> RecallCall
    Postgres --> StoreCall
    Postgres --> RecallCall
    Lucid --> StoreCall
    Lucid --> RecallCall
```
Sources: src/memory/mod.rs:8-30, src/memory/sqlite.rs, src/memory/lucid.rs
The SQLite backend (src/memory/sqlite.rs) implements hybrid search:

- Vector search - Embeddings stored as BLOBs, cosine similarity computed in Rust
- Keyword search - FTS5 virtual table with BM25 scoring
- Merge - Weighted combination in src/memory/vector.rs:weighted_merge()
- Cache - Embedding cache table with LRU eviction

Configuration controls the mix:

```toml
[memory]
vector_weight = 0.7
keyword_weight = 0.3
```

Sources: src/memory/sqlite.rs, src/memory/vector.rs, README.md:330-378
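The merge step can be sketched as a weighted sum of the two scores; the exact normalization in src/memory/vector.rs may differ:

```rust
// Hedged sketch of a weighted score merge: with vector_weight = 0.7 and
// keyword_weight = 0.3, a perfect vector match contributes 0.7 of the
// final score. Real scores may be normalized before merging.
fn weighted_merge(vector_score: f64, keyword_score: f64,
                  vector_weight: f64, keyword_weight: f64) -> f64 {
    vector_score * vector_weight + keyword_score * keyword_weight
}
```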
Runtime adapters isolate tool execution:
```rust
pub trait RuntimeAdapter: Send + Sync {
    fn execute_command(&self, command: &str, context: &ExecutionContext) -> Result<CommandOutput>;
}
```

Sources: src/runtime/mod.rs
```mermaid
graph LR
    subgraph "Config"
        RuntimeConfig["[runtime]<br/>kind = docker"]
        DockerConfig["[runtime.docker]<br/>image = alpine:3.20<br/>network = none<br/>read_only_rootfs = true"]
    end
    subgraph "Factory"
        CreateRuntime["create_runtime_adapter(config)"]
    end
    subgraph "Adapters"
        Native["NativeRuntime<br/>std::process::Command<br/>Direct subprocess"]
        Docker["DockerRuntime<br/>docker run<br/>Sandboxed container"]
    end
    subgraph "Shell Tool"
        ShellExec["ShellTool::execute()"]
        RuntimeCall["runtime.execute_command(cmd)"]
    end
    RuntimeConfig --> CreateRuntime
    DockerConfig --> Docker
    CreateRuntime --> Native
    CreateRuntime --> Docker
    ShellExec --> RuntimeCall
    RuntimeCall --> Native
    RuntimeCall --> Docker
```
Sources: src/runtime/mod.rs, src/runtime/native.rs, src/runtime/docker.rs
The Docker runtime wraps execution in a container:

```sh
docker run \
  --rm \
  --network=none \
  --read-only \
  --memory=512m \
  --cpus=1.0 \
  -v "$workspace:/workspace:ro" \
  alpine:3.20 \
  sh -c "$command"
```

Configuration: src/config/schema.rs:541-547

Sources: src/runtime/docker.rs, src/config/schema.rs:541-547
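A sketch of how a runtime adapter might assemble that invocation with `std::process::Command`; the flag values are hard-coded here, whereas the real DockerRuntime reads image, network, and resource limits from config:

```rust
use std::process::Command;

// Hedged sketch: build (but do not spawn) the sandboxed docker
// invocation shown above.
fn docker_command(workspace: &str, image: &str, cmd: &str) -> Command {
    let mut c = Command::new("docker");
    c.args(["run", "--rm", "--network=none", "--read-only"])
        .args(["--memory=512m", "--cpus=1.0"])
        .arg("-v")
        .arg(format!("{workspace}:/workspace:ro")) // read-only workspace mount
        .arg(image)
        .args(["sh", "-c", cmd]);
    c
}
```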
The agent loop (src/agent/loop_.rs:33-255) composes all traits:
```mermaid
sequenceDiagram
    participant Channel as "Arc<dyn Channel>"
    participant Dispatcher as "process_channel_message"
    participant Memory as "Arc<dyn Memory>"
    participant Provider as "Arc<dyn Provider>"
    participant Tools as "Vec<Box<dyn Tool>>"
    participant Security as "SecurityPolicy"
    participant Runtime as "Arc<dyn RuntimeAdapter>"
    Channel->>Dispatcher: ChannelMessage
    Dispatcher->>Memory: recall(query, limit)
    Memory-->>Dispatcher: context entries
    Dispatcher->>Provider: chat_native(history + context)
    Provider-->>Dispatcher: ChatResponse with tool_calls
    loop For each tool_call
        Dispatcher->>Security: can_act(tool, args)?
        alt Approved
            Dispatcher->>Tools: find(tool.name)
            Tools->>Runtime: execute_command(...)
            Runtime-->>Tools: CommandOutput
            Tools-->>Dispatcher: Result
        else Denied
            Dispatcher->>Dispatcher: denial message
        end
    end
    Dispatcher->>Memory: store(key, conversation)
    Dispatcher->>Channel: send(response)
```
Sources: src/agent/loop_.rs:33-255, src/channels/mod.rs:556-814
All traits receive context objects, not concrete types.

`ToolContext` (src/tools/mod.rs):

```rust
pub struct ToolContext {
    pub security_policy: Arc<SecurityPolicy>,
    pub runtime_adapter: Arc<dyn RuntimeAdapter>,
    pub memory: Arc<dyn Memory>,
    pub workspace_dir: PathBuf,
}
```

`ExecutionContext` (src/runtime/mod.rs):

```rust
pub struct ExecutionContext {
    pub security_policy: Arc<SecurityPolicy>,
    pub workspace_dir: PathBuf,
    pub env_vars: HashMap<String, String>,
}
```

This enables tools to call other subsystems without knowing their implementations.

Sources: src/tools/mod.rs, src/runtime/mod.rs, src/agent/loop_.rs
The Config struct (src/config/schema.rs:48-144) drives all trait selection:
| Config Section | Trait | Factory Function |
|---|---|---|
| `default_provider` | `Provider` | `create_resilient_provider()` |
| `channels_config.telegram` | `Channel` | `TelegramChannel::new()` |
| `channels_config.discord` | `Channel` | `DiscordChannel::new()` |
| `memory.backend` | `Memory` | `create_memory()` |
| `runtime.kind` | `RuntimeAdapter` | `create_runtime_adapter()` |
| `observability.enabled` | `Observer` | `create_observer()` |
Sources: src/config/schema.rs:48-144, src/providers/mod.rs:520-670, src/memory/mod.rs
```mermaid
graph TB
    subgraph "Config Loading"
        LoadConfig["Config::load_or_init()"]
        ParseToml["Parse config.toml"]
        ApplyEnv["Apply env vars"]
    end
    subgraph "Trait Initialization"
        InitProvider["create_resilient_provider<br/>(name, key, url, reliability)"]
        InitMemory["create_memory<br/>(backend, config)"]
        InitRuntime["create_runtime_adapter<br/>(kind, docker_config)"]
        InitChannels["create_all_channels<br/>(channels_config)"]
        InitTools["build_tool_registry<br/>(config, context)"]
    end
    subgraph "Runtime Objects"
        ProviderArc["Arc<dyn Provider>"]
        MemoryArc["Arc<dyn Memory>"]
        RuntimeArc["Arc<dyn RuntimeAdapter>"]
        ChannelsMap["HashMap<String, Arc<dyn Channel>>"]
        ToolsVec["Vec<Box<dyn Tool>>"]
    end
    subgraph "Agent Core"
        AgentContext["Agent turn cycle<br/>composes all traits"]
    end
    LoadConfig --> ParseToml
    ParseToml --> ApplyEnv
    ApplyEnv --> InitProvider
    ApplyEnv --> InitMemory
    ApplyEnv --> InitRuntime
    ApplyEnv --> InitChannels
    ApplyEnv --> InitTools
    InitProvider --> ProviderArc
    InitMemory --> MemoryArc
    InitRuntime --> RuntimeArc
    InitChannels --> ChannelsMap
    InitTools --> ToolsVec
    ProviderArc --> AgentContext
    MemoryArc --> AgentContext
    RuntimeArc --> AgentContext
    ChannelsMap --> AgentContext
    ToolsVec --> AgentContext
```
Sources: src/config/mod.rs, src/main.rs, src/channels/mod.rs
Change from OpenRouter to Ollama by editing config.toml.

Before:

```toml
default_provider = "openrouter"
default_model = "anthropic/claude-sonnet-4.6"
api_key = "sk-or-..."
```

After:

```toml
default_provider = "ollama"
default_model = "llama3.2"
# No API key needed for local Ollama
```

The `create_resilient_provider()` function (src/providers/mod.rs:520-670) reads the `default_provider` field and dispatches to the correct implementation:

```rust
match name {
    "openai" => Box::new(OpenAiProvider::new(api_key)),
    "anthropic" => Box::new(AnthropicProvider::new(api_key)),
    "openrouter" => Box::new(OpenRouterProvider::new(api_key)),
    "ollama" => Box::new(OllamaProvider::new(api_url, api_key)),
    "gemini" => Box::new(GeminiProvider::new(api_key)),
    // ... 20+ more providers
}
```

Sources: src/providers/mod.rs:520-670, README.md:302-322
ReliableProvider (src/providers/reliable.rs:183-209) wraps any provider with:
- Retry logic - Exponential backoff with configurable max attempts
- API key rotation - Round-robin through multiple keys on rate limits
- Model fallback - Chain of fallback models per primary model
- Error classification - Distinguish retryable vs non-retryable errors
Wrapping flow:

```rust
let provider = create_provider(name, api_key)?;
let reliable = ReliableProvider::new(vec![(name.to_string(), provider)], max_retries, backoff_ms)
    .with_api_keys(additional_keys)
    .with_model_fallbacks(fallback_map);
```

The agent always uses `Arc<dyn Provider>` pointing to a `ReliableProvider`, which internally holds one or more concrete providers.

Sources: src/providers/reliable.rs:183-209, src/providers/mod.rs:290-308
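The retry schedule can be sketched as the usual base-times-power-of-two formula; saturating arithmetic is an assumption here, and the real ReliableProvider additionally classifies errors and rotates API keys between attempts:

```rust
// Hedged sketch of exponential backoff: base * 2^attempt, saturating
// so long retry chains cannot overflow u64.
fn backoff_ms(base_ms: u64, attempt: u32) -> u64 {
    base_ms.saturating_mul(2u64.saturating_pow(attempt))
}
```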
```mermaid
graph TB
    Start["Request"]
    Attempt["Attempt {n}"]
    Success["Success"]
    RateLimit["Rate limit<br/>HTTP 429?"]
    NonRetry["Non-retryable<br/>4xx error?"]
    Retry["Retryable<br/>5xx / timeout?"]
    RotateKey["Rotate API key<br/>key_index++"]
    BackoffWait["Exponential backoff<br/>base * 2^attempt"]
    FallbackModel["Model fallback<br/>chain[n+1]"]
    MaxRetries{"Max retries?"}
    HasFallback{"Has fallback?"}
    Fail["Fail with error"]
    Start --> Attempt
    Attempt --> Success
    Attempt --> RateLimit
    Attempt --> NonRetry
    Attempt --> Retry
    RateLimit --> RotateKey
    RotateKey --> BackoffWait
    BackoffWait --> MaxRetries
    Retry --> BackoffWait
    NonRetry --> HasFallback
    HasFallback -->|Yes| FallbackModel
    HasFallback -->|No| Fail
    FallbackModel --> Attempt
    MaxRetries -->|No| Attempt
    MaxRetries -->|Yes| Fail
```
Sources: src/providers/reliable.rs:8-159, src/providers/reliable.rs:242-380
Swap providers, channels, and memory backends without touching code:

```toml
# Switch from cloud to local LLM
default_provider = "ollama"   # was: "openrouter"

# Switch from SQLite to PostgreSQL
[memory]
backend = "postgres"          # was: "sqlite"

# Switch from native to Docker sandbox
[runtime]
kind = "docker"               # was: "native"
```

Sources: README.md:302-322, src/config/schema.rs:48-144
The agent core depends on traits, not implementations:

```rust
pub async fn run_tool_call_loop(
    provider: &dyn Provider,  // Any provider
    tools: &[Box<dyn Tool>],  // Any tools
    memory: &dyn Memory,      // Any memory backend
    // ...
) -> Result<String>
```

This prevents coupling and enables testing with mock implementations.

Sources: src/agent/loop_.rs:33-75
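Mock-based testing falls directly out of the trait-object signature; `MockProvider` and `run_once` below are hypothetical illustrations, not code from the repository:

```rust
// Simplified Provider for illustration.
trait Provider {
    fn chat(&self, prompt: &str) -> String;
}

// Hypothetical mock: returns a canned response, no network involved.
struct MockProvider {
    canned: String,
}

impl Provider for MockProvider {
    fn chat(&self, _prompt: &str) -> String {
        self.canned.clone()
    }
}

// Code under test only sees &dyn Provider, so a mock slots in freely.
fn run_once(provider: &dyn Provider, prompt: &str) -> String {
    provider.chat(prompt)
}
```

A unit test can drive `run_once` with the mock and assert on behavior without any API key or network access.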
Add new providers by implementing `Provider`:

```rust
pub struct MyCustomProvider { /* ... */ }

#[async_trait]
impl Provider for MyCustomProvider {
    fn name(&self) -> &str { "my-provider" }

    async fn chat(&self, req: ChatRequest<'_>) -> Result<ChatResponse> {
        // Your implementation
    }

    // ... other required methods
}
```

Register it in the `create_provider()` match statement (src/providers/mod.rs:520-670).

Sources: src/providers/traits.rs:195-354, src/providers/mod.rs:520-670
All traits return `anyhow::Result`, enabling:

- Propagation with the `?` operator
- Context with `.context()`
- Unified error logging

The resilience wrapper (src/providers/reliable.rs) handles retries transparently.
Sources: src/providers/reliable.rs:8-159, src/providers/mod.rs:441-450
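The propagation pattern looks like this, approximated with std types since the sketch avoids the `anyhow` dependency (`.context()` is modeled with `map_err`; the port-parsing scenario is invented for illustration):

```rust
// Hedged sketch of `?`-based error propagation with added context.
fn parse_port(raw: &str) -> Result<u16, String> {
    raw.trim()
        .parse::<u16>()
        // anyhow's .context() analogue: attach what we were doing.
        .map_err(|e| format!("invalid port {raw:?}: {e}"))
}

fn load_port(line: &str) -> Result<u16, String> {
    let port = parse_port(line)?; // `?` propagates the contextualized error
    Ok(port)
}
```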
Using Arc<dyn Trait> adds minimal overhead (~5-10ns per virtual call on modern CPUs). This is negligible compared to:
- Network I/O: 10-500ms per LLM request
- Database queries: 1-50ms per recall
- Process spawning: 5-20ms per shell command
The flexibility gained far outweighs the virtual call cost.
Sources: src/channels/mod.rs:103-123, src/agent/loop_.rs:33-75
ZeroClaw's trait-driven architecture enables:
| Trait | Implementations | Config Key |
|---|---|---|
| `Provider` | 28+ (OpenAI, Anthropic, Ollama, etc.) | `default_provider` |
| `Channel` | 13+ (Telegram, Discord, CLI, etc.) | `channels_config.*` |
| `Tool` | 70+ (shell, file, memory, browser, etc.) | `autonomy.*`, `browser.*`, etc. |
| `Memory` | 5 (SQLite, Postgres, Lucid, Markdown, None) | `memory.backend` |
| `RuntimeAdapter` | 2 (Native, Docker) | `runtime.kind` |
| `Observer` | 3 (Noop, Log, Multi) | `observability.enabled` |
Every subsystem is swappable via config.toml changes. The agent core composes traits without knowing concrete types. This architecture supports rapid experimentation, deployment flexibility, and zero-downtime provider switches.
Sources: README.md:302-322, src/config/schema.rs:48-144, src/providers/mod.rs:1-670, src/channels/mod.rs:1-814, src/memory/mod.rs, src/runtime/mod.rs