A persistent Rust AI agent runtime for Tizen and embedded Linux.
TizenClaw turns a device into an always-on agent system with Tizen-aware
integration, multi-surface access, plugin-ready boundaries, and a Telegram
coding workflow that can drive local `codex`, `gemini`,
and `claude` CLIs remotely.
Why TizenClaw • At a Glance • Telegram Coding Over Chat • Install on Ubuntu or WSL • Deploy to a Tizen Target
TizenClaw is not a one-shot assistant wrapper. It is a long-running agent daemon built for devices that need to stay alive, react to platform events, expose stable control surfaces, and survive the messy reality of embedded Linux deployments.
The project is designed around the constraints that matter on Tizen-class systems:
- a persistent runtime instead of a fire-and-forget script
- explicit Tizen and generic-Linux boundaries instead of hidden platform assumptions
- dynamic loading for platform libraries that may differ by image or firmware
- deploy-first validation through the real Tizen packaging path
- host workflows that still reuse the same workspace and runtime model
If you want an agent that feels closer to an embedded control plane than a demo chatbot, this is what TizenClaw is for.
| Area | What TizenClaw Provides |
|---|---|
| Runtime model | A persistent Tokio-based daemon with IPC, scheduling, storage, and background automation |
| Platform focus | Tizen-first behavior with generic Linux fallbacks where device APIs are unavailable |
| Access surfaces | CLI, web dashboard, Telegram, webhook, Slack, Discord, MCP, and other channel layers present in the workspace |
| Coding workflow | Telegram can switch into coding mode and drive local `codex`, `gemini`, or `claude` CLIs on the host |
| Extensibility | Dedicated tool executor, metadata plugins, C-facing library, and dynamic `.so` loading |
| Deployment story | `deploy.sh` for emulator/device packaging and deployment, `deploy_host.sh` for Ubuntu/WSL host runs |
TizenClaw keeps orchestration, concurrency, IPC, and state management in Rust, which makes the system easier to reason about when the process has to stay up for long periods on constrained hardware.
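The long-running shape this implies can be pictured as an event loop that only exits on an explicit shutdown. The sketch below is std-only Rust for illustration; the real daemon is Tokio-based, and the event and channel names here are hypothetical, not TizenClaw's API.

```rust
// Std-only sketch of a persistent event-loop daemon.
// Illustrative only; TizenClaw's real runtime uses Tokio.
use std::sync::mpsc;
use std::thread;

enum Event {
    Ipc(String), // a command arriving over an IPC surface
    Tick,        // a scheduler tick for background automation
    Shutdown,
}

fn run_daemon(rx: mpsc::Receiver<Event>) -> Vec<String> {
    let mut log = Vec::new();
    // The loop stays alive until an explicit shutdown event arrives,
    // which is the core difference from a fire-and-forget script.
    for event in rx {
        match event {
            Event::Ipc(cmd) => log.push(format!("ipc: {cmd}")),
            Event::Tick => log.push("tick".to_string()),
            Event::Shutdown => break,
        }
    }
    log
}

fn main() {
    let (tx, rx) = mpsc::channel();
    let handle = thread::spawn(move || run_daemon(rx));
    tx.send(Event::Ipc("status".into())).unwrap();
    tx.send(Event::Tick).unwrap();
    tx.send(Event::Shutdown).unwrap();
    let log = handle.join().unwrap();
    println!("{log:?}");
}
```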
Tizen-specific integrations live behind dedicated crates and adapters. Generic Linux infrastructure is available in parallel, so the runtime can remain useful on host Linux while still speaking to device-oriented services where they exist.
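One way to picture this split is a platform trait with a Tizen-backed implementation and a generic Linux fallback. The trait, type names, and `battery_level` call below are hypothetical, chosen only to illustrate the adapter boundary.

```rust
// Hypothetical sketch of the Tizen / generic-Linux adapter split.
// Names are illustrative; TizenClaw's real adapter traits differ.

trait PlatformAdapter {
    fn name(&self) -> &'static str;
    // None when the device API is unavailable on this platform.
    fn battery_level(&self) -> Option<u8>;
}

struct TizenAdapter;
impl PlatformAdapter for TizenAdapter {
    fn name(&self) -> &'static str { "tizen" }
    fn battery_level(&self) -> Option<u8> {
        // On a real device this would call into a dynamically
        // loaded platform library; hard-coded for illustration.
        Some(87)
    }
}

struct GenericLinuxAdapter;
impl PlatformAdapter for GenericLinuxAdapter {
    fn name(&self) -> &'static str { "generic-linux" }
    fn battery_level(&self) -> Option<u8> {
        None // no device API on plain host Linux
    }
}

/// Prefer the Tizen adapter when the platform library is present,
/// otherwise fall back to generic Linux infrastructure.
fn select_adapter(tizen_available: bool) -> Box<dyn PlatformAdapter> {
    if tizen_available {
        Box::new(TizenAdapter)
    } else {
        Box::new(GenericLinuxAdapter)
    }
}

fn main() {
    let adapter = select_adapter(false);
    println!("adapter={}", adapter.name());
    println!("battery={:?}", adapter.battery_level());
}
```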
One of the most distinctive pieces of the project is the Telegram coding mode: you can chat with the device over Telegram, switch the chat into coding mode, choose a local coding-agent CLI backend, point that chat at a project directory, and receive progress and result messages back in Telegram while the host executes the request.
The repository includes `libtizenclaw`, `libtizenclaw-core`, and metadata
plugin crates so runtime extensions and C-facing integrations do not have to be
bolted onto the daemon as afterthoughts.
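A C-facing Rust crate typically exports `extern "C"` entry points with a stable ABI. The symbol and semantics below are hypothetical, not `libtizenclaw`'s real interface; the sketch only shows the shape of such an export.

```rust
// Illustrative sketch of a C-facing export in the spirit of
// libtizenclaw. The symbol name and behavior are hypothetical.
use std::os::raw::c_int;

/// C-style status convention: 0 on success, non-zero on error.
#[no_mangle]
pub extern "C" fn tc_ping(value: c_int) -> c_int {
    if value >= 0 { 0 } else { -1 }
}

fn main() {
    // The function is callable from Rust too, which keeps it testable
    // without a C toolchain.
    assert_eq!(tc_ping(7), 0);
    assert_eq!(tc_ping(-1), -1);
    println!("tc_ping ok");
}
```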
TizenClaw can use Telegram as a remote control surface for coding workflows.
This is not just "send a prompt to the daemon" behavior. The Telegram channel
can switch into a host-backed coding mode that runs real coding-agent CLIs.
The backend list is config-driven, so `codex`, `gemini`, `claude`, or
additional host CLIs can be described in `telegram_config.json` without
changing Rust code.
- Switch the chat into coding mode with `/select coding`
- Choose a backend with `/coding_agent codex`, `/coding_agent gemini`, or `/coding_agent claude`
- Bind the chat to a repository with `/project /path/to/repo`
- Choose execution style with `/mode plan` or `/mode fast`
- Toggle auto-approval where supported with `/auto_approve on`
- Inspect the current state with `/status` or start fresh with `/new_session`
- Per-chat backend selection
- Per-chat project directory overrides
- Separate chat and coding sessions
- Progress updates while the CLI is still running
- Chat token usage plus backend-reported CLI token usage
- Host-auth hints when a CLI has not been logged in yet
The built-in defaults cover `codex`, `gemini`, and `claude`, but
`telegram_config.json` can now carry richer backend definitions:
```json
{
  "cli_backends": {
    "default_backend": "codex",
    "backends": {
      "custom_agent": {
        "aliases": ["custom", "agentx"],
        "binary_path": "/home/user/.local/bin/custom-agent",
        "usage_hint": "`custom-agent run --json --cwd <project> --prompt <prompt>`",
        "auth_hint": "Custom Agent CLI must already be authenticated.",
        "invocation": {
          "args": ["run", "--json", "--cwd", "{project_dir}", "--prompt", "{prompt}"]
        },
        "response_extractors": [
          { "source": "stdout", "format": "json", "text_path": "result" }
        ],
        "usage_extractors": [
          {
            "source": "stdout",
            "format": "json",
            "input_tokens_path": "usage.input_tokens",
            "output_tokens_path": "usage.output_tokens",
            "total_tokens_path": "usage.total_tokens"
          }
        ]
      }
    }
  }
}
```

That means the command help shown in Telegram, the CLI invocation shape, and the token usage extraction rules can all be supplied through config.
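A config like this implies a small resolution step on the host: map the name or alias typed in chat to a backend definition, then substitute the `{project_dir}` and `{prompt}` placeholders into the invocation args. The Rust below is an illustrative sketch of that step, not TizenClaw's actual types.

```rust
// Sketch of config-driven backend resolution and arg templating.
// Struct and field names mirror the JSON example but are illustrative.
use std::collections::HashMap;

struct Backend {
    aliases: Vec<&'static str>,
    binary_path: &'static str,
    args: Vec<&'static str>, // may contain {project_dir} / {prompt}
}

/// Resolve a typed name against backend keys, then aliases.
fn resolve<'a>(backends: &'a HashMap<&str, Backend>, name: &str) -> Option<&'a Backend> {
    backends
        .get(name)
        .or_else(|| backends.values().find(|b| b.aliases.iter().any(|a| *a == name)))
}

/// Substitute the placeholders into the configured arg list.
fn render_args(backend: &Backend, project_dir: &str, prompt: &str) -> Vec<String> {
    backend
        .args
        .iter()
        .map(|a| a.replace("{project_dir}", project_dir).replace("{prompt}", prompt))
        .collect()
}

fn main() {
    let mut backends = HashMap::new();
    backends.insert(
        "custom_agent",
        Backend {
            aliases: vec!["custom", "agentx"],
            binary_path: "/home/user/.local/bin/custom-agent",
            args: vec!["run", "--json", "--cwd", "{project_dir}", "--prompt", "{prompt}"],
        },
    );

    let backend = resolve(&backends, "agentx").expect("unknown backend");
    let args = render_args(backend, "/srv/repo", "fix the tests");
    println!("{} {}", backend.binary_path, args.join(" "));
}
```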
| Backend | Example execution shape |
|---|---|
| Codex | `codex exec --json --full-auto -C <project> <prompt>` |
| Gemini | `gemini --model <model> --prompt <prompt> --output-format json --approval-mode auto_edit` |
| Claude | `claude --print --output-format json --permission-mode auto <prompt>` |
This makes TizenClaw useful as a mobile coding bridge: Telegram becomes the control surface, while the actual code work happens through the local CLI tools you already trust on the host.
```text
Telegram / CLI / Dashboard / Channels
                  |
                  v
        +-------------------+
        | TizenClaw Daemon  |
        |   Tokio runtime   |
        |  IPC + scheduling |
        | storage + routing |
        +---------+---------+
                  |
      +-----------+-----------------+
      |           |                 |
      v           v                 v
Tizen adapters  Generic Linux   LLM backends
and dynloaded   infrastructure  and plugins
platform APIs   fallbacks
                  |
                  v
Tool executor / C API / metadata plugins
```

Telegram coding mode can also invoke `codex` / `gemini` / `claude` on the host and stream progress back into chat.
If you want to try TizenClaw on host Linux first, the repository now includes a
GitHub-friendly bootstrap script that downloads a prebuilt host bundle from
GitHub Releases, installs it under ~/.tizenclaw, and launches the setup
wizard.
```bash
curl -fsSL https://raw.githubusercontent.com/hjhun/tizenclaw/main/install.sh | bash
```

Useful variants:

```bash
curl -fsSL https://raw.githubusercontent.com/hjhun/tizenclaw/main/install.sh | bash -s -- --version v1.0.0
curl -fsSL https://raw.githubusercontent.com/hjhun/tizenclaw/main/install.sh | bash -s -- --skip-setup
curl -fsSL https://raw.githubusercontent.com/hjhun/tizenclaw/main/install.sh | bash -s -- --source-install --ref main
```

What the bootstrap does:
- installs the runtime packages needed for host execution
- downloads the matching `tizenclaw-host-bundle-...tar.gz` asset from GitHub Releases
- installs the bundled binaries, web assets, configs, and management script
- starts the host services from the installed bundle
- launches `tizenclaw-cli setup` so you can either configure now or defer setup and jump straight to the dashboard
After installation, the setup wizard can help with:
- choosing an LLM backend and entering its API key
- optional Telegram bot setup for coding mode
- showing the local dashboard URL and the command to rerun setup later
- letting you choose "configure later" so you can open the dashboard first
If you are actively developing TizenClaw and want a full repository checkout, switch the installer into source mode:
```bash
curl -fsSL https://raw.githubusercontent.com/hjhun/tizenclaw/main/install.sh | bash -s -- --source-install --ref main
```

Or run the classic manual flow:
```bash
git clone https://github.com/hjhun/tizenclaw.git
cd tizenclaw
./deploy_host.sh
```

Useful host commands:
```bash
./deploy_host.sh -b
./deploy_host.sh --status
./deploy_host.sh --log
./deploy_host.sh -s
tizenclaw-cli dashboard start
tizenclaw-cli dashboard status
```

The host dashboard defaults to http://localhost:9091, and the setup wizard
prints the active URL again at the end so first-time users can jump in right
away.
For the emulator or device-oriented workflow, use the repository's Tizen deploy pipeline:
```bash
./deploy.sh -a x86_64
```

Useful variants:

```bash
./deploy.sh -a x86_64 -n
./deploy.sh -a x86_64 -d <device-serial>
./deploy.sh -a x86_64 -s
```

This path is the canonical Tizen validation flow. It handles build, packaging, deployment, and service restart on the target.
TizenClaw is a Rust workspace with clearly separated runtime roles:
- `src/tizenclaw`: main daemon
- `src/tizenclaw-cli`: IPC client and operational CLI
- `src/tizenclaw-web-dashboard`: standalone web dashboard
- `src/tizenclaw-tool-executor`: isolated tool-execution sidecar
- `src/libtizenclaw-core`: shared framework and plugin/runtime support
- `src/libtizenclaw`: C-facing client library
- `src/tizenclaw-metadata-*`: metadata plugin crates for skills, CLI, and LLM backend extensions
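A crate layout like this is typically wired together through a Cargo workspace manifest. The fragment below is illustrative only, with member paths taken from the list above; the repository's actual `Cargo.toml` may differ.

```toml
# Illustrative workspace manifest; the real Cargo.toml may differ.
[workspace]
members = [
    "src/tizenclaw",
    "src/tizenclaw-cli",
    "src/tizenclaw-web-dashboard",
    "src/tizenclaw-tool-executor",
    "src/libtizenclaw-core",
    "src/libtizenclaw",
]
resolver = "2"
```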
Additional repository docs:
The project is actively evolving, but the central direction is already clear: TizenClaw aims to be a serious autonomous agent runtime for Tizen and embedded Linux, not just a sample app. Its strengths are persistence, explicit platform boundaries, flexible access surfaces, and unusually practical remote coding control through Telegram plus local coding-agent CLIs.