
TizenClaw

A persistent Rust AI agent runtime for Tizen and embedded Linux.
TizenClaw turns a device into an always-on agent system with Tizen-aware integration, multi-surface access, plugin-ready boundaries, and a Telegram coding workflow that can drive local codex, gemini, and claude CLIs remotely.


Contents:

  • Why TizenClaw
  • At a Glance
  • Telegram Coding Over Chat
  • Install on Ubuntu or WSL
  • Deploy to a Tizen Target


Why TizenClaw

TizenClaw is not a one-shot assistant wrapper. It is a long-running agent daemon built for devices that need to stay alive, react to platform events, expose stable control surfaces, and survive the messy reality of embedded Linux deployments.

The project is designed around the constraints that matter on Tizen-class systems:

  • a persistent runtime instead of a fire-and-forget script
  • explicit Tizen and generic-Linux boundaries instead of hidden platform assumptions
  • dynamic loading for platform libraries that may differ by image or firmware
  • deploy-first validation through the real Tizen packaging path
  • host workflows that still reuse the same workspace and runtime model

If you want an agent that feels closer to an embedded control plane than a demo chatbot, this is what TizenClaw is for.

At a Glance

  • Runtime model: a persistent Tokio-based daemon with IPC, scheduling, storage, and background automation
  • Platform focus: Tizen-first behavior with generic Linux fallbacks where device APIs are unavailable
  • Access surfaces: CLI, web dashboard, Telegram, webhook, Slack, Discord, MCP, and other channel layers present in the workspace
  • Coding workflow: Telegram can switch into coding mode and drive local codex, gemini, or claude CLIs on the host
  • Extensibility: a dedicated tool executor, metadata plugins, a C-facing library, and dynamic .so loading
  • Deployment story: deploy.sh for emulator/device packaging and deployment, deploy_host.sh for Ubuntu/WSL host runs

What Makes It Strong

Built for real device runtimes

TizenClaw keeps orchestration, concurrency, IPC, and state management in Rust, which makes the system easier to reason about when the process has to stay up for long periods on constrained hardware.

Tizen-aware without hard-wiring the whole system to Tizen

Tizen-specific integrations live behind dedicated crates and adapters. Generic Linux infrastructure is available in parallel, so the runtime can remain useful on host Linux while still speaking to device-oriented services where they exist.

Remote coding from Telegram

One of the most distinctive pieces of the project is the Telegram coding mode. You chat with the device over Telegram, switch the chat into coding mode, choose a local coding-agent CLI backend, and point that chat at a project directory; the host then executes the request while progress and result messages stream back into Telegram.

Clean boundaries for plugins and external consumers

The repository includes libtizenclaw, libtizenclaw-core, and metadata plugin crates so runtime extensions and C-facing integrations do not have to be bolted onto the daemon as afterthoughts.

Telegram Coding Over Chat

TizenClaw can use Telegram as a remote control surface for coding workflows. This is not just "send a prompt to the daemon" behavior. The Telegram channel can switch into a host-backed coding mode that runs real coding-agent CLIs. The backend list is config-driven, so codex, gemini, claude, or additional host CLIs can be described in telegram_config.json without changing Rust code.

Supported flow

  1. Switch the chat into coding mode with /select coding
  2. Choose a backend with /coding_agent codex, /coding_agent gemini, or /coding_agent claude
  3. Bind the chat to a repository with /project /path/to/repo
  4. Choose execution style with /mode plan or /mode fast
  5. Toggle auto-approval where supported with /auto_approve on
  6. Inspect the current state with /status or start fresh with /new_session

What you get

  • Per-chat backend selection
  • Per-chat project directory overrides
  • Separate chat and coding sessions
  • Progress updates while the CLI is still running
  • Chat token usage plus backend-reported CLI token usage
  • Host-auth hints when a CLI has not been logged in yet

Backend configuration

The built-in defaults cover codex, gemini, and claude, but telegram_config.json can now carry richer backend definitions:

{
  "cli_backends": {
    "default_backend": "codex",
    "backends": {
      "custom_agent": {
        "aliases": ["custom", "agentx"],
        "binary_path": "/home/user/.local/bin/custom-agent",
        "usage_hint": "`custom-agent run --json --cwd <project> --prompt <prompt>`",
        "auth_hint": "Custom Agent CLI must already be authenticated.",
        "invocation": {
          "args": ["run", "--json", "--cwd", "{project_dir}", "--prompt", "{prompt}"]
        },
        "response_extractors": [
          { "source": "stdout", "format": "json", "text_path": "result" }
        ],
        "usage_extractors": [
          {
            "source": "stdout",
            "format": "json",
            "input_tokens_path": "usage.input_tokens",
            "output_tokens_path": "usage.output_tokens",
            "total_tokens_path": "usage.total_tokens"
          }
        ]
      }
    }
  }
}

That means the command help shown in Telegram, the CLI invocation shape, and the token usage extraction rules can all be supplied through config.
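
Before restarting the daemon, a backend snippet like the one above can be sanity-checked locally. This is an illustrative sketch, not a TizenClaw command; it only assumes python3 and grep on the host, and the key names are taken from the example above:

```shell
# Write a minimal cli_backends snippet (key names from the README example).
cat > /tmp/telegram_config_check.json <<'EOF'
{
  "cli_backends": {
    "default_backend": "codex",
    "backends": {}
  }
}
EOF

# Reject malformed JSON early; python3 ships on most host images.
python3 -m json.tool /tmp/telegram_config_check.json > /dev/null && echo "json: valid"

# Pull out the configured default backend without extra tooling.
default_backend=$(grep -o '"default_backend": *"[^"]*"' /tmp/telegram_config_check.json)
echo "$default_backend"
```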

Example execution shapes per backend:

  • Codex: codex exec --json --full-auto -C <project> <prompt>
  • Gemini: gemini --model <model> --prompt <prompt> --output-format json --approval-mode auto_edit
  • Claude: claude --print --output-format json --permission-mode auto <prompt>
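
The invocation args in the config use {project_dir} and {prompt} placeholders. As a hedged sketch of the substitution they imply (the daemon's actual expansion logic may differ), plain bash parameter expansion reproduces the shape:

```shell
# Hypothetical inputs; in TizenClaw these come from /project and the chat message.
project_dir="/home/user/repo"
prompt="fix the failing test"

# Template mirrors the custom_agent "args" example from the config above.
template='run --json --cwd {project_dir} --prompt {prompt}'
args=${template//\{project_dir\}/$project_dir}
args=${args//\{prompt\}/$prompt}

# prints: run --json --cwd /home/user/repo --prompt fix the failing test
echo "$args"
```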

This makes TizenClaw useful as a mobile coding bridge: Telegram becomes the control surface, while the actual code work happens through the local CLI tools you already trust on the host.

Architecture Snapshot

Telegram / CLI / Dashboard / Channels
                |
                v
        +-------------------+
        | TizenClaw Daemon  |
        | Tokio runtime     |
        | IPC + scheduling  |
        | storage + routing |
        +---------+---------+
                  |
      +-----------+--------------------+
      |           |                    |
      v           v                    v
  Tizen adapters  Generic Linux        LLM backends
  and dynloaded   infrastructure        and plugins
  platform APIs   fallbacks
      |
      +-------------------------------+
                                      |
                                      v
                         Tool executor / C API / metadata plugins

Telegram coding mode can also invoke:
  codex / gemini / claude
on the host and stream progress back into chat.

Install on Ubuntu or WSL

If you want to try TizenClaw on host Linux first, the repository now includes a GitHub-friendly bootstrap script that downloads a prebuilt host bundle from GitHub Releases, installs it under ~/.tizenclaw, and launches the setup wizard.

One-line bootstrap

curl -fsSL https://raw.githubusercontent.com/hjhun/tizenclaw/main/install.sh | bash

Useful variants:

curl -fsSL https://raw.githubusercontent.com/hjhun/tizenclaw/main/install.sh | bash -s -- --version v1.0.0
curl -fsSL https://raw.githubusercontent.com/hjhun/tizenclaw/main/install.sh | bash -s -- --skip-setup
curl -fsSL https://raw.githubusercontent.com/hjhun/tizenclaw/main/install.sh | bash -s -- --source-install --ref main

What the bootstrap does:

  • installs the runtime packages needed for host execution
  • downloads the matching tizenclaw-host-bundle-...tar.gz asset from GitHub Releases
  • installs the bundled binaries, web assets, configs, and management script
  • starts the host services from the installed bundle
  • launches tizenclaw-cli setup so you can either configure now or defer setup and jump straight to the dashboard
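
Once the bootstrap finishes, a minimal sanity check might look like the following sketch; the install directory and binary name are taken from the steps above:

```shell
# Check the install directory the bootstrap is described as using.
if [ -d "$HOME/.tizenclaw" ]; then
  bundle_status="present"
else
  bundle_status="missing"
fi
echo "bundle dir: $bundle_status"

# Check that the operational CLI landed on PATH.
if command -v tizenclaw-cli > /dev/null 2>&1; then
  cli_status="on PATH"
else
  cli_status="not found"
fi
echo "tizenclaw-cli: $cli_status"
```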

After installation, the setup wizard can help with:

  • choosing an LLM backend and entering its API key
  • optional Telegram bot setup for coding mode
  • showing the local dashboard URL and the command to rerun setup later
  • letting you choose "configure later" so you can open the dashboard first

Source Install for Contributors

If you are actively developing TizenClaw and want a full repository checkout, switch the installer into source mode:

curl -fsSL https://raw.githubusercontent.com/hjhun/tizenclaw/main/install.sh | bash -s -- --source-install --ref main

Or run the classic manual flow:

git clone https://github.com/hjhun/tizenclaw.git
cd tizenclaw
./deploy_host.sh

Useful host commands:

./deploy_host.sh -b
./deploy_host.sh --status
./deploy_host.sh --log
./deploy_host.sh -s
tizenclaw-cli dashboard start
tizenclaw-cli dashboard status

The host dashboard defaults to http://localhost:9091, and the setup wizard prints the active URL again at the end so first-time users can jump in right away.
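
To confirm the dashboard is actually listening, a quick probe of the port works from any bash shell. This is a sketch that assumes only the 9091 default mentioned above:

```shell
# Use bash's built-in /dev/tcp so no curl or netcat is required.
if (exec 3<> /dev/tcp/localhost/9091) 2> /dev/null; then
  dashboard_status="listening"
else
  dashboard_status="not reachable"
fi
echo "dashboard on 9091: $dashboard_status"
```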

Deploy to a Tizen Target

For the emulator or device-oriented workflow, use the repository's Tizen deploy pipeline:

./deploy.sh -a x86_64

Useful variants:

./deploy.sh -a x86_64 -n
./deploy.sh -a x86_64 -d <device-serial>
./deploy.sh -a x86_64 -s

This path is the canonical Tizen validation flow. It handles build, packaging, deployment, and service restart on the target.

Workspace

TizenClaw is a Rust workspace with clearly separated runtime roles:

  • src/tizenclaw: main daemon
  • src/tizenclaw-cli: IPC client and operational CLI
  • src/tizenclaw-web-dashboard: standalone web dashboard
  • src/tizenclaw-tool-executor: isolated tool-execution sidecar
  • src/libtizenclaw-core: shared framework and plugin/runtime support
  • src/libtizenclaw: C-facing client library
  • src/tizenclaw-metadata-*: metadata plugin crates for skills, CLI, and LLM backend extensions
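
The crate list implies a workspace manifest along these lines; this is a plausible sketch, not the repository's actual Cargo.toml:

```toml
[workspace]
members = [
    "src/tizenclaw",               # main daemon
    "src/tizenclaw-cli",           # IPC client and operational CLI
    "src/tizenclaw-web-dashboard",
    "src/tizenclaw-tool-executor",
    "src/libtizenclaw-core",
    "src/libtizenclaw",
    # plus the src/tizenclaw-metadata-* plugin crates
]
```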


Status

The project is actively evolving, but the central direction is already clear: TizenClaw aims to be a serious autonomous agent runtime for Tizen and embedded Linux, not just a sample app. Its strengths are persistence, explicit platform boundaries, flexible access surfaces, and unusually practical remote coding control through Telegram plus local coding-agent CLIs.
