Welcome! This guide will get you from telemetry refresh to triage in under 5 minutes. It covers:
- How to install secopsai
- How to run your first detection
- How to understand findings
- How to investigate and triage them
- Where to go next
secopsai is a local-first detection and triage platform for OpenClaw and host telemetry. It comes with:
- Detection rules for dangerous execution, policy abuse, data exfiltration, malware, and supply-chain release review
- Reproducible benchmark corpus for OpenClaw validation
- Live telemetry support for OpenClaw and host adapters
- Native triage workflow with investigation reports and queued analyst actions
macOS/Linux:
```bash
curl -fsSL https://secopsai.dev/install.sh | bash
```
Security note: only run a `curl | bash` installer if you trust the publisher and the source code. If you prefer a safer path, clone the repo and inspect docs/install.sh + setup.sh before running.
This will:
- Clone `https://github.com/Techris93/secopsai.git` into `~/secopsai` (or `$SECOPSAI_HOME` if set)
- Create a virtualenv at `~/secopsai/.venv`
- Install Python dependencies and the `secopsai` CLI (editable install)
- Run basic validation + benchmark setup
Default behaviour (non-interactive):
- Optional native surfaces: disabled
- Benchmark generation: enabled
- Live export: disabled
Optional controls:
- `SECOPSAI_INSTALL_REF=<git ref or commit>` – pin to a specific version (by default, a fixed known-good commit is used)
- `SECOPSAI_HOME=/path/to/dir` – change the checkout location (default: `$HOME/secopsai`)
Example to explicitly track latest main instead of the pinned commit:
```bash
SECOPSAI_INSTALL_REF=main curl -fsSL https://secopsai.dev/install.sh | bash
```
After install, activate the environment:
```bash
cd ~/secopsai
source .venv/bin/activate
```
You now have the `secopsai` CLI available:
```bash
secopsai refresh                     # run the OpenClaw live pipeline
secopsai refresh --platform macos    # run adapter collection for a specific platform
secopsai live --platform macos       # stream adapter events live
secopsai correlate                   # run cross-platform correlation
secopsai list --severity high        # list high-severity findings
secopsai show OCF-XXXX               # inspect a finding
secopsai triage orchestrate --search-root ~/secopsai   # investigate open findings

# Add --json to any command for machine-friendly output
# (either before or after the subcommand)
secopsai list --severity high --json
secopsai --json list --severity high
```
Prefer not to use the installer script? Clone and set up manually:
```bash
git clone https://github.com/Techris93/secopsai.git
cd secopsai
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
python prepare.py            # Generate data/events.json and data/events_unlabeled.json
python -m pytest tests/ -v   # Optional: verify installation
```
Create a reproducible labeled attack dataset:
```bash
python generate_openclaw_attack_mix.py --stats
```
Output:
```text
┌──────────────────────────────────────────┐
│      Attack-Mix Benchmark Generator      │
├──────────────────────────────────────────┤
│  Base benign events:   58                │
│  Simulated attacks:    22                │
│  Total events:         80                │
│  Timestamp range:      2 hours           │
└──────────────────────────────────────────┘

Attack Types:
  ✓ Dangerous Exec       (2 events)
  ✓ Sensitive Config     (1 event)
  ✓ Skill Source Drift   (1 event)
  ✓ Policy Denial Churn  (1 event)
  ✓ Tool Burst           (2 events)
  ✓ Pairing Churn        (1 event)
  ✓ Subagent Fanout      (2 events)
  ✓ Restart Loop         (2 events)
  ✓ Data Exfiltration    (3 events)
  ✓ Malware Presence     (2 events)

Files written:
  ✓ data/openclaw/replay/labeled/attack_mix.json
  ✓ data/openclaw/replay/unlabeled/attack_mix.json
```
```bash
python evaluate_openclaw.py \
  --labeled data/openclaw/replay/labeled/attack_mix.json \
  --unlabeled data/openclaw/replay/unlabeled/attack_mix.json \
  --mode benchmark --verbose
```
Expected result:
```text
┌─────────────────────────────────────────────┐
│          OpenClaw Attack Detection          │
├─────────────────────────────────────────────┤
│  F1 Score:             1.000000  ✓          │
│  Precision:            1.000000  ✓          │
│  Recall:               1.000000  ✓          │
│  False Positive Rate:  0.000000             │
│                                             │
│  True Positives:   22  (attacks caught)     │
│  False Positives:   0  (zero noise)         │
│  False Negatives:   0  (nothing missed)     │
│  True Negatives:   58  (benign OK)          │
└─────────────────────────────────────────────┘
```
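The scores in the box follow directly from the confusion-matrix counts. As a sanity check, here is a minimal sketch using the standard precision/recall/F1 formulas (plain Python, not secopsai code):

```python
# Confusion-matrix counts from the benchmark run above
tp, fp, fn, tn = 22, 0, 0, 58

precision = tp / (tp + fp)  # flagged events that were real attacks
recall = tp / (tp + fn)     # real attacks that were flagged
f1 = 2 * precision * recall / (precision + recall)
fpr = fp / (fp + tn)        # benign events wrongly flagged

print(f"F1={f1:.6f} precision={precision:.6f} recall={recall:.6f} FPR={fpr:.6f}")
# → F1=1.000000 precision=1.000000 recall=1.000000 FPR=0.000000
```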
Perfect score! Your detection pipeline is ready.
If you have OpenClaw installed with audit logs in ~/.openclaw/:
```bash
python detect.py
```
This will:
- Export your local OpenClaw audit logs
- Run detection rules
- Output findings to `findings.json`
Once findings exist in the SOC store:
```bash
secopsai triage list --status open --limit 20
secopsai triage investigate SCM-XXXX --json
secopsai triage orchestrate --search-root ~/secopsai
secopsai triage queue
```
This gives you:
- evidence-gathering case files in `reports/triage/`
- low-risk auto-closure for clearly safe findings
- a queue for risky actions such as allowlisting or tuning
The `findings.json` file contains detected attacks with context:
```json
{
  "total_findings": 22,
  "findings": [
    {
      "finding_id": "OCF-001",
      "title": "Dangerous Exec: curl | bash injection",
      "rule_id": "RULE-101",
      "attack_type": "T1059 - Command and Scripting Interpreter",
      "severity": "CRITICAL",
      "confidence": 1.0,
      "event_ids": ["evt-042"],
      "description": "Detected dangerous pipe execution pattern",
      "pattern": "curl ... | bash",
      "remediation": "Review command source; disable if unauthorized"
    },
    {
      "finding_id": "OCF-002",
      "title": "Data Exfiltration: curl -F upload",
      "rule_id": "RULE-109",
      "attack_type": "T1048 - Exfiltration Over Alternative Protocol",
      "severity": "HIGH",
      "timestamp": "2026-03-15T14:23:45Z",
      ...
    }
  ]
}
```
Each finding shows:
- What was detected — the attack pattern
- Which rule caught it — RULE-101, RULE-109, etc.
- How severe — CRITICAL, HIGH, MEDIUM, LOW
- Confidence — 0.0-1.0 likelihood of being a real attack
- What action to take — remediation guidance
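For scripted triage you can also consume `findings.json` directly. A minimal sketch, assuming the schema shown above; the embedded sample below is illustrative, not real detector output:

```python
import json

# Illustrative findings.json content, mirroring the schema shown above.
raw = """
{
  "total_findings": 2,
  "findings": [
    {"finding_id": "OCF-001", "severity": "CRITICAL", "confidence": 1.0,
     "title": "Dangerous Exec: curl | bash injection"},
    {"finding_id": "OCF-003", "severity": "LOW", "confidence": 0.3,
     "title": "Tool Burst: repeated tool invocations"}
  ]
}
"""
data = json.loads(raw)

# Keep only findings worth immediate analyst attention:
# HIGH-or-worse severity AND high confidence.
urgent = [
    f for f in data["findings"]
    if f["severity"] in {"CRITICAL", "HIGH"} and f.get("confidence", 0) >= 0.8
]

for f in urgent:
    print(f"{f['finding_id']}  {f['severity']:<8}  {f['title']}")
```

In practice, replace the embedded sample with `json.load(open("findings.json"))`; the 0.8 confidence cutoff is an arbitrary example threshold, not a secopsai default.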
See Rules Registry for per-rule detection behavior and tuning guidance.
Check Deployment Guide for production deployment patterns.
Read Findings Triage Guide for manual review and Triage Orchestrator for the automated guarded workflow.
Visit API Reference to write custom rules or integrate with your tools.
**OpenClaw not installed?** Install OpenClaw from docs.openclaw.ai/install.
**No findings from live telemetry?** This is expected! Live telemetry is usually benign. Try the benchmark instead:
```bash
python generate_openclaw_attack_mix.py --stats
python evaluate_openclaw.py --labeled data/openclaw/replay/labeled/attack_mix.json --unlabeled data/openclaw/replay/unlabeled/attack_mix.json --mode benchmark
```
**Environment problems?** Ensure Python 3.10+ and run:
```bash
pip install --upgrade -r requirements.txt
python -m pytest tests/ -v
```
- Documentation: secopsai.dev
- GitHub Issues: Report a bug
- Discussions: Ask a question
Ready for more? → Read Rules Registry to understand each detection rule.