
Add tasks.isolate config for optional task core isolation #25273

Open

dnakov wants to merge 1 commit into radareorg:master from dnakov:tasks-isolate-config

Conversation

@dnakov (Contributor) commented Jan 20, 2026

Enable CUSTOMCORE code path with runtime guard via tasks.isolate config (default false). When enabled, tasks get isolated console and block buffer.

  • Mark this if you consider it ready to merge
  • I've added tests (optional)
  • I wrote some lines in the book (optional)

Description

Enable CUSTOMCORE code path with runtime guard via tasks.isolate config
(default false). When enabled, tasks get isolated console and block buffer.
@trufae (Collaborator) commented Feb 14, 2026

PR #25273 Review — "Add tasks.isolate config for optional task core isolation"

PR: #25273
Author: @dnakov
Branch: tasks-isolate-config → master
Files changed: libr/core/cconfig.c, libr/core/task.c (+28 / -6)

Verdict: ❌ Request Changes

The intent is reasonable — provide optional per-task isolation of the console and block buffer so concurrent tasks don't interfere with each other's output or working data. However, the implementation has multiple high-severity correctness and safety issues that would cause crashes, data corruption, and subtle concurrency bugs if merged.


Motivation Analysis

What problem is this trying to solve?

In radare2's task system (& commands, background tasks), all tasks share the same RCore instance. This means:

  1. Console output interleaving — Two concurrent tasks writing to core->cons will mix their output, making task results unreadable or incorrect.
  2. Block buffer races — core->block is a shared mutable buffer. If task A seeks to address X and reads into core->block, and task B seeks to address Y, task A's block data is silently overwritten.

The PR tries to solve this by optionally giving each task its own RCons and its own copy of core->block when the user sets tasks.isolate=true.

Why was CUSTOMCORE=0 before?

The CUSTOMCORE code path was disabled (#define CUSTOMCORE 0) because the previous implementation was incomplete and known to be problematic (the old XXX/TODO comments acknowledge this). This PR enables it (CUSTOMCORE 1) with a runtime config guard, which is a reasonable approach — but the implementation still has the same fundamental problems that caused it to be disabled, plus new ones.


Code Path Analysis

How it works (the intended flow)

  1. User sets e tasks.isolate=true at runtime
  2. When a task runs (task_run()), it calls mycore_new(core)
  3. mycore_new() does memcpy(c, core, sizeof(RCore)) — a shallow copy of the entire ~1KB struct
  4. It replaces c->cons with r_cons_new() and duplicates c->block
  5. The task executes its command via r_core_cmd_str(local_core, task->cmd)
  6. On completion, mycore_free(local_core, core) frees the block, cons, and the shell struct

What r_core_cmd_str does with the core

```c
r_core_cmd_str(core, cmd):
    r_cons_push(core->cons)           // push/pop on the task's isolated cons ✓
    r_core_cmd(core, cmd, 0)          // but the command uses core->io, core->anal, etc.
    result = r_cons_get_buffer(...)   // get output from the isolated cons
    r_cons_pop(core->cons)
    return result
```

The command execution touches core->io, core->anal, core->flags, core->config, core->rcmd, core->bin, core->num, and many other shared subsystems which are NOT isolated.


Issues

  • 🔴 r_cons_new() clobbers the global console singleton

  • 🔴 Shallow memcpy of RCore shares all mutable subsystems

  • 🔴 Block malloc failure silently ignored — will cause SIGSEGV

  • 🟠 Embedded RCoreTaskScheduler tasks is shallow-copied

  • 🟠 Silent fallback to shared core hides isolation failures

  • 🟡 mycore_free missing free(c) in original CUSTOMCORE code path
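As one concrete shape for fixing the malloc issue, the block duplication could fail loudly instead of leaving the clone pointing at memory it does not own. A minimal sketch with an invented helper name (`dup_block` is not a radare2 API):

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical helper: duplicate a task's block, returning NULL on OOM
 * so the caller can fall back (and report) instead of crashing later. */
static unsigned char *dup_block(const unsigned char *block, size_t size) {
	unsigned char *copy = malloc(size);
	if (!copy) {
		return NULL;  /* caller must handle this, not SIGSEGV on first use */
	}
	memcpy(copy, block, size);
	return copy;
}
```

The caller would then either abort isolation with a visible warning or refuse to run the task, rather than silently continuing with a dangling or shared buffer.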


Additional Risks if Merged

  1. Race on core->addr (seek position) — The cloned core copies the current seek, but it shares core->io. Any seek command in the task will modify core->io->off and affect all other tasks and the main core.

  2. core->num->value is shared — Expression evaluation results ($$, $?) are stored here. Concurrent tasks will overwrite each other's return values.

  3. core->break_loop, core->tmpseek, and other booleans — These are copied by value in the memcpy, but the underlying subsystems they interact with (core->io, etc.) are shared. Modifications create inconsistent state.

  4. core->lock (RThreadLock*) — The shallow copy shares the same lock pointer. Both the original and cloned core calling R_CRITICAL_ENTER(core) will use the same mutex, which is actually correct for mutual exclusion — but it means the "isolation" provides no concurrency benefit for commands that use the core lock.

  5. The r_core_cmd_str already does r_cons_push/pop — This function already isolates console output by pushing a new context. The additional r_cons_new() creates a separate console entirely, which is a heavier-weight isolation. It's worth considering whether the push/pop mechanism is sufficient for the task use case.


Summary

| Aspect | Assessment |
|---|---|
| Motivation | Valid — task output isolation is needed |
| Approach | Risky — shallow memcpy of a 1KB+ struct with 50+ shared pointers |
| Console isolation | Broken — r_cons_new() clobbers the global singleton |
| Block isolation | Partial — malloc failure silently ignored |
| Subsystem isolation | None — io, anal, flags, config all shared |
| Error handling | Silent fallback hides failures |
| Thread safety | Worse than no isolation — false sense of safety |

The PR's goal is worthwhile, but the implementation needs significant rework. A safer approach might be to:

  1. Use r_cons_new2() to avoid the global singleton problem
  2. Only replace cons and block on the original core (with save/restore) instead of the full memcpy
  3. Or, more ambitiously, implement proper r_core_clone() that deep-copies the necessary subsystems

