
scx_lavd: scheduler prioritizes E-cores on Lunar Lake (Intel Core Ultra 7 258V) even under heavy single-threaded load #3259

@ver4a

Description

Hello,

When playing a game (Mount & Blade: Warband in this case), the scheduler uses only E-cores; it seems to move work to P-cores only once all E-cores are sufficiently loaded, which never happens in this game because it is not very multithreaded and fully loads only a single core.

I see similar behavior when running arbitrary workloads such as "pv /dev/urandom > /dev/null" and "zstd -b1": performance starts off very low (E-cores only) and improves if I launch several instances, or run zstd with multithreading, at which point they get moved to P-cores.

In the same scene in the game, I get a stable 60 fps with the default scheduler (EEVDF) and only about 30 fps with scx_lavd (both autopilot/default and --performance).

Specs:
OS: Fedora 43 (Kinoite)
CPU: Intel Core Ultra 7 258V
kernel: 6.18.6-200.fc43.x86_64
kconfig: config.txt
governor: intel_pstate active powersave
scx_lavd version: scx_lavd 1.0.20-gae55786b x86_64-unknown-linux-gnu (built from main 2 days ago)

All testing is done on AC power; core usage is monitored with btop.
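As a btop-free way to cross-check which cores are taking the load, per-CPU utilization can be sampled directly from /proc/stat. The sketch below is a hypothetical helper script, not part of scx_lavd; it samples /proc/stat twice and prints the busy fraction of each CPU over a one-second window:

```python
#!/usr/bin/env python3
"""Rough per-CPU utilization sampler (Linux only, reads /proc/stat)."""
import os
import time


def parse_cpu_times(text):
    """Return {cpu_name: (busy_ticks, total_ticks)} from /proc/stat text.

    Skips the aggregate "cpu" line and keeps only per-CPU lines (cpu0, cpu1, ...).
    Idle time is counted as idle + iowait (fields 4 and 5).
    """
    out = {}
    for line in text.splitlines():
        if line.startswith("cpu") and line[3:4].isdigit():
            fields = line.split()
            ticks = [int(x) for x in fields[1:]]
            idle = ticks[3] + ticks[4]  # idle + iowait
            total = sum(ticks)
            out[fields[0]] = (total - idle, total)
    return out


def sample(interval=1.0):
    """Print the busy percentage of each CPU over `interval` seconds."""
    with open("/proc/stat") as f:
        before = parse_cpu_times(f.read())
    time.sleep(interval)
    with open("/proc/stat") as f:
        after = parse_cpu_times(f.read())
    for cpu in sorted(before, key=lambda c: int(c[3:])):
        busy = after[cpu][0] - before[cpu][0]
        total = after[cpu][1] - before[cpu][1]
        pct = 100.0 * busy / total if total else 0.0
        print(f"{cpu}: {pct:5.1f}% busy")


if __name__ == "__main__" and os.path.exists("/proc/stat"):
    sample()
```

Running this while the game is active should show whether the hot CPU ID corresponds to an E-core or a P-core (the mapping can be checked with lscpu, or on Intel hybrid systems via /sys/devices/cpu_core/cpus and /sys/devices/cpu_atom/cpus).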

logs:

performance:
  • launch:
root@fedora:/var/home/ver4a# /bin/scx_lavd --kconfig /usr/lib/modules/6.18.6-200.fc43.x86_64/config --performance
2026-01-26T22:24:43.357648Z  INFO ThreadId(01) scx_lavd: scheds/rust/scx_lavd/src/main.rs:317: Performance mode is enabled.
2026-01-26T22:24:43.358495Z  INFO ThreadId(01) scx_lavd: scheds/rust/scx_lavd/src/main.rs:354: Pinned task slice mode is enabled (5000 us). Pinned tasks will use per-CPU DSQs.
2026-01-26T22:24:43.358519Z  INFO ThreadId(01) scx_lavd: scheds/rust/scx_lavd/src/main.rs:977: Opts {
    verbose: 0,
    autopilot: false,
    autopower: false,
    performance: true,
    powersave: false,
    balanced: false,
    slice_max_us: 5000,
    slice_min_us: 500,
    mig_delta_pct: 0,
    pinned_slice_us: Some(
        5000,
    ),
    preempt_shift: 6,
    cpu_pref_order: "",
    no_use_em: false,
    no_futex_boost: false,
    no_preemption: false,
    no_wake_sync: false,
    no_slice_boost: false,
    per_cpu_dsq: false,
    enable_cpu_bw: false,
    no_core_compaction: true,
    no_freq_scaling: false,
    stats: None,
    monitor: None,
    monitor_sched_samples: None,
    log_level: "info",
    version: false,
    run_id: None,
    help_stats: false,
    libbpf: LibbpfOpts {
        relaxed_maps: None,
        pin_root_path: None,
        kconfig: Some(
            "/usr/lib/modules/6.18.6-200.fc43.x86_64/config",
        ),
        btf_custom_path: None,
        bpf_token_path: None,
    },
    topology: None,
}
2026-01-26T22:24:43.440016Z  INFO ThreadId(01) scx_lavd: scheds/rust/scx_lavd/src/main.rs:541: capacity bound:  676 (9.997043%)
  primary CPUs:  [7]
  overflow CPUs: [6, 5, 4, 3, 2, 1]
2026-01-26T22:24:43.440032Z  INFO ThreadId(01) scx_lavd: scheds/rust/scx_lavd/src/main.rs:541: capacity bound:  1352 (19.994085%)
  primary CPUs:  [6, 7]
  overflow CPUs: [5, 4, 3, 2, 1]
2026-01-26T22:24:43.440034Z  INFO ThreadId(01) scx_lavd: scheds/rust/scx_lavd/src/main.rs:541: capacity bound:  2028 (29.991127%)
  primary CPUs:  [5, 6, 7]
  overflow CPUs: [4, 3, 2, 1]
2026-01-26T22:24:43.440038Z  INFO ThreadId(01) scx_lavd: scheds/rust/scx_lavd/src/main.rs:541: capacity bound:  2704 (39.98817%)
  primary CPUs:  [4, 5, 6, 7]
  overflow CPUs: [3, 2, 1]
2026-01-26T22:24:43.440040Z  INFO ThreadId(01) scx_lavd: scheds/rust/scx_lavd/src/main.rs:541: capacity bound:  3728 (55.13162%)
  primary CPUs:  [3, 4, 5, 6, 7]
  overflow CPUs: [2, 1]
2026-01-26T22:24:43.440042Z  INFO ThreadId(01) scx_lavd: scheds/rust/scx_lavd/src/main.rs:541: capacity bound:  4057 (59.99704%)
  primary CPUs:  [3, 2, 4, 6, 7]
  overflow CPUs: [5, 1]
2026-01-26T22:24:43.440045Z  INFO ThreadId(01) scx_lavd: scheds/rust/scx_lavd/src/main.rs:541: capacity bound:  4733 (69.99409%)
  primary CPUs:  [3, 2, 4, 5, 6, 7]
  overflow CPUs: [1]
2026-01-26T22:24:43.440047Z  INFO ThreadId(01) scx_lavd: scheds/rust/scx_lavd/src/main.rs:541: capacity bound:  5738 (84.85655%)
  primary CPUs:  [3, 1, 2, 4, 5, 6, 7]
  overflow CPUs: []
2026-01-26T22:24:44.138038Z  WARN ThreadId(01) scx_utils::libbpf_logger: rust/scx_utils/src/libbpf_logger.rs:12: libbpf: map 'lavd_ops': BPF map skeleton link is uninitialized

2026-01-26T22:24:44.156020Z  INFO ThreadId(01) scx_lavd: scheds/rust/scx_lavd/src/main.rs:1010: scx_lavd scheduler is initialized (build ID: 1.0.20-gae55786b x86_64-unknown-linux-gnu)
2026-01-26T22:24:44.156040Z  INFO ThreadId(01) scx_lavd: scheds/rust/scx_lavd/src/main.rs:1014: scx_lavd scheduler starts running.
autopilot/default:
  • launch:
root@fedora:/var/home/ver4a# /bin/scx_lavd --kconfig /usr/lib/modules/6.18.6-200.fc43.x86_64/config
2026-01-26T22:49:43.719448Z  INFO ThreadId(01) scx_lavd: scheds/rust/scx_lavd/src/main.rs:301: Autopilot mode is enabled.
2026-01-26T22:49:43.720129Z  INFO ThreadId(01) scx_lavd: scheds/rust/scx_lavd/src/main.rs:354: Pinned task slice mode is enabled (5000 us). Pinned tasks will use per-CPU DSQs.
2026-01-26T22:49:43.720150Z  INFO ThreadId(01) scx_lavd: scheds/rust/scx_lavd/src/main.rs:977: Opts {
    verbose: 0,
    autopilot: true,
    autopower: false,
    performance: false,
    powersave: false,
    balanced: false,
    slice_max_us: 5000,
    slice_min_us: 500,
    mig_delta_pct: 0,
    pinned_slice_us: Some(
        5000,
    ),
    preempt_shift: 6,
    cpu_pref_order: "",
    no_use_em: false,
    no_futex_boost: false,
    no_preemption: false,
    no_wake_sync: false,
    no_slice_boost: false,
    per_cpu_dsq: false,
    enable_cpu_bw: false,
    no_core_compaction: false,
    no_freq_scaling: false,
    stats: None,
    monitor: None,
    monitor_sched_samples: None,
    log_level: "info",
    version: false,
    run_id: None,
    help_stats: false,
    libbpf: LibbpfOpts {
        relaxed_maps: None,
        pin_root_path: None,
        kconfig: Some(
            "/usr/lib/modules/6.18.6-200.fc43.x86_64/config",
        ),
        btf_custom_path: None,
        bpf_token_path: None,
    },
    topology: None,
}
2026-01-26T22:49:43.800104Z  INFO ThreadId(01) scx_lavd: scheds/rust/scx_lavd/src/main.rs:541: capacity bound:  676 (9.997043%)
  primary CPUs:  [7]
  overflow CPUs: [6, 5, 4, 3, 2, 1]
2026-01-26T22:49:43.800119Z  INFO ThreadId(01) scx_lavd: scheds/rust/scx_lavd/src/main.rs:541: capacity bound:  1352 (19.994085%)
  primary CPUs:  [6, 7]
  overflow CPUs: [5, 4, 3, 2, 1]
2026-01-26T22:49:43.800121Z  INFO ThreadId(01) scx_lavd: scheds/rust/scx_lavd/src/main.rs:541: capacity bound:  2028 (29.991127%)
  primary CPUs:  [5, 6, 7]
  overflow CPUs: [4, 3, 2, 1]
2026-01-26T22:49:43.800123Z  INFO ThreadId(01) scx_lavd: scheds/rust/scx_lavd/src/main.rs:541: capacity bound:  2704 (39.98817%)
  primary CPUs:  [4, 5, 6, 7]
  overflow CPUs: [3, 2, 1]
2026-01-26T22:49:43.800125Z  INFO ThreadId(01) scx_lavd: scheds/rust/scx_lavd/src/main.rs:541: capacity bound:  3728 (55.13162%)
  primary CPUs:  [3, 4, 5, 6, 7]
  overflow CPUs: [2, 1]
2026-01-26T22:49:43.800127Z  INFO ThreadId(01) scx_lavd: scheds/rust/scx_lavd/src/main.rs:541: capacity bound:  4057 (59.99704%)
  primary CPUs:  [3, 2, 4, 5, 6]
  overflow CPUs: [7, 1]
2026-01-26T22:49:43.800128Z  INFO ThreadId(01) scx_lavd: scheds/rust/scx_lavd/src/main.rs:541: capacity bound:  4733 (69.99409%)
  primary CPUs:  [3, 2, 4, 5, 6, 7]
  overflow CPUs: [1]
2026-01-26T22:49:43.800130Z  INFO ThreadId(01) scx_lavd: scheds/rust/scx_lavd/src/main.rs:541: capacity bound:  5738 (84.85655%)
  primary CPUs:  [3, 1, 2, 4, 5, 6, 7]
  overflow CPUs: []
2026-01-26T22:49:44.479777Z  WARN ThreadId(01) scx_utils::libbpf_logger: rust/scx_utils/src/libbpf_logger.rs:12: libbpf: map 'lavd_ops': BPF map skeleton link is uninitialized

2026-01-26T22:49:44.511363Z  INFO ThreadId(01) scx_lavd: scheds/rust/scx_lavd/src/main.rs:1010: scx_lavd scheduler is initialized (build ID: 1.0.20-gae55786b x86_64-unknown-linux-gnu)
2026-01-26T22:49:44.511382Z  INFO ThreadId(01) scx_lavd: scheds/rust/scx_lavd/src/main.rs:1014: scx_lavd scheduler starts running.

I hope there's a solution to this, because in my testing scx_lavd gives me significantly lower energy consumption than the default scheduler in low-utilization use cases (hopefully not just because it can't figure out how to use the P-cores).

Please let me know if you need me to provide anything else or test anything. Thank you for your work on this project!
