
Replace Khepri topic routing projection with trie + ordered_set (v4) (backport #15619)#15974

Merged
ansd merged 1 commit into v4.3.x from mergify/bp/v4.3.x/pr-15619 on Apr 9, 2026
Conversation


@mergify mergify bot commented Apr 9, 2026

Resolves #15588.

The previous Khepri topic routing projection (v3) stored topic bindings as sets:set(#binding{}) inside trie leaf nodes. This design had a major performance drawback:

On the insertion/deletion path (in the single Khepri Ra process), every binding change required a read-modify-write of the entire sets:set(), making it O(N) in the number of bindings at that leaf. With many MQTT clients connecting concurrently (each subscribing to the same topic filter), this made the Ra process a bottleneck.

Another, less severe, performance issue was that the entire binding was copied, including the binding arguments, which contain the MQTT 5.0 subscription options such as:

```
[{<<"x-mqtt-subscription-opts">>,table,
  [{<<"id">>,unsignedint,1},
   {<<"no-local">>,bool,false},
   {<<"qos">>,unsignedbyte,0},
   {<<"retain-as-published">>,bool,false},
   {<<"retain-handling">>,unsignedbyte,0}]}]
```

Replace the single ETS projection table with two purpose-built tables:

  1. Trie edges table (ETS set, read_concurrency=true):

    • Row: {{XSrc, ParentNodeId, Word}, ChildNodeId, ChildCount}
    • XSrc = {VHost, ExchangeName} (compact 2-tuple of binaries)
    • NodeId = root | reference()
    • ChildCount tracks outgoing edges for garbage collection
  2. Leaf bindings table (ETS ordered_set, read_concurrency=true):

    • Key: {NodeId, BindingKey, Dest}
    • Stored as 1-tuples: {{NodeId, BindingKey, Dest}}
    • No value column; all data is in the key to minimize copying
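As a rough sketch of this layout (module, function, and table names here are illustrative, not the actual RabbitMQ code), the two tables and their point operations might look like:

```erlang
%% Illustrative sketch of the two-table layout; names are assumptions.
-module(topic_tables_sketch).
-export([new/0, add_edge/4, add_binding/4, delete_binding/4]).

new() ->
    Edges = ets:new(trie_edges, [set, public, {read_concurrency, true}]),
    Bindings = ets:new(trie_bindings,
                       [ordered_set, public, {read_concurrency, true}]),
    {Edges, Bindings}.

%% Edge row: {{XSrc, ParentNodeId, Word}, ChildNodeId, ChildCount}.
%% Node IDs are ephemeral references; 'root' names the root node.
add_edge(Edges, XSrc, ParentNodeId, Word) ->
    ChildNodeId = make_ref(),
    true = ets:insert(Edges, {{XSrc, ParentNodeId, Word}, ChildNodeId, 0}),
    %% A real implementation would also bump the parent's ChildCount here.
    ChildNodeId.

%% Binding row: a 1-tuple whose only element is the key, so nothing
%% beyond the key is ever copied out of ETS.
add_binding(Bindings, NodeId, BindingKey, Dest) ->
    true = ets:insert(Bindings, {{NodeId, BindingKey, Dest}}).

%% O(log N) point delete; no read-modify-write of a set.
delete_binding(Bindings, NodeId, BindingKey, Dest) ->
    true = ets:delete(Bindings, {NodeId, BindingKey, Dest}).
```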

The trie structure preserves O(depth * 3) routing complexity regardless of the number of overall bindings or wildcard filters. At each trie level, we probe at most 3 edges (literal word, <<"*">>, <<"#">>), each via ets:lookup_element/4 which copies only the ChildNodeId (a reference).
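The per-level probe can be sketched as follows (function names are assumptions; ets:lookup_element/4 with a default value requires OTP 26+):

```erlang
%% Sketch of one trie level: probe up to 3 edges for a routing-key word.
-module(trie_probe_sketch).
-export([children/4]).

child(Edges, XSrc, NodeId, Word) ->
    %% Copies only ChildNodeId (element 2) on a hit; returns the
    %% default 'undefined' on a miss instead of raising badarg.
    ets:lookup_element(Edges, {XSrc, NodeId, Word}, 2, undefined).

children(Edges, XSrc, NodeId, Word) ->
    %% At most 3 probes: the literal word, then <<"*">>, then <<"#">>.
    lists:filtermap(
      fun(W) ->
              case child(Edges, XSrc, NodeId, W) of
                  undefined   -> false;
                  ChildNodeId -> {true, {W, ChildNodeId}}
              end
      end,
      [Word, <<"*">>, <<"#">>]).
```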

The ordered_set for bindings provides:

  • O(log N) insert and delete per binding (no read-modify-write)
  • The binding key (needed for MQTT subscription identifiers and topic aliases) is part of the key, so it is returned directly during destination collection without additional lookups

Collecting destinations at a matched trie leaf uses a hybrid strategy:

  • Fanout 0-2 (the common case: unicast, device + stream): up to 3 ets:next/2 probes. Each ets:next/2 call costs O(log N) because the CATree (used with read_concurrency) allocates a fresh tree traversal stack on each call.
  • Fanout > 2: ets:select/2 with a partially bound key does an O(log N) seek followed by an O(F) range scan. The match spec compilation overhead amortises over the larger result set.
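The hybrid strategy above can be sketched like this (names and the exact probe cutoff are illustrative assumptions, not the actual code):

```erlang
%% Sketch of hybrid destination collection at a matched leaf node.
-module(leaf_collect_sketch).
-export([destinations/2]).

%% Keys are {NodeId, BindingKey, Dest}. {NodeId, 0, 0} sorts before
%% every real key of this node (integers sort before binaries in the
%% Erlang term order), so it works as a seek key for ets:next/2.
destinations(Bindings, NodeId) ->
    case probe(Bindings, ets:next(Bindings, {NodeId, 0, 0}), NodeId, 2, []) of
        {done, Acc} ->
            lists:reverse(Acc);
        more ->
            %% Fanout > 2: O(log N) seek plus O(F) range scan.
            ets:select(Bindings,
                       [{{{NodeId, '$1', '$2'}}, [], [{{'$1', '$2'}}]}])
    end.

probe(_T, Key, NodeId, _Left, Acc)
  when Key =:= '$end_of_table'; element(1, Key) =/= NodeId ->
    {done, Acc};                       %% walked past this node's range
probe(_T, _Key, _NodeId, 0, _Acc) ->
    more;                              %% a 3rd binding exists: use select
probe(T, {NodeId, BKey, Dest} = Key, NodeId, Left, Acc) ->
    probe(T, ets:next(T, Key), NodeId, Left - 1, [{BKey, Dest} | Acc]).
```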

ets:lookup_element/4 (OTP 26+) returns a default value on miss instead of throwing badarg, and copies only the requested element on hit. This avoids both exception overhead (misses are common during trie traversal of <<"*">> and <<"#">> branches) and unnecessary data copying (we only need the ChildNodeId, not the full row).

Trie node IDs are ephemeral (the tables are rebuilt when the Khepri projection is re-registered). make_ref() is fast, globally unique within a node, and has good hash distribution for the ETS set table.

When a binding is deleted, the trie path from root to leaf is collected in a single downward walk (trie_follow_down_get_path). Empty nodes are then pruned bottom-up: a node is empty when its ChildCount is 0 and it has no bindings in the ordered_set table.
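A sketch of that bottom-up pruning (names and the path shape are assumptions; the real code must also handle concurrent edits inside the Ra process):

```erlang
%% Sketch of bottom-up trie pruning after a binding delete.
-module(trie_prune_sketch).
-export([prune_up/4]).

%% Path runs leaf-first, one entry per edge walked down:
%% [{ParentNodeId, Word, NodeId}, ...], as collected on the way down.
prune_up(_Edges, _Bindings, _XSrc, []) ->
    ok;
prune_up(Edges, Bindings, XSrc, [{Parent, Word, Node} | Rest]) ->
    ChildCount = ets:lookup_element(Edges, {XSrc, Parent, Word}, 3, 0),
    case ChildCount =:= 0 andalso no_bindings(Bindings, Node) of
        true ->
            true = ets:delete(Edges, {XSrc, Parent, Word}),
            case Rest of
                [{GP, PWord, Parent} | _] ->
                    %% the parent just lost an outgoing edge
                    ets:update_counter(Edges, {XSrc, GP, PWord}, {3, -1});
                [] ->
                    ok   %% Parent is the root; it has no incoming edge row
            end,
            prune_up(Edges, Bindings, XSrc, Rest);
        false ->
            ok           %% node still in use, so all ancestors are too
    end.

%% A node is binding-free iff no key {Node, _, _} exists in the ordered_set.
no_bindings(Bindings, Node) ->
    case ets:next(Bindings, {Node, 0, 0}) of
        {Node, _BindingKey, _Dest} -> false;
        _ -> true
    end.
```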

Benchmarks below were run with 500K routing operations per scenario (on the same machine, back-to-back between main (v3) and this commit).

Significant insert/delete improvements:

Churn insert (8K bindings, 4 filters/client): ~1,120 vs ~810 ops/s (+38%)
v3 did a read-modify-write of sets:set() per binding; v4 does
a single ets:insert into the ordered_set plus trie edge updates.

MQTT device insert (20K bindings): ~650 vs ~420 ops/s (+55%)
Same mechanism as churn insert. Particularly impactful when many
clients share the same wildcard filter (e.g. "broadcast.#"),
since v3's sets:set() grew with each client while v4 inserts
are O(log N) regardless.

Same-key fanout insert (10K): ~415 vs ~290 ops/s (+43%)
The worst case for v3: all 10K bindings share the same key,
so each insert copies and rebuilds the growing sets:set().

Routing improvements:

MQTT unicast (10K devices, 20K bindings): ~460K vs ~250K ops/s (+80%)
Each route matches 1 queue among 10K unique exact keys plus
10K queues sharing "broadcast.#". v3 stored bindings in the same
ETS row as the trie edge, so every trie lookup copied the entire
sets:set(). v4 separates trie edges (small rows, set table) from
bindings (ordered_set), so the trie walk copies only references.

Large fanout (10K queues, same key): ~3,100 vs ~1,170 ops/s (+165%)
v3 copied a 10K-element sets:set() out of ETS in a single
ets:lookup, then called sets:to_list/1. v4 uses ets:select/2
with a partially bound key, which does an O(log N) seek and
then an efficient O(F) range scan without intermediate set
conversion.

MQTT broadcast (10K fanout): ~0.6 vs ~0.9 ms/route (+50% throughput)
Same mechanism as above.

Scenarios with no significant change (within benchmark noise):

Exact match, wildcard *, wildcard #, mixed wildcards, and many
wildcard filters showed no clear difference. Both v3 and v4 use
a trie walk, so routing speed is comparable when the fanout is
small and the bottleneck is trie traversal rather than destination
collection.


This is an automatic backport of pull request #15619 done by [Mergify](https://mergify.com).


* Replace Khepri topic routing projection with trie + ordered_set (v4)

* Improve function names

Improve function names and add function specs.

(cherry picked from commit 8929bc5)
@mergify mergify bot assigned ansd Apr 9, 2026
@ansd ansd added this to the 4.3.0 milestone Apr 9, 2026
@ansd ansd merged commit 51f7157 into v4.3.x Apr 9, 2026
362 of 363 checks passed
@ansd ansd deleted the mergify/bp/v4.3.x/pr-15619 branch April 9, 2026 13:22