Commit 26b4926

jhaaaa and codabrink authored
XIP-80: Atomic membership (#135)

* first pass
* add discussions link
* fix lint errors
* nit

Co-authored-by: Dakota Brink <git@kota.is>
1 parent fa32dce commit 26b4926

File tree

1 file changed: +183 −0 lines changed
XIPs/xip-80-atomic-membership.md

Lines changed: 183 additions & 0 deletions
@@ -0,0 +1,183 @@
---
xip: 80
title: Atomic membership
description: A scaling proposal to limit the number of installations in a group for an inbox
author: Dakota Brink (@codabrink)
discussions-to: https://improve.xmtp.org/t/xip-80-atomic-membership/2072
status: Draft
type: Standards
category: XRC
created: 2026-01-28
---

## Abstract

This XIP proposes a scaling change that limits the number of installations an inbox keeps in a group. It unlocks the ability for developers to run more than one instance of an agent at a time. It also reduces commit size, reduces the number of commits, and improves security for XMTP's most privacy-conscious users.

## Motivation

In XMTP today, every installation of every inbox is a member of every group that inbox belongs to. This is generally ideal for the average user, but it is a problem for agents running more than one instance: n bots will each receive every message and reply n times to every prompt, while also bloating the group with unnecessary installations. In practice, a group only needs one installation of an agent at a time.

## Specification

### New leaf node extension: Atomic membership

Here's what the extension will look like. It will be serialized as a protobuf in the leaf node's extension data.

```rust
struct AtomicMembershipExtension {
    flags: u8,
}
```

If the first bit is set (`flags & 1 == 1`) on any of an inbox's installations, other inboxes will only ensure that at least one active installation for that inbox is present in the group. Atomic inboxes are expected to largely manage their own installations in groups.

When other inboxes search for which installation to add, they will randomly select only from the pool of installations where the second bit is set (`flags & 2 == 2`). This allows installations to opt out of being added to new groups. If no installation has this bit set, the bit is ignored.

**Possibly:** A third flag (`flags & 4 == 4`) tells other users to select the oldest valid installation instead of a random one. (Details at the end.)
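
The flag semantics above could be sketched as follows. This is illustrative only: `Installation` and `candidate_pool` are assumed names, not part of the spec, and a real implementation would pick one installation at random from the returned pool.

```rust
// Illustrative sketch (assumed names) of the flag semantics above:
// bit 0 marks the inbox as atomic, bit 1 marks an installation as eligible
// to be added to new groups, bit 2 (tentative) prefers the oldest one.

const ATOMIC: u8 = 0b001;  // flags & 1: inbox manages its own installations
const ADDABLE: u8 = 0b010; // flags & 2: may be added to new groups
const OLDEST: u8 = 0b100;  // flags & 4: prefer the oldest valid installation

#[derive(Debug)]
struct Installation {
    id: &'static str,
    flags: u8,
    created_at: u64,
}

/// Returns the installations another inbox may pick from when adding this
/// inbox to a group. `None` means "not atomic: add all installations".
fn candidate_pool(installs: &[Installation]) -> Option<Vec<&Installation>> {
    // If no installation is flagged atomic, callers add every installation.
    if !installs.iter().any(|i| i.flags & ATOMIC == ATOMIC) {
        return None;
    }
    // Prefer installations that opted in to being added; if none did,
    // the ADDABLE bit is ignored and all installations are candidates.
    let opted: Vec<&Installation> = installs
        .iter()
        .filter(|i| i.flags & ADDABLE == ADDABLE)
        .collect();
    let mut pool: Vec<&Installation> = if opted.is_empty() {
        installs.iter().collect()
    } else {
        opted
    };
    // Tentative third flag: collapse the pool to the oldest installation.
    if installs.iter().any(|i| i.flags & OLDEST == OLDEST) {
        pool.sort_by_key(|i| i.created_at);
        pool.truncate(1);
    }
    Some(pool) // a real implementation picks one at random from this pool
}
```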

### Example scenarios

1. Adding an atomic inbox to a group

```mermaid
graph TD
    ADD[Alix wishes to add Bo. Alix downloads Bo's KPs]
    ADD --> CHECK_ATOMIC[Is any KP flagged as atomic?]
    CHECK_ATOMIC -->|no| ADD_ALL[Add all installations]
    CHECK_ATOMIC -->|yes| ADD_RANDOM

    ADD_RANDOM[Add a single valid random installation.]
```

2. When other inboxes check for missing installations

```mermaid
graph TD
    CHECK_MISSING[Caro checks for missing installations] --> ATOMIC_CHECK[Are any of Bo's leaf nodes flagged as atomic?]
    ATOMIC_CHECK -->|no| ADD_ALL[Add all missing installations.]
    ATOMIC_CHECK -->|yes| PARTICIPATE[Does Bo have >= 1 installation in group?]
    PARTICIPATE -->|yes| SKIP[Bo should manage his own installations.]
    PARTICIPATE -->|no| AT_LEAST_ONE[Bo needs at least one active installation in the group. Add random installation.]
```

3. When an atomic inbox checks their own installations

```mermaid
graph TD
    CHECK[Bo checks installations as an atomic inbox]
    TOO_MANY[Do I have too many installations?]
    TOO_FEW[Do I have too few installations?]
    DO_NOTHING[Do nothing]
    RANDOMLY_RETAIN[Retain random installations including myself.]
    RANDOMLY_ADD[Add random installations.]

    CHECK --> TOO_MANY
    CHECK --> TOO_FEW

    TOO_MANY -->|yes| RANDOMLY_RETAIN
    TOO_MANY -->|no| DO_NOTHING

    TOO_FEW -->|yes| RANDOMLY_ADD
    TOO_FEW -->|no| DO_NOTHING
```
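
The self-balancing check in that diagram could be sketched roughly as below. `Action`, `self_balance`, and the parameters are illustrative names, not part of the spec, and the "retain" branch here is deterministic where a real implementation would choose which peers to keep at random.

```rust
// Illustrative sketch (assumed names) of the self-check an atomic
// installation runs against one group, per the diagram above.

#[derive(Debug, PartialEq)]
enum Action {
    DoNothing,
    // Keep these installation ids (always including our own); remove the rest.
    RetainOnly(Vec<&'static str>),
    // Add this many additional installations, chosen at random.
    AddRandom(usize),
}

fn self_balance(my_id: &'static str, in_group: &[&'static str], limit: usize) -> Action {
    if in_group.len() > limit {
        // Too many: retain `limit` installations, always including ourselves.
        // (A real implementation would pick the retained peers at random.)
        let mut keep = vec![my_id];
        keep.extend(
            in_group
                .iter()
                .copied()
                .filter(|id| *id != my_id)
                .take(limit.saturating_sub(1)),
        );
        Action::RetainOnly(keep)
    } else if in_group.len() < limit {
        // Too few: add installations until the limit is met.
        Action::AddRandom(limit - in_group.len())
    } else {
        Action::DoNothing
    }
}
```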

Effectively, if your inbox is flagged as atomic, other members only ensure that at least one of your active installations is in the group; beyond that, you manage your own installations in the groups you participate in. This keeps other members from having to frequently download your key packages to check whether you're still atomic and balanced.

4. Bo (an atomic inbox) joins a new group

When an atomic installation joins a new group and the number of Bo's installations in that group exceeds the client's configured installation limit, that installation immediately removes other installations until the limit is satisfied. This allows "over-capacity" installations to effectively remove themselves from groups by adding other installations that are "under-capacity". To prevent race conditions with removal, this swap can only be done by adding extra installations at limit+1.
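
That on-join pruning step might look like the following sketch; `prune_on_join` and its parameters are hypothetical names, not part of the spec.

```rust
// Illustrative sketch (assumed names): when an installation of an atomic
// inbox joins a group over the configured limit, it prunes its inbox's
// installations back down to the limit, removing peers (never itself).

fn prune_on_join(
    my_id: &'static str,
    mut in_group: Vec<&'static str>,
    limit: usize,
) -> Vec<&'static str> {
    while in_group.len() > limit {
        // Remove some installation other than ourselves. Per the spec, the
        // newly added under-capacity installation removes over-capacity peers.
        if let Some(pos) = in_group.iter().position(|id| *id != my_id) {
            in_group.remove(pos);
        } else {
            break; // only ourselves left; nothing more to remove
        }
    }
    in_group
}
```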

### New sync group message type: `InstallationLimit`

A new sync group message type will be introduced to set the installation limit for all installations. The last `InstallationLimit` message in the sync group is the effective value for all installations.

```protobuf
message InstallationLimit {
  // Set to None to disable the atomic inbox.
  optional int32 installation_limit = 1;
  // installation_ids in this list will not be added
  // to new groups.
  repeated bytes frozen_installations = 2;
}
```
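
The last-message-wins rule could be sketched as below; the Rust struct mirrors the protobuf message, and `effective_limit` is an illustrative name, not part of the spec.

```rust
// Illustrative sketch: resolving the effective limit from a sync group's
// message history. Per the spec, the last InstallationLimit message wins,
// and None disables atomic behavior.

#[derive(Debug, Clone, PartialEq)]
struct InstallationLimit {
    installation_limit: Option<i32>,
    frozen_installations: Vec<Vec<u8>>,
}

fn effective_limit(history: &[InstallationLimit]) -> Option<i32> {
    // Only the most recent message matters; earlier ones are superseded.
    history.last().and_then(|m| m.installation_limit)
}
```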

### New client functions

1. `client.enable_atomic(limit: usize)`: Sends a sync group message signaling all installations to create atomic KPs.
2. `client.disable_atomic()`: Sends a sync group message signaling all installations to disable atomic KPs.
3. `client.is_atomic()`: Returns whether the inbox is currently atomic.

### What happens if I enable this feature and I already have multiple installations?

During the missing-installations check, an atomic inbox also performs an over-limit check. If an installation sees that its inbox has too many installations in a group, it removes other installations until the limit is met.

### What happens when I revoke an installation?

If a revoked installation was an atomic inbox's sole member of a group, regular members will ensure that at least one valid installation of that inbox remains in the group, so you are re-added as long as you still have a valid installation.

### What if a client goes over capacity?

An over-capacity installation can add an installation that is under capacity. That under-capacity atomic installation performs a check on join to ensure the limit is met; if the installation count is over the limit, it removes the over-capacity installation.

## Backward compatibility

This is a breaking change. Old installations that don't have this extension will ignore the atomic flag and add the missing installations. This will also cause invalid commits on installations without the extension, because they expect all of an inbox's installations to be present.

## Security considerations

This extension limits the number of installations in a group. One attack vector would be to add an inactive installation to the group. However, as mentioned above, other members would notice that the installation is inactive and swap it out.

### Threat model

See [More secure inboxes](#more-secure-inboxes).

## Agent functionality

Once this feature is implemented, enabling it as an agent developer who wants to deploy more than one bot is relatively simple:

1. Ensure that your installations are pruned.
   1. Be sure to revoke any installations that are no longer online.
2. Call `client.enable_atomic(limit)` with your desired per-group installation limit.

This flags your inbox as an atomic inbox, so you will only have one installation per group, allowing you to create multiple agents with ease.
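
A minimal sketch of how that client surface might hang together. Only the three function names come from this XIP; the `Client` struct, its field, and the in-memory behavior are hypothetical stand-ins for the real sync-group messaging.

```rust
// Hypothetical Client sketch: only enable_atomic, disable_atomic, and
// is_atomic come from the XIP; the struct and field are illustrative.

struct Client {
    atomic_limit: Option<usize>,
}

impl Client {
    fn new() -> Self {
        Client { atomic_limit: None }
    }

    /// Signals all installations (via a sync group message in the real
    /// protocol) to create atomic key packages with the given limit.
    fn enable_atomic(&mut self, limit: usize) {
        self.atomic_limit = Some(limit);
    }

    /// Signals all installations to disable atomic key packages.
    fn disable_atomic(&mut self) {
        self.atomic_limit = None;
    }

    fn is_atomic(&self) -> bool {
        self.atomic_limit.is_some()
    }
}
```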

## Rationale

### Alternative design: Keep things the way they are, and have each installation stream a subset of groups

The benefit of this alternative is that it is less complex on the surface, but it comes with some hidden costs.

- It does not reduce commit size at all.
- If an agent has 10 bots, the 9 bots that do not stream a group will still occupy leaf nodes in every group they're not participating in.
  - If that agent is part of 10k DMs, that's 9k leaf node slots effectively doing nothing. Every DM that would have had a ratchet tree depth of 2 (assuming the other member has only 1 installation) now has a depth of 5, going from 3 nodes to 31.
- Not only does it not reduce commit sizes, it actually increases them.
  - Inactive installations in groups still need to update their leaf nodes.
  - Stale leaf nodes increase commit sizes, because they need to be encrypted to recipients outside of the current tree.
- There's the deceptively complex issue of figuring out which installation will stream a new group when it arrives.
- When a dev removes an installation, they have to redistribute the groups it was a part of to the other installations, and those installations will have to "catch up".
  - This means XMTP has to keep commits around forever, which is currently planned, but it would be nice not to design ourselves into that corner.
- XMTP sync cursors are global per-originator. Omitting groups and later un-omitting them would complicate the sync logic.
- A well-tuned sqlite3 database can handle tens of gigabytes of data, probably more, but it is still worth proactively avoiding pushing XMTP toward that limit.

The core of this proposal is effectively a flag in the leaf node that tells other installations not to add missing installations for this inbox, because the atomic inbox will manage itself. By reducing the number of installations, we reduce the size of group ratchet trees, prevent stale nodes, and, by leveraging existing membership logic, avoid having to manage group_ids for streaming subsets of the group pool, keeping things as simple as possible.

The main benefit of streaming subsets of groups is that it keeps something agent-specific out of the way of the rest of the protocol. But I believe there are also uses of atomic membership outside of agents.

### More secure inboxes

Having a flag that says "Nah, I'm good. Let me manage my own installations." has a great security benefit for the extra paranoid. For example, if your computer gets hacked and you lose your root signing key, traditionally the attacker could use that root signing key to add a new installation, and other members (including the installation on your phone) would add the attacker's new installation to all of your groups.

But if your inbox is flagged as atomic, you could build a mechanism that requires approval before new installations are added to groups. You'd get a prompt to add a new installation you're not aware of, which of course you'd deny, and the attacker is left without the secret keys to your super secure group.

You would still need a new root signing key, and consequently a new inbox, but now you can be sure the bad actor will not gain access to your very private group. All you need to do is create a new inbox and add two proposals to the group:

1. Add new inbox.
2. Remove self.

However, there is still the attack vector of an attacker adding their own installation and then immediately revoking all of yours. That still needs to be solved for, but I think it could be addressed in an upcoming XIP, probably with some form of 2FA using a second signing key.

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

0 commit comments

Comments
 (0)