Quick definitions for terms used across the internals docs. Ably-specific concepts are marked with (Ably).
A lexicographically sortable string identifier that Ably assigns to every message on acceptance. Serials can be compared lexicographically to produce a total order over messages. However, this is not necessarily the order in which messages are delivered to subscribers - the only delivery-order guarantee is that messages published sequentially on the same realtime connection are always delivered in that same relative order, but they may interleave with messages published concurrently from other connections. The conversation tree uses serials as the primary ordering mechanism, and the decoder uses them to correlate appends back to the originating message.
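A minimal sketch of this ordering: a plain string comparison over serials yields the total order. The serial values below are made up for illustration; real serials are opaque strings whose format Ably may change.

```typescript
// Ordering messages by serial with a plain lexicographic string comparison.
interface WireMessage {
  serial: string;
  data: string;
}

function bySerial(a: WireMessage, b: WireMessage): number {
  // Lexicographic comparison produces the total order.
  return a.serial < b.serial ? -1 : a.serial > b.serial ? 1 : 0;
}

// Delivery order may interleave across connections, but sorting by serial
// recovers a single canonical order. (Serial values here are invented.)
const received: WireMessage[] = [
  { serial: "01700000000000-003@abc", data: "third" },
  { serial: "01700000000000-001@abc", data: "first" },
  { serial: "01700000000000-002@abc", data: "second" },
];

const ordered = received.slice().sort(bySerial).map((m) => m.data);
// ordered is ["first", "second", "third"]
```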
Ably supports updates, deletes, and appends on messages after publication. The AI Transport SDK uses message appends to stream LLM tokens - a message is created with a publish (which returns a serial), then receives appendMessage calls that add data incrementally, and ends with a closing append that sets the final state. Each token is appended to a single persistent message rather than published as a separate message.
The alternative is a discrete message - a single publish with no subsequent appends. User messages and lifecycle events are discrete.
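The two publication shapes can be sketched as follows. The `Channel` interface here is a stand-in for illustration, not the real SDK or Ably API.

```typescript
// Hypothetical channel interface, simplified for the sketch.
interface Channel {
  publish(data: string): Promise<{ serial: string }>;
  append(serial: string, delta: string, closing?: boolean): Promise<void>;
}

// Streamed message: one publish (which returns a serial), incremental
// appends for each token, then a closing append that sets the final state.
async function streamTokens(channel: Channel, tokens: string[]): Promise<string> {
  const { serial } = await channel.publish(""); // create the persistent message
  for (const token of tokens) {
    await channel.append(serial, token); // one append per LLM token
  }
  await channel.append(serial, "", true); // closing append: mark finished
  return serial;
}

// Discrete message: a single publish, no subsequent appends.
async function publishDiscrete(channel: Channel, text: string): Promise<string> {
  return (await channel.publish(text)).serial;
}
```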
The four operations that can happen to an Ably message:
| Action | Meaning |
|---|---|
| `message.create` | A new message was published |
| `message.append` | Data was appended to an existing message |
| `message.update` | An existing message's content was replaced entirely |
| `message.delete` | A message was deleted |
Subscribers receive these as the action field on inbound messages. The decoder switches on this field to determine how to process each message.
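A sketch of that dispatch: switching on the action field of an inbound message. The handler bodies are illustrative, not the decoder's actual internals.

```typescript
// The four wire-level actions a subscriber can receive.
type Action = "message.create" | "message.append" | "message.update" | "message.delete";

interface Inbound {
  action: Action;
  serial: string;
  data: string;
}

// Illustrative dispatch: the real decoder routes to its tracking and
// accumulation machinery instead of returning a description.
function describe(msg: Inbound): string {
  switch (msg.action) {
    case "message.create":
      return `new message ${msg.serial}`;
    case "message.append":
      return `append to ${msg.serial}: ${msg.data}`;
    case "message.update":
      return `replace ${msg.serial} entirely`;
    case "message.delete":
      return `delete ${msg.serial}`;
  }
}
```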
The act of connecting to an Ably channel. A channel transitions from initialized → attaching → attached. Once attached, the client receives live messages published to the channel. The client transport subscribes to the channel before calling attach() to ensure no messages are lost during the attach process.
A parameter on Ably's channel.history() API that fetches messages up to the exact point where the channel was attached. This guarantees gapless continuity - history ends precisely where the live subscription begins, with no duplicates and no gaps. See History hydration.
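A toy model of why subscribe-before-attach plus untilAttach history is gapless. The class below is a simulation, not ably-js; it only demonstrates that history ending at the attach point and the live subscription starting there leave no gap and no duplicate.

```typescript
// Toy channel: history covers everything before the attach point,
// the live subscription covers everything after it.
class ToyChannel {
  private log: string[] = [];        // everything ever published
  private attachPoint = -1;          // index of the first live message
  private listeners: ((m: string) => void)[] = [];

  publish(m: string): void {
    this.log.push(m);
    if (this.attachPoint >= 0) this.listeners.forEach((l) => l(m));
  }
  subscribe(l: (m: string) => void): void { this.listeners.push(l); }
  attach(): void { this.attachPoint = this.log.length; }
  historyUntilAttach(): string[] { return this.log.slice(0, this.attachPoint); }
}

const channel = new ToyChannel();
channel.publish("a");
channel.publish("b");                    // published before this client attached

const live: string[] = [];
channel.subscribe((m) => live.push(m));  // subscribe before attach
channel.attach();
channel.publish("c");                    // arrives via the live subscription

const all = [...channel.historyUntilAttach(), ...live];
// all is ["a", "b", "c"]: no gap, no duplicate
```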
Every Ably message has an extras field that can carry metadata. The AI Transport protocol stores all its headers in extras.headers - a Record<string, string> of key-value pairs. Both transport headers (x-ably-*) and domain headers (x-domain-*) live here.
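A small sketch of reading those headers and partitioning them by prefix. The message shape is simplified, and the specific header names in the test usage (beyond the two documented prefixes) are hypothetical.

```typescript
// Simplified extras shape: the protocol keeps all headers under
// extras.headers as string key-value pairs.
interface Extras {
  headers?: Record<string, string>;
}

// Partition headers into transport (x-ably-*) and domain (x-domain-*).
function splitHeaders(extras: Extras) {
  const transport: Record<string, string> = {};
  const domain: Record<string, string> = {};
  for (const [k, v] of Object.entries(extras.headers ?? {})) {
    if (k.startsWith("x-ably-")) transport[k] = v;
    else if (k.startsWith("x-domain-")) domain[k] = v;
  }
  return { transport, domain };
}
```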
The SDK has two layers with a strict boundary:
- Transport layer - generic machinery shared by all codecs. Handles turn lifecycle, stream routing, optimistic reconciliation, cancel signals, and conversation tree management. Uses `x-ably-*` headers. Lives in `src/core/transport/`.
- Domain layer - framework-specific encoding/decoding. Maps between domain events (e.g. Vercel's `UIMessageChunk`) and Ably messages. Uses `x-domain-*` headers. Lives in codec implementations (e.g. `src/vercel/codec/`).
The codec interface is the boundary between these layers.
When the client transport receives messages from the channel, it routes them differently depending on who started the turn:
- Own turn - a turn this client initiated (via `view.send()`, `view.regenerate()`, `view.edit()`). Decoded events are routed to both the stream router (which enqueues them on a `ReadableStream`) and a per-turn accumulator (which builds complete messages for the conversation tree). The stream exists primarily as an integration seam for framework adapters (e.g. Vercel's `useChat()`); most application code consumes accumulated messages via the view.
- Observer turn - a turn started by another client. Decoded events go to the accumulator only - there is no stream because no caller on this client initiated the turn.
Both paths use the same accumulation logic. The only difference is that own turns additionally expose a ReadableStream for framework integration. See Message lifecycle for the full routing picture.
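The routing decision can be sketched as follows. The router and accumulator interfaces are stand-ins for the SDK's real components.

```typescript
// Stand-in interfaces for the two sinks.
interface StreamRouter { enqueue(turnId: string, event: unknown): void; }
interface Accumulator { add(event: unknown): void; }

// Both paths accumulate; only own turns additionally feed the stream.
function route(
  event: { turnId: string },
  ownTurnIds: Set<string>,
  router: StreamRouter,
  accumulator: Accumulator,
): void {
  accumulator.add(event);
  if (ownTurnIds.has(event.turnId)) {
    router.enqueue(event.turnId, event); // own turn: also expose the stream
  }
}
```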
Two different identity headers serve different purposes:
- Turn ID (`x-ably-turn-id`) - groups all messages in one request-response cycle. A single turn may produce multiple messages (user message, assistant text, lifecycle events). Used for cancellation scope, active turn tracking, and stream routing.
- Message ID (`x-ably-msg-id`) - uniquely identifies a single domain message (a `crypto.randomUUID()` generated by the client or server transport). Used for optimistic reconciliation, accumulator routing, and conversation tree node identity. For streamed messages, every append carries the same message ID so the entire message append lifecycle shares one identity.
A turn contains one or more messages. A message belongs to exactly one turn. See Wire protocol: message identity for the full lifecycle.
An event that signals the end of a stream. For the Vercel codec, terminal events are finish, error, and abort signals. The stream router uses the codec's isTerminal() predicate to automatically close the ReadableStream when a terminal event arrives. The decoder checks x-ably-status for "finished" or "aborted" to detect terminal state on the wire.
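A sketch of a codec-style terminal predicate and how a router might use it to close a stream. The chunk type names mirror the terminal events named above; the controller wiring is simplified.

```typescript
// Illustrative chunk types; real codecs have many more.
type ChunkType = "text-delta" | "finish" | "error" | "abort";

// Codec-style predicate: which events end the stream.
const isTerminal = (t: ChunkType): boolean =>
  t === "finish" || t === "error" || t === "abort";

// Simplified delivery: enqueue every chunk, close on a terminal one.
function deliver(
  controller: { enqueue(t: ChunkType): void; close(): void },
  chunk: ChunkType,
): void {
  controller.enqueue(chunk);
  if (isTerminal(chunk)) controller.close();
}
```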
An async operation where the caller does not await the result. The promise is collected but errors are handled later in batch (or logged and discarded). The encoder uses fire-and-forget for append operations - each token delta is sent without waiting for acknowledgement, and failures are caught during flush. The client transport's HTTP POST is also fire-and-forget - the stream is available immediately from the channel subscription, not the HTTP response.
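The pattern can be sketched as a batch of unawaited promises whose failures surface at flush time. This is an illustrative shape, not the encoder's actual code; the append callback stands in for the real network call.

```typescript
// Collect promises without awaiting; report failures in batch at flush.
class PendingAppends {
  private pending: Promise<void>[] = [];

  fire(append: () => Promise<void>): void {
    // No await here: the token delta goes out without blocking the caller.
    this.pending.push(append());
  }

  async flush(): Promise<unknown[]> {
    const results = await Promise.allSettled(this.pending);
    this.pending = [];
    return results
      .filter((r): r is PromiseRejectedResult => r.status === "rejected")
      .map((r) => r.reason);
  }
}
```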
The decoder's strategy for handling message.update on a tracked stream. When an update arrives, the decoder checks: does the new data start with the text already accumulated? If yes (prefix match), it extracts just the new delta (data.slice(accumulated.length)) and emits delta events. If no (not a prefix), the message was fully replaced (e.g. encoder recovery) and the decoder resets its tracker.
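The check itself is a one-liner; the sketch below returns the new suffix on a prefix match and null to signal a full replacement (tracker reset). The function name is illustrative.

```typescript
// Prefix match: if the update extends what was already accumulated,
// return just the new delta; otherwise signal a full replacement.
function extractDelta(updated: string, accumulated: string): string | null {
  if (updated.startsWith(accumulated)) {
    return updated.slice(accumulated.length); // just the new delta
  }
  return null; // not a prefix: message was fully replaced, reset the tracker
}
```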
When the decoder receives an update for a serial it has never seen - the stream started before this client subscribed (e.g. history, reconnect, late join). The decoder synthesizes the full event sequence from the update: start events, delta events (if data is present), and end events (if status is "finished"). This allows late-joining clients to reconstruct the stream state.
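A sketch of that synthesis. The event representation is illustrative; the point is that a single unseen update is enough to reconstruct start, delta, and (when finished) end events.

```typescript
// Simplified update shape for a serial the decoder has never tracked.
interface UnseenUpdate {
  serial: string;
  data?: string;
  status?: "streaming" | "finished";
}

// Synthesize the full event sequence from one update.
function synthesize(update: UnseenUpdate): string[] {
  const events: string[] = ["start"];                    // stream began before we joined
  if (update.data) events.push(`delta:${update.data}`);  // data accumulated so far
  if (update.status === "finished") events.push("end");  // already terminal
  return events;
}
```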
When a client calls send(), it inserts an optimistic message into the conversation tree (with no serial). The server then relays that message onto the channel, and all clients - including the sender - receive it. The sending client matches the relayed message by x-ably-msg-id and reconciles the optimistic entry with the server-assigned serial (serial promotion) rather than creating a duplicate.
The original message in a sibling group - the message at the root of the forkOf chain. When forks chain transitively (B forks A, C forks B), the group root is A. Sibling selections are stored by the group root's msgId.
When an optimistic message (null serial) receives a server-assigned serial via optimistic reconciliation, the conversation tree removes it from its current position (end of the sorted list) and re-inserts it at the correct serial-order position. See conversation tree upsert.
The streaming fragment type that the generic layer is parameterized by. For the Vercel codec, this is UIMessageChunk. Events are the unit of real-time streaming - individually meaningless fragments (a text delta, a finish event) that must be accumulated into a complete message. The decoder produces events; the stream router delivers them to own-turn consumers; the accumulator assembles them into TMessage instances.
The complete domain message type that the generic layer is parameterized by. For the Vercel codec, this is UIMessage. Messages are the unit of state - what the conversation tree stores, what the view's flattenNodes() returns, what React hooks render. The accumulator bridges TEvent → TMessage; the encoder bridges TMessage → wire (for discrete publishes like user messages). See Message lifecycle for the full relationship.
A codec-provided component that assembles decoder outputs into complete domain messages. Needed because one domain message is built from many wire messages - a streamed assistant response may produce dozens of Ably messages (create + N appends + close) that must be assembled into a single TMessage. Used in two contexts: live observer turns (working buffer, snapshots upserted into tree on every event) and history decoding (collect only completed messages). See Accumulator for the full explanation.
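A minimal sketch of the accumulation idea: many wire events per msgId are assembled into one complete message, surfaced only once the stream ends. Here TMessage is just a string body; real codecs build richer domain messages.

```typescript
// Toy accumulator: buffers deltas per msgId, completes on an end event.
class SketchAccumulator {
  private buffers = new Map<string, string>();
  readonly completed = new Map<string, string>();

  add(msgId: string, event: { type: "delta" | "end"; text?: string }): void {
    if (event.type === "delta") {
      // Working buffer: one domain message built from many wire messages.
      this.buffers.set(msgId, (this.buffers.get(msgId) ?? "") + (event.text ?? ""));
    } else {
      // Terminal event: promote the buffer to a completed message.
      this.completed.set(msgId, this.buffers.get(msgId) ?? "");
      this.buffers.delete(msgId);
    }
  }
}
```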
The act of producing a flat message list from the conversation tree via flattenNodes(), which returns MessageNode<TMessage>[]. The View caches the result and returns it in O(1) on subsequent calls. The cache is refreshed when the tree structure changes (new nodes, deletions, selection changes, history reveal). All consumers go through the view's flattenNodes(): React hooks, send() (for the HTTP POST body), view.loadOlder() (for pagination snapshots). See Message lifecycle.
view.flattenNodes() - the sole path from tree state to a message array. Returns the View's cached node list in O(1). The cache is rebuilt by an internal _computeFlatNodes() method that walks the sorted node list, checks parent reachability and sibling selection, and produces the linear message sequence for the currently selected conversation path. (flattenNodes() on TreeInternal does the actual tree walk; the View's public method returns cached results.) See Conversation tree: flatten.