Commit be1b49d

Introduce graph sync crate for fast-forwarding through gossip data downloaded from a server.
1 parent c244c78 commit be1b49d

File tree

11 files changed: +891 −25 lines


.github/workflows/build.yml

+1
@@ -138,6 +138,7 @@ jobs:
       run: |
         cargo test --verbose --color always -p lightning
         cargo test --verbose --color always -p lightning-invoice
+        cargo test --verbose --color always -p lightning-graph-sync
         cargo build --verbose --color always -p lightning-persister
         cargo build --verbose --color always -p lightning-background-processor
     - name: Test C Bindings Modifications on Rust ${{ matrix.toolchain }}

Cargo.toml

+1
@@ -7,6 +7,7 @@ members = [
     "lightning-net-tokio",
     "lightning-persister",
     "lightning-background-processor",
+    "lightning-graph-sync"
 ]

 exclude = [

lightning-graph-sync/Cargo.toml

+16
@@ -0,0 +1,16 @@
[package]
name = "lightning-graph-sync"
version = "0.0.104"
authors = ["Arik Sosman <[email protected]>"]
license = "MIT OR Apache-2.0"
repository = "https://github.com/lightningdevkit/rust-lightning"
edition = "2018"
description = """
Utility to fetch gossip routing data from LNSync or LNSync-like server
"""

# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html

[dependencies]
lightning = { version = "0.0.105", path = "../lightning" }
bitcoin = { version = "0.27", default-features = false, features = ["secp-recovery"] }

lightning-graph-sync/README.md

+91
@@ -0,0 +1,91 @@
# lightning-graph-sync

This crate exposes functionality for rapid gossip graph syncing, aimed primarily at mobile clients.

## Mechanism

The (presumed) server sends a compressed response containing gossip data. The data is formatted compactly:
signatures are omitted, and it is sent incrementally wherever possible.

Essentially, the serialization structure is as follows:
1. Fixed prefix bytes `76, 68, 75, 1` (the first three bytes are ASCII for `LDK`)
   - The purpose of this prefix is to identify the serialization format, should other rapid gossip sync formats arise
     in the future.
   - The fourth byte is the protocol version in case our format gets updated
2. Chain hash (32 bytes)
3. Latest seen timestamp (`u32`)
4. A `u64` indicating the number of node IDs to follow
5. `[PublicKey]` (array of compressed 33-byte node IDs)
6. A `u64` indicating the number of channel announcement messages to follow
7. `[CustomChannelAnnouncement]` (array of significantly stripped down channel announcements)
8. A `u64` indicating the number of channel update messages to follow
9. A `u8` flagging whether non-incremental updates are present (if yes, it is set to `1`)
   - If present, the following default values are added:
     1. `default_cltv_expiry_delta`: `u16`
     2. `default_htlc_minimum_msat`: `u64`
     3. `default_fee_base_msat`: `u32`
     4. `default_fee_proportional_millionths`: `u32`
     5. `default_htlc_maximum_msat`: `u64` (if the default is no maximum, `u64::MAX`)
   - The defaults are calculated by the server based on the value frequencies among the non-incremental updates within
     this particular message
10. `[CustomChannelUpdate]`
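As a concrete illustration of steps 1–3 above, parsing the fixed header might look like the following sketch. Note that `parse_header` is a hypothetical helper, not this crate's actual API, and the big-endian integer encoding is an assumption:

```rust
use std::convert::TryInto;

/// Parse the fixed header: "LDK" prefix, version byte, chain hash, timestamp.
/// Hypothetical sketch; the crate's real parsing code is not shown in this commit
/// excerpt, and big-endian field encoding is an assumption.
fn parse_header(data: &[u8]) -> Result<(u8, [u8; 32], u32), &'static str> {
	// Prefix (4) + chain hash (32) + timestamp (4) = 40 bytes minimum.
	if data.len() < 40 {
		return Err("truncated header");
	}
	// 1. Fixed prefix bytes 76, 68, 75 ("LDK") identify the format...
	if &data[0..3] != b"LDK" {
		return Err("unknown serialization format");
	}
	// ...and the fourth byte carries the protocol version.
	let version = data[3];
	// 2. Chain hash (32 bytes).
	let chain_hash: [u8; 32] = data[4..36].try_into().unwrap();
	// 3. Latest seen timestamp (u32).
	let timestamp = u32::from_be_bytes(data[36..40].try_into().unwrap());
	Ok((version, chain_hash, timestamp))
}

fn main() {
	let mut buf = vec![76u8, 68, 75, 1]; // "LDK" + version 1
	buf.extend_from_slice(&[0u8; 32]); // placeholder chain hash
	buf.extend_from_slice(&1_640_000_000u32.to_be_bytes());
	let (version, _hash, timestamp) = parse_header(&buf).unwrap();
	println!("version={} timestamp={}", version, timestamp);
}
```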
You will also notice that `NodeAnnouncement` messages are skipped altogether.

The data is then applied to the current network graph, artificially backdated 7 days from the current time to make sure
more recent updates obtained directly from gossip are not accidentally overwritten.
### CustomChannelAnnouncements

To achieve compactness and avoid data repetition, we're sending a significantly stripped down version of the channel
announcement message, which contains only the following data:

1. `channel_features`: `u16` + `n`, where `n` is the number of bytes indicated by the first `u16`
2. `short_channel_id`: `CompactSize` (incremental `CompactSize` deltas starting from 0)
3. `node_id_1_index`: `CompactSize` (index of the node ID within the previously sent sequence)
4. `node_id_2_index`: `CompactSize` (index of the node ID within the previously sent sequence)
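The incremental `short_channel_id` encoding in item 2 can be sketched as follows. This is illustrative only: `read_compact_size` and `reconstruct_scids` are hypothetical helpers assuming the Bitcoin-style `CompactSize` varint, not this crate's API:

```rust
use std::convert::TryInto;

/// Bitcoin-style CompactSize decoder (sketch): values below 0xFD fit in one
/// byte; 0xFD/0xFE/0xFF introduce little-endian u16/u32/u64 payloads.
fn read_compact_size(data: &[u8], pos: &mut usize) -> Option<u64> {
	let first = *data.get(*pos)?;
	*pos += 1;
	let (len, value) = match first {
		0xFD => (2, u16::from_le_bytes(data.get(*pos..*pos + 2)?.try_into().ok()?) as u64),
		0xFE => (4, u32::from_le_bytes(data.get(*pos..*pos + 4)?.try_into().ok()?) as u64),
		0xFF => (8, u64::from_le_bytes(data.get(*pos..*pos + 8)?.try_into().ok()?)),
		n => (0, n as u64),
	};
	*pos += len;
	Some(value)
}

/// Short channel IDs arrive as deltas against the previous ID, starting at 0,
/// so the absolute IDs are a running sum over the decoded deltas.
fn reconstruct_scids(data: &[u8], count: usize) -> Option<Vec<u64>> {
	let mut pos = 0;
	let mut current = 0u64;
	let mut out = Vec::with_capacity(count);
	for _ in 0..count {
		current = current.checked_add(read_compact_size(data, &mut pos)?)?;
		out.push(current);
	}
	Some(out)
}

fn main() {
	// Channels 250 and 1250 encode as deltas [250, 1000]:
	// 250 fits in one byte, while 1000 needs the 0xFD (u16) form.
	let wire = [250u8, 0xFD, 0xE8, 0x03];
	assert_eq!(reconstruct_scids(&wire, 2), Some(vec![250, 1250]));
}
```

Sorting channels by ID before encoding keeps every delta non-negative and small, which is what makes this encoding compact.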
### CustomChannelUpdate

For the purpose of rapid syncing, we have deviated significantly from the channel update format specified in BOLT 7.
Our custom channel updates are structured as follows:

1. `short_channel_id`: `CompactSize` (incremental `CompactSize` deltas starting at 0)
2. `custom_channel_flags`: `u8`
3. `update_data`

Specifically, our custom channel flags break down like this:

| 128                 | 64 | 32 | 16 | 8 | 4 | 2                | 1         |
|---------------------|----|----|----|---|---|------------------|-----------|
| Incremental update? |    |    |    |   |   | Disable channel? | Direction |

If the most significant bit is set to `1`, indicating an incremental update, the intermediate bit flags assume the
following meaning:

| 64                              | 32                              | 16                          | 8                                         | 4                               |
|---------------------------------|---------------------------------|-----------------------------|-------------------------------------------|---------------------------------|
| `cltv_expiry_delta` has changed | `htlc_minimum_msat` has changed | `fee_base_msat` has changed | `fee_proportional_millionths` has changed | `htlc_maximum_msat` has changed |

If the most significant bit is set to `0`, the meaning is almost identical, except instead of a change, the flags now
represent a deviation from the defaults sent at the beginning of the update sequence.

In both cases, `update_data` only contains the fields that the channel flags indicate to be non-standard or to
have changed.
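The bit layout in the tables above can be decoded as in this sketch. The struct and function names are illustrative, not the crate's actual types:

```rust
/// Decoded view of `custom_channel_flags` (hypothetical type for illustration).
#[derive(Debug, PartialEq)]
struct ChannelFlags {
	incremental: bool,                 // bit 128: incremental update?
	disabled: bool,                    // bit 2: disable channel?
	direction: u8,                     // bit 1: direction
	cltv_expiry_delta: bool,           // bit 64
	htlc_minimum_msat: bool,           // bit 32
	fee_base_msat: bool,               // bit 16
	fee_proportional_millionths: bool, // bit 8
	htlc_maximum_msat: bool,           // bit 4
}

fn parse_flags(flags: u8) -> ChannelFlags {
	ChannelFlags {
		incremental: flags & 128 != 0,
		disabled: flags & 2 != 0,
		direction: flags & 1,
		// When the MSB is 1 these bits mean "has changed"; when 0, they mean
		// "deviates from the defaults sent at the start of the sequence".
		cltv_expiry_delta: flags & 64 != 0,
		htlc_minimum_msat: flags & 32 != 0,
		fee_base_msat: flags & 16 != 0,
		fee_proportional_millionths: flags & 8 != 0,
		htlc_maximum_msat: flags & 4 != 0,
	}
}

fn main() {
	// Incremental update (128) with a changed fee_base_msat (16), direction 1.
	let f = parse_flags(128 | 16 | 1);
	assert!(f.incremental && f.fee_base_msat);
	assert_eq!(f.direction, 1);
	assert!(!f.disabled);
}
```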
## Delta Calculation

To calculate this rapid gossip sync data, the server uses two reference data points that are meant to be provided by
the client: `latest_announcement_blockheight` and `latest_update_timestamp`.

Based on `latest_announcement_blockheight`, the server only sends channel announcements that occurred at or after that
block height.

Based on `latest_update_timestamp`, the server fetches all channel updates that occurred at or after the timestamp.
Then, for each of those updates, the server also checks whether there had been a previous update for the same channel
prior to the given timestamp.

If a particular channel had never been updated before, the full update is sent. If a channel has had updates prior
to the provided timestamp, the latest update before the timestamp is taken as a reference, and the delta is calculated
against it.
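To make the selection rule above concrete, a server-side sketch might look like this. This is hypothetical: the crate in this commit only consumes the data, and all names here are invented for illustration:

```rust
/// Simplified stand-in for a channel update (hypothetical type).
#[derive(Clone, Debug)]
struct Update {
	scid: u64,
	timestamp: u32,
	fee_base_msat: u32,
}

/// For each update at or after the client's timestamp, find the latest update
/// for the same channel strictly before that timestamp. `None` means the full
/// update must be sent; `Some(reference)` means a delta can be computed.
fn updates_to_send(all: &[Update], latest_update_timestamp: u32) -> Vec<(Update, Option<Update>)> {
	all.iter()
		.filter(|u| u.timestamp >= latest_update_timestamp)
		.map(|u| {
			let reference = all
				.iter()
				.filter(|r| r.scid == u.scid && r.timestamp < latest_update_timestamp)
				.max_by_key(|r| r.timestamp)
				.cloned();
			(u.clone(), reference)
		})
		.collect()
}

fn main() {
	let all = vec![
		Update { scid: 1, timestamp: 10, fee_base_msat: 100 },
		Update { scid: 1, timestamp: 20, fee_base_msat: 200 },
		Update { scid: 2, timestamp: 20, fee_base_msat: 300 },
	];
	for (update, reference) in updates_to_send(&all, 15) {
		match reference {
			// Channel 1 had an update before the timestamp: send a delta.
			Some(r) => println!("scid {}: delta against t={}", update.scid, r.timestamp),
			// Channel 2 did not: send the full update.
			None => println!("scid {}: full update", update.scid),
		}
	}
}
```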

lightning-graph-sync/src/error.rs

+33
@@ -0,0 +1,33 @@
use lightning::ln::msgs::{DecodeError, LightningError};

/// All-encompassing standard error type that processing can return
pub enum GraphSyncError {
	/// IO error wrapper, typically the result of an issue with the file system
	IOError(std::io::Error),
	/// Error trying to read the update data, typically due to an erroneous data length indication
	/// that is greater than the actual amount of data provided
	DecodeError(DecodeError),
	/// Error applying the patch to the network graph, usually the result of updates that are too
	/// old or that arrive out of order, missing their prerequisite data
	LightningError(LightningError),
	/// Some other error whose nature is indicated in its descriptor string
	ProcessingError(String),
}

impl From<std::io::Error> for GraphSyncError {
	fn from(error: std::io::Error) -> Self {
		Self::IOError(error)
	}
}

impl From<DecodeError> for GraphSyncError {
	fn from(error: DecodeError) -> Self {
		Self::DecodeError(error)
	}
}

impl From<LightningError> for GraphSyncError {
	fn from(error: LightningError) -> Self {
		Self::LightningError(error)
	}
}

lightning-graph-sync/src/lib.rs

+105
@@ -0,0 +1,105 @@
#![deny(missing_docs)]
#![deny(unsafe_code)]
#![deny(broken_intra_doc_links)]
#![deny(non_upper_case_globals)]
#![deny(non_camel_case_types)]
#![deny(non_snake_case)]
#![deny(unused_mut)]
#![deny(unused_variables)]
#![deny(unused_imports)]

//! This crate exposes functionality to rapidly sync gossip data, aimed primarily at mobile
//! devices.

use std::fs::File;

use lightning::routing::network_graph;

use crate::error::GraphSyncError;

/// Error types that these functions can return
pub mod error;

/// Core functionality of this crate
pub mod processing;

/// Sync gossip data from a file
///
/// `network_graph`: The network graph to apply the updates to
///
/// `sync_path`: Path to the file where the gossip update data is located
///
pub fn sync_network_graph_with_file_path(
	network_graph: &network_graph::NetworkGraph,
	sync_path: &str,
) -> Result<(), GraphSyncError> {
	let mut file = File::open(sync_path)?;
	processing::read_network_graph(&network_graph, &mut file)
}

#[cfg(test)]
mod tests {
	use std::fs;

	use bitcoin::blockdata::constants::genesis_block;
	use bitcoin::Network;

	use lightning::routing::network_graph::NetworkGraph;

	use crate::sync_network_graph_with_file_path;

	#[test]
	fn test_sync_from_file() {
		// same as incremental_only_update_fails_without_prior_same_direction_updates
		// (OldestDataDirection1, timestamp 0)
		let valid_response = vec![
			76, 68, 75, 1, 111, 226, 140, 10, 182, 241, 179, 114, 193, 166, 162, 70, 174, 99, 247,
			79, 147, 30, 131, 101, 225, 90, 8, 156, 104, 214, 25, 0, 0, 0, 0, 0, 97, 227, 98, 218,
			0, 0, 0, 4, 2, 22, 7, 207, 206, 25, 164, 197, 231, 230, 231, 56, 102, 61, 250, 251,
			187, 172, 38, 46, 79, 247, 108, 44, 155, 48, 219, 238, 252, 53, 192, 6, 67, 2, 36, 125,
			157, 176, 223, 175, 234, 116, 94, 248, 201, 225, 97, 235, 50, 47, 115, 172, 63, 136,
			88, 216, 115, 11, 111, 217, 114, 84, 116, 124, 231, 107, 2, 158, 1, 242, 121, 152, 106,
			204, 131, 186, 35, 93, 70, 216, 10, 237, 224, 183, 89, 95, 65, 3, 83, 185, 58, 138,
			181, 64, 187, 103, 127, 68, 50, 2, 201, 19, 17, 138, 136, 149, 185, 226, 156, 137, 175,
			110, 32, 237, 0, 217, 90, 31, 100, 228, 149, 46, 219, 175, 168, 77, 4, 143, 38, 128,
			76, 97, 0, 0, 0, 2, 0, 0, 255, 8, 153, 192, 0, 2, 27, 0, 0, 0, 1, 0, 0, 255, 2, 68,
			226, 0, 6, 11, 0, 1, 2, 3, 0, 0, 0, 2, 1, 0, 40, 0, 0, 0, 0, 0, 0, 3, 232, 0, 0, 3,
			232, 0, 0, 0, 1, 0, 0, 0, 0, 29, 129, 25, 192, 255, 8, 153, 192, 0, 2, 27, 0, 0, 29, 0,
			0, 0, 1, 0, 0, 0, 125, 0, 0, 0, 0, 58, 85, 116, 216, 255, 2, 68, 226, 0, 6, 11, 0, 1,
			1,
		];

		fs::create_dir_all("./tmp/graph-sync-tests").unwrap();
		fs::write("./tmp/graph-sync-tests/test_data.lngossip", valid_response).unwrap();

		let block_hash = genesis_block(Network::Bitcoin).block_hash();
		let network_graph = NetworkGraph::new(block_hash);

		let before = network_graph.to_string();
		assert_eq!(before.len(), 31);

		let sync_result = sync_network_graph_with_file_path(
			&network_graph,
			"./tmp/graph-sync-tests/test_data.lngossip",
		);

		assert!(sync_result.is_ok());

		let after = network_graph.to_string();
		assert_eq!(after.len(), 1727);
		assert!(
			after.contains("021607cfce19a4c5e7e6e738663dfafbbbac262e4ff76c2c9b30dbeefc35c00643:")
		);
		assert!(
			after.contains("02247d9db0dfafea745ef8c9e161eb322f73ac3f8858d8730b6fd97254747ce76b:")
		);
		assert!(
			after.contains("029e01f279986acc83ba235d46d80aede0b7595f410353b93a8ab540bb677f4432:")
		);
		assert!(
			after.contains("02c913118a8895b9e29c89af6e20ed00d95a1f64e4952edbafa84d048f26804c61:")
		);
		assert!(after.contains("channels: [619737530008010752]"));
		assert!(after.contains("channels: [783241506229452801]"));
	}
}
