
Commit f7e8c9f

wvanlint authored and TheBlueMatt committed
Batch on-chain claims more aggressively per channel
When batch claiming was first added, it was only done so for claims which were not pinnable, i.e. those which can only be claimed by us.

This was the conservative choice - pinning of outputs claimed by a batch would leave the entire batch unable to confirm on-chain. However, if pinning is considered an attack that can be executed with a high probability of success, then there is no reason not to batch claims of pinnable outputs together, separate from unpinnable outputs.

Whether specific outputs are pinnable can change over time - those that are not pinnable will eventually become pinnable at the height at which our counterparty can spend them. Outputs are treated as pinnable if they're within `COUNTERPARTY_CLAIMABLE_WITHIN_BLOCKS_PINNABLE` of that height.

Aside from outputs being pinnable or not, locktimes are also a factor for batching claims. HTLC-timeout claims have locktimes fixed by the counterparty's signature and thus can only be aggregated with other HTLCs of the same CLTV, which we have to check for.

The complexity required here is worth it - aggregation can save users a significant amount of fees in the case of a force-closure, and directly impacts the number of UTXOs needed as a reserve for anchors.

Co-authored-by: Matt Corallo <[email protected]>
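The grouping rule described above — pinnable outputs batched separately from unpinnable ones, with HTLC-timeout claims only aggregatable when their locktimes match — can be sketched roughly as follows. All names and types here are illustrative stand-ins, not LDK's actual API:

```rust
/// Hypothetical stand-in for LDK's pinnability window constant.
const COUNTERPARTY_CLAIMABLE_WITHIN_BLOCKS_PINNABLE: u32 = 12;

/// Claims may only be batched together when their aggregation keys match.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum AggregationKey {
    /// Pinnable outputs batch together, but HTLC-timeout claims carry a
    /// counterparty-signed locktime, so they only merge with the same CLTV.
    Pinnable { locktime: Option<u32> },
    /// Outputs claimable only by us for a while yet form their own batch.
    Unpinnable,
}

fn aggregation_key(
    counterparty_spendable_height: u32, // height at which the counterparty can claim
    cur_height: u32,
    htlc_timeout_locktime: Option<u32>, // Some(cltv) for HTLC-timeout claims
) -> AggregationKey {
    // An output becomes pinnable once the counterparty will be able to
    // claim it within the pinnability window.
    if counterparty_spendable_height <= cur_height + COUNTERPARTY_CLAIMABLE_WITHIN_BLOCKS_PINNABLE {
        AggregationKey::Pinnable { locktime: htlc_timeout_locktime }
    } else {
        AggregationKey::Unpinnable
    }
}

fn main() {
    // Two HTLC-timeouts with the same CLTV, both pinnable: same batch.
    assert_eq!(
        aggregation_key(105, 100, Some(500)),
        aggregation_key(110, 100, Some(500)),
    );
    // Different CLTVs cannot be merged even if both are pinnable.
    assert_ne!(
        aggregation_key(105, 100, Some(500)),
        aggregation_key(105, 100, Some(501)),
    );
    // An output the counterparty cannot touch for a long time is unpinnable.
    assert_eq!(aggregation_key(200, 100, None), AggregationKey::Unpinnable);
}
```

Note how a pinnable output with no locktime constraint (`locktime: None`) still only merges with other unconstrained pinnable claims, which matches the commit's point that locktimes are a second, independent batching factor.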
1 parent a177511 commit f7e8c9f

File tree: 5 files changed (+824 −668 lines changed)


lightning/src/chain/channelmonitor.rs (+10 −3)
```diff
@@ -223,9 +223,16 @@ impl_writeable_tlv_based!(HTLCUpdate, {
 	(4, payment_preimage, option),
 });
 
-/// If an HTLC expires within this many blocks, don't try to claim it in a shared transaction,
-/// instead claiming it in its own individual transaction.
-pub(crate) const CLTV_SHARED_CLAIM_BUFFER: u32 = 12;
+/// If an output goes from claimable only by us to claimable by us or our counterparty within this
+/// many blocks, we consider it pinnable for the purposes of aggregating claims in a single
+/// transaction.
+pub(crate) const COUNTERPARTY_CLAIMABLE_WITHIN_BLOCKS_PINNABLE: u32 = 12;
+
+/// When we go to force-close a channel because an HTLC is expiring, we should ensure that the
+/// HTLC(s) expiring are not considered pinnable, allowing us to aggregate them with other HTLC(s)
+/// expiring at the same time.
+const _HTLCS_NOT_PINNABLE_ON_CLOSE: u32 = CLTV_CLAIM_BUFFER - COUNTERPARTY_CLAIMABLE_WITHIN_BLOCKS_PINNABLE;
+
 /// If an HTLC expires within this many blocks, force-close the channel to broadcast the
 /// HTLC-Success transaction.
 /// In other words, this is an upper bound on how many blocks we think it can take us to get a
```
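The `_HTLCS_NOT_PINNABLE_ON_CLOSE` constant in the diff above is a compile-time assertion: evaluating the subtraction in a `const` item is a hard error if it would underflow, so the crate only compiles while `CLTV_CLAIM_BUFFER >= COUNTERPARTY_CLAIMABLE_WITHIN_BLOCKS_PINNABLE`. A minimal standalone sketch of the pattern (the values here are illustrative, not necessarily LDK's):

```rust
// Static-assert-via-underflow: a const subtraction that would underflow
// fails to compile, enforcing CLTV_CLAIM_BUFFER >= the pinnability window.
// Values are illustrative placeholders, not LDK's actual constants.
const CLTV_CLAIM_BUFFER: u32 = 18;
const COUNTERPARTY_CLAIMABLE_WITHIN_BLOCKS_PINNABLE: u32 = 12;

// Compiles only while the force-close buffer is at least as large as the
// pinnability window, so HTLCs we force-close on are still unpinnable.
const _HTLCS_NOT_PINNABLE_ON_CLOSE: u32 =
    CLTV_CLAIM_BUFFER - COUNTERPARTY_CLAIMABLE_WITHIN_BLOCKS_PINNABLE;

fn main() {
    // With the placeholder values above, the slack is 6 blocks.
    assert_eq!(_HTLCS_NOT_PINNABLE_ON_CLOSE, 6);
}
```

The leading underscore keeps the unused-constant lint quiet; the constant exists only for its compile-time evaluation, not for use at runtime.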

lightning/src/chain/onchaintx.rs (+2 −2)
```diff
@@ -772,7 +772,7 @@ impl<ChannelSigner: EcdsaChannelSigner> OnchainTxHandler<ChannelSigner> {
 			for j in 0..i {
 				if requests[i].can_merge_with(&requests[j], cur_height) {
 					let merge = requests.remove(i);
-					if let Err(rejected) = requests[j].merge_package(merge) {
+					if let Err(rejected) = requests[j].merge_package(merge, cur_height) {
 						requests.insert(i, rejected);
 					} else {
 						break;
@@ -1104,7 +1104,7 @@ impl<ChannelSigner: EcdsaChannelSigner> OnchainTxHandler<ChannelSigner> {
 			OnchainEvent::ContentiousOutpoint { package } => {
 				if let Some(pending_claim) = self.claimable_outpoints.get(package.outpoints()[0]) {
 					if let Some(request) = self.pending_claim_requests.get_mut(&pending_claim.0) {
-						assert!(request.merge_package(package).is_ok());
+						assert!(request.merge_package(package, height).is_ok());
 						// Using a HashMap guarantee us than if we have multiple outpoints getting
 						// resurrected only one bump claim tx is going to be broadcast
 						bump_candidates.insert(pending_claim.clone(), request.clone());
```
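The diff threads the current height into `merge_package` so that merge decisions can account for time-varying pinnability. The surrounding remove/merge/reinsert loop has a notable shape: the candidate is pulled out of the vector first, and handed back via `Err(rejected)` if the merge is refused. A simplified, self-contained sketch of that pattern (types and method bodies here are illustrative stand-ins, not LDK's actual `PackageTemplate`):

```rust
// Sketch of the merge loop pattern from the diff: remove a claim request,
// try to fold it into an earlier one, and reinsert it if the merge is
// rejected. All types here are simplified placeholders, not LDK's.
#[derive(Clone, Debug, PartialEq)]
struct ClaimRequest {
    cltv: u32,           // locktime constraint for HTLC-timeout claims
    outpoints: Vec<u32>, // outputs this request would claim
}

impl ClaimRequest {
    fn can_merge_with(&self, other: &ClaimRequest, _cur_height: u32) -> bool {
        // Simplified rule: only claims with identical locktimes aggregate.
        self.cltv == other.cltv
    }

    fn merge_package(&mut self, other: ClaimRequest, cur_height: u32) -> Result<(), ClaimRequest> {
        if !self.can_merge_with(&other, cur_height) {
            // Hand the package back so the caller can reinsert it unchanged.
            return Err(other);
        }
        self.outpoints.extend(other.outpoints);
        Ok(())
    }
}

fn aggregate(requests: &mut Vec<ClaimRequest>, cur_height: u32) {
    let mut i = 0;
    while i < requests.len() {
        let mut merged = false;
        for j in 0..i {
            if requests[i].can_merge_with(&requests[j], cur_height) {
                let merge = requests.remove(i);
                if let Err(rejected) = requests[j].merge_package(merge, cur_height) {
                    requests.insert(i, rejected);
                } else {
                    merged = true;
                }
                break;
            }
        }
        if !merged {
            i += 1;
        }
    }
}

fn main() {
    let mut reqs = vec![
        ClaimRequest { cltv: 500, outpoints: vec![0] },
        ClaimRequest { cltv: 501, outpoints: vec![1] },
        ClaimRequest { cltv: 500, outpoints: vec![2] },
    ];
    aggregate(&mut reqs, 100);
    // The two cltv-500 requests merge; the cltv-501 one stays separate.
    assert_eq!(reqs.len(), 2);
    assert_eq!(reqs[0].outpoints, vec![0, 2]);
}
```

Returning the rejected package through `Err` keeps ownership moves explicit: the vector never holds a stale copy, and a refused merge restores the exact element that was removed.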
