Commit 1fdfc7f

Do not broadcast commitment txn on Permanent mon update failure
See doc updates for more info on the edge case this prevents, and there isn't really a strong reason why we would need to broadcast the latest state immediately. Specifically, in the case of HTLC claims (the most important reason to ensure we have state on chain if it cannot be persisted), we will still force-close if there are HTLCs which need claiming and are going to expire. Surprisingly, there were no tests which failed as a result of this change, but a new one has been added.
1 parent f75b6cb commit 1fdfc7f
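The policy described in the commit message can be sketched as a small decision function. This is a self-contained, hypothetical model, not rust-lightning's actual API: the names `MonitorUpdateErr`, `should_broadcast_on_close`, and `HTLC_BUFFER` are invented for illustration.

```rust
#[derive(Debug, PartialEq)]
enum MonitorUpdateErr {
    TemporaryFailure,
    PermanentFailure,
}

/// Illustrative margin (in blocks) before an HTLC's expiry forces us on-chain.
const HTLC_BUFFER: u32 = 6;

/// After this commit, a PermanentFailure force-closes the channel but does
/// *not* broadcast immediately; we only go on-chain once an HTLC we need to
/// claim is close enough to expiry.
fn should_broadcast_on_close(err: &MonitorUpdateErr, best_block_height: u32,
                             htlc_expiries: &[u32]) -> bool {
    match err {
        // Temporary failures merely freeze the channel; nothing goes on-chain.
        MonitorUpdateErr::TemporaryFailure => false,
        // Permanent failures hold back the commitment tx unless an HTLC is
        // about to time out.
        MonitorUpdateErr::PermanentFailure => htlc_expiries.iter()
            .any(|expiry| best_block_height + HTLC_BUFFER >= *expiry),
    }
}

fn main() {
    // No HTLC near expiry: hold off on broadcasting the commitment tx.
    assert!(!should_broadcast_on_close(&MonitorUpdateErr::PermanentFailure, 100, &[200]));
    // An HTLC expiring within the buffer: broadcast so we can claim it.
    assert!(should_broadcast_on_close(&MonitorUpdateErr::PermanentFailure, 195, &[200]));
}
```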

File tree

4 files changed: +74 -56 lines changed

lightning/src/chain/channelmonitor.rs

+20-10
@@ -69,12 +69,14 @@ pub struct ChannelMonitorUpdate {
 /// increasing and increase by one for each new update, with one exception specified below.
 ///
 /// This sequence number is also used to track up to which points updates which returned
-/// ChannelMonitorUpdateErr::TemporaryFailure have been applied to all copies of a given
+/// [`ChannelMonitorUpdateErr::TemporaryFailure`] have been applied to all copies of a given
 /// ChannelMonitor when ChannelManager::channel_monitor_updated is called.
 ///
 /// The only instance where update_id values are not strictly increasing is the case where we
 /// allow post-force-close updates with a special update ID of [`CLOSED_CHANNEL_UPDATE_ID`]. See
 /// its docs for more details.
+///
+/// [`ChannelMonitorUpdateErr::TemporaryFailure`]: super::ChannelMonitorUpdateErr::TemporaryFailure
 pub update_id: u64,
 }

@@ -1210,14 +1212,20 @@ impl<Signer: Sign> ChannelMonitor<Signer> {
 }

 /// Used by ChannelManager deserialization to broadcast the latest holder state if its copy of
-/// the Channel was out-of-date. You may use it to get a broadcastable holder toxic tx in case of
-/// fallen-behind, i.e when receiving a channel_reestablish with a proof that our counterparty side knows
-/// a higher revocation secret than the holder commitment number we are aware of. Broadcasting these
-/// transactions are UNSAFE, as they allow counterparty side to punish you. Nevertheless you may want to
-/// broadcast them if counterparty don't close channel with his higher commitment transaction after a
-/// substantial amount of time (a month or even a year) to get back funds. Best may be to contact
-/// out-of-band the other node operator to coordinate with him if option is available to you.
-/// In any-case, choice is up to the user.
+/// the Channel was out-of-date.
+///
+/// You may also use this to broadcast the latest local commitment transaction, either because
+/// a monitor update failed with [`ChannelMonitorUpdateErr::PermanentFailure`] or because we've
+/// fallen behind (i.e. we've received proof that our counterparty side knows a revocation
+/// secret we gave them that they shouldn't know).
+///
+/// Broadcasting these transactions in the second case is UNSAFE, as they allow the counterparty
+/// side to punish you. Nevertheless you may want to broadcast them if the counterparty doesn't
+/// close the channel with their commitment transaction after a substantial amount of time. Best
+/// may be to contact the other node operator out-of-band to coordinate other options available
+/// to you. In any case, the choice is up to you.
+///
+/// [`ChannelMonitorUpdateErr::PermanentFailure`]: super::ChannelMonitorUpdateErr::PermanentFailure
 pub fn get_latest_holder_commitment_txn<L: Deref>(&self, logger: &L) -> Vec<Transaction>
 where L::Target: Logger {
 	self.inner.lock().unwrap().get_latest_holder_commitment_txn(logger)
@@ -2002,7 +2010,9 @@ impl<Signer: Sign> ChannelMonitorImpl<Signer> {
 if *should_broadcast {
 	self.broadcast_latest_holder_commitment_txn(broadcaster, logger);
 } else if !self.holder_tx_signed {
-	log_error!(logger, "You have a toxic holder commitment transaction avaible in channel monitor, read comment in ChannelMonitor::get_latest_holder_commitment_txn to be informed of manual action to take");
+	log_error!(logger, "WARNING: You have a potentially-toxic holder commitment transaction available to broadcast");
+	log_error!(logger, " in channel monitor for channel {}!", log_bytes!(self.funding_info.0.to_channel_id()));
+	log_error!(logger, " Read the docs for ChannelMonitor::get_latest_holder_commitment_txn and take manual action!");
 } else {
 	// If we generated a MonitorEvent::CommitmentTxConfirmed, the ChannelManager
 	// will still give us a ChannelForceClosed event with !should_broadcast, but we
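The recovery path the new `get_latest_holder_commitment_txn` docs describe can be sketched with a mock. The `Monitor` type below only mirrors the *shape* of rust-lightning's `ChannelMonitor`; it is an invented stand-in, not the real API, and `Transaction` here is a placeholder for `bitcoin::Transaction`.

```rust
/// Placeholder for bitcoin::Transaction in this sketch.
struct Transaction(&'static str);

/// Invented stand-in for rust-lightning's ChannelMonitor.
struct Monitor {
    holder_txn: Vec<&'static str>,
}

impl Monitor {
    /// Mirrors the shape of ChannelMonitor::get_latest_holder_commitment_txn:
    /// returns the holder commitment transaction plus any HTLC transactions.
    fn get_latest_holder_commitment_txn(&self) -> Vec<Transaction> {
        self.holder_txn.iter().map(|&t| Transaction(t)).collect()
    }
}

fn main() {
    let monitor = Monitor { holder_txn: vec!["commitment", "htlc-timeout"] };
    // Only do this once no further channel updates can be generated by the
    // ChannelManager -- otherwise the broadcast state may later be revoked!
    let txn = monitor.get_latest_holder_commitment_txn();
    for tx in &txn {
        println!("broadcasting: {}", tx.0); // hand each tx to your broadcaster
    }
    assert_eq!(txn.len(), 2);
}
```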

lightning/src/chain/mod.rs

+48-42
@@ -193,61 +193,67 @@ pub enum ChannelMonitorUpdateErr {
 /// our state failed, but is expected to succeed at some point in the future).
 ///
 /// Such a failure will "freeze" a channel, preventing us from revoking old states or
-/// submitting new commitment transactions to the counterparty. Once the update(s) that failed
-/// have been successfully applied, a [`MonitorEvent::UpdateCompleted`] event should be returned
-/// via [`Watch::release_pending_monitor_events`] which will then restore the channel to an
-/// operational state.
-///
-/// Note that a given ChannelManager will *never* re-generate a given ChannelMonitorUpdate. If
-/// you return a TemporaryFailure you must ensure that it is written to disk safely before
-/// writing out the latest ChannelManager state.
-///
-/// Even when a channel has been "frozen" updates to the ChannelMonitor can continue to occur
-/// (eg if an inbound HTLC which we forwarded was claimed upstream resulting in us attempting
-/// to claim it on this channel) and those updates must be applied wherever they can be. At
-/// least one such updated ChannelMonitor must be persisted otherwise PermanentFailure should
-/// be returned to get things on-chain ASAP using only the in-memory copy. Obviously updates to
-/// the channel which would invalidate previous ChannelMonitors are not made when a channel has
-/// been "frozen".
-///
-/// Note that even if updates made after TemporaryFailure succeed you must still provide a
-/// [`MonitorEvent::UpdateCompleted`] to ensure you have the latest monitor and re-enable
-/// normal channel operation. Note that this is normally generated through a call to
-/// [`ChainMonitor::channel_monitor_updated`].
-///
-/// Note that the update being processed here will not be replayed for you when you return a
-/// [`MonitorEvent::UpdateCompleted`] event via [`Watch::release_pending_monitor_events`], so
-/// you must store the update itself on your own local disk prior to returning a
-/// TemporaryFailure. You may, of course, employ a journaling approach, storing only the
-/// ChannelMonitorUpdate on disk without updating the monitor itself, replaying the journal at
-/// reload-time.
+/// submitting new commitment transactions to the counterparty. Once the update(s) which failed
+/// have been successfully applied, a [`MonitorEvent::UpdateCompleted`] can be used to restore
+/// the channel to an operational state.
+///
+/// Note that a given [`ChannelManager`] will *never* re-generate a [`ChannelMonitorUpdate`].
+/// If you return this error you must ensure that it is written to disk safely before writing
+/// the latest [`ChannelManager`] state, or you should return [`PermanentFailure`] instead.
+///
+/// Even when a channel has been "frozen", updates to the [`ChannelMonitor`] can continue to
+/// occur (e.g. if an inbound HTLC which we forwarded was claimed upstream, resulting in us
+/// attempting to claim it on this channel) and those updates must still be persisted.
+///
+/// No updates to the channel will be made which could invalidate other [`ChannelMonitor`]s
+/// until a [`MonitorEvent::UpdateCompleted`] is provided, even if you return no error on a
+/// later monitor update for the same channel.
 ///
 /// For deployments where a copy of ChannelMonitors and other local state are backed up in a
 /// remote location (with local copies persisted immediately), it is anticipated that all
 /// updates will return TemporaryFailure until the remote copies could be updated.
 ///
-/// [`ChainMonitor::channel_monitor_updated`]: chainmonitor::ChainMonitor::channel_monitor_updated
+/// [`PermanentFailure`]: ChannelMonitorUpdateErr::PermanentFailure
+/// [`ChannelManager`]: crate::ln::channelmanager::ChannelManager
 TemporaryFailure,
-/// Used to indicate no further channel monitor updates will be allowed (eg we've moved on to a
-/// different watchtower and cannot update with all watchtowers that were previously informed
-/// of this channel).
+/// Used to indicate no further channel monitor updates will be allowed (likely a disk failure
+/// or a remote copy of this [`ChannelMonitor`] is no longer reachable and thus not updatable).
 ///
-/// At reception of this error, ChannelManager will force-close the channel and return at
-/// least a final ChannelMonitorUpdate::ChannelForceClosed which must be delivered to at
-/// least one ChannelMonitor copy. Revocation secret MUST NOT be released and offchain channel
-/// update must be rejected.
+/// When this is returned, [`ChannelManager`] will force-close the channel but *not* broadcast
+/// our current commitment transaction. This avoids a dangerous case where a local disk failure
+/// (e.g. the Linux-default remounting of the disk as read-only) causes [`PermanentFailure`]s
+/// for all monitor updates. If we were to broadcast our latest commitment transaction and then
+/// restart, we could end up reading a previous [`ChannelMonitor`] and [`ChannelManager`],
+/// revoking our now-broadcasted state before seeing it confirm and losing all our funds.
 ///
-/// This failure may also signal a failure to update the local persisted copy of one of
-/// the channel monitor instance.
+/// Note that this is somewhat of a tradeoff - if the disk is really gone and we may have lost
+/// the data permanently, we really should broadcast immediately. If the data can be recovered
+/// with manual intervention, we'd rather close the channel, rejecting future updates to it,
+/// and broadcast the latest state only if we have HTLCs to claim which are timing out (which
+/// we do as long as blocks are connected).
 ///
-/// Note that even when you fail a holder commitment transaction update, you must store the
-/// update to ensure you can claim from it in case of a duplicate copy of this ChannelMonitor
-/// broadcasts it (e.g distributed channel-monitor deployment)
+/// In order to broadcast the latest local commitment transaction, you'll need to call
+/// [`ChannelMonitor::get_latest_holder_commitment_txn`] and broadcast the resulting
+/// transactions once you've safely ensured no further channel updates can be generated by your
+/// [`ChannelManager`].
+///
+/// Note that at least one final [`ChannelMonitorUpdate`] may still be provided, which must
+/// still be processed by a running [`ChannelMonitor`]. This final update will mark the
+/// [`ChannelMonitor`] as finalized, ensuring no further updates (e.g. revocation of the latest
+/// commitment transaction) are allowed.
+///
+/// Note that even if you return a [`PermanentFailure`] due to unavailability of secondary
+/// [`ChannelMonitor`] copies, you should still make an attempt to store the update where
+/// possible to ensure you can claim HTLC outputs on the latest commitment transaction
+/// broadcasted later.
 ///
 /// In case of distributed watchtowers deployment, the new version must be written to disk, as
 /// state may have been stored but rejected due to a block forcing a commitment broadcast. This
 /// storage is used to claim outputs of rejected state confirmed onchain by another watchtower,
 /// lagging behind on block processing.
+///
+/// [`PermanentFailure`]: ChannelMonitorUpdateErr::PermanentFailure
+/// [`ChannelManager`]: crate::ln::channelmanager::ChannelManager
 PermanentFailure,
 }

@@ -277,7 +283,7 @@ pub trait Watch<ChannelSigner: Sign> {
 /// with any spends of outputs returned by [`get_outputs_to_watch`]. In practice, this means
 /// calling [`block_connected`] and [`block_disconnected`] on the monitor.
 ///
-/// Note: this interface MUST error with `ChannelMonitorUpdateErr::PermanentFailure` if
+/// Note: this interface MUST error with [`ChannelMonitorUpdateErr::PermanentFailure`] if
 /// the given `funding_txo` has previously been registered via `watch_channel`.
 ///
 /// [`get_outputs_to_watch`]: channelmonitor::ChannelMonitor::get_outputs_to_watch
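A persister sitting behind `Watch` has to map its storage outcomes onto the two error variants documented above. The following sketch models that mapping only; `WriteResult`, `persist_update`, and `MonitorUpdateErr` are invented names, not rust-lightning's actual persistence API.

```rust
#[derive(Debug, PartialEq)]
enum MonitorUpdateErr {
    TemporaryFailure,
    PermanentFailure,
}

/// Possible outcomes of attempting to write a monitor update (illustrative).
enum WriteResult {
    Ok,
    RemoteUnreachable, // stored locally, backup copy not updated yet
    DiskGone,          // e.g. filesystem remounted read-only
}

fn persist_update(outcome: WriteResult) -> Result<(), MonitorUpdateErr> {
    match outcome {
        WriteResult::Ok => Ok(()),
        // The update is safely on local disk and will reach the remote copy
        // later, so freeze the channel rather than closing it.
        WriteResult::RemoteUnreachable => Err(MonitorUpdateErr::TemporaryFailure),
        // No copy of the update can be stored: close the channel, though (per
        // this commit) the commitment tx will not be broadcast immediately.
        WriteResult::DiskGone => Err(MonitorUpdateErr::PermanentFailure),
    }
}

fn main() {
    assert_eq!(persist_update(WriteResult::Ok), Ok(()));
    assert_eq!(persist_update(WriteResult::RemoteUnreachable),
               Err(MonitorUpdateErr::TemporaryFailure));
    assert_eq!(persist_update(WriteResult::DiskGone),
               Err(MonitorUpdateErr::PermanentFailure));
}
```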

lightning/src/ln/chanmon_update_fail_tests.rs

+2
@@ -64,6 +64,8 @@ fn test_simple_monitor_permanent_update_fail() {
 	_ => panic!("Unexpected event"),
 };

+assert!(nodes[0].tx_broadcaster.txn_broadcasted.lock().unwrap().is_empty());
+
 // TODO: Once we hit the chain with the failure transaction we should check that we get a
 // PaymentPathFailed event
lightning/src/ln/channelmanager.rs

+4-4
@@ -1177,8 +1177,8 @@ pub enum PaymentSendFailure {
 ///
 /// Any entries which contain Err(APIError::MonitorUpdateFailed) or Ok(()) MUST NOT be retried
 /// as they will result in over-/re-payment. These HTLCs all either successfully sent (in the
-/// case of Ok(())) or will send once channel_monitor_updated is called on the next-hop channel
-/// with the latest update_id.
+/// case of Ok(())) or will send once a [`MonitorEvent::UpdateCompleted`] is provided for the
+/// next-hop channel with the latest update_id.
 PartialFailure {
 	/// The errors themselves, in the same order as the route hops.
 	results: Vec<Result<(), APIError>>,

@@ -1351,7 +1351,7 @@ macro_rules! handle_monitor_err {
 // given up the preimage yet, so might as well just wait until the payment is
 // retried, avoiding the on-chain fees.
 let res: Result<(), _> = Err(MsgHandleErrInternal::from_finish_shutdown("ChannelMonitor storage failure".to_owned(), *$chan_id, $chan.get_user_id(),
-	$chan.force_shutdown(true), $self.get_channel_update_for_broadcast(&$chan).ok() ));
+	$chan.force_shutdown(false), $self.get_channel_update_for_broadcast(&$chan).ok() ));
 (res, true)
 },
 ChannelMonitorUpdateErr::TemporaryFailure => {

@@ -4584,7 +4584,7 @@ impl<Signer: Sign, M: Deref, T: Deref, K: Deref, F: Deref, L: Deref> ChannelMana
 // We do not do a force-close here as that would generate a monitor update for
 // a monitor that we didn't manage to store (and that we don't care about - we
 // don't respond with the funding_signed so the channel can never go on chain).
-let (_monitor_update, failed_htlcs) = chan.force_shutdown(true);
+let (_monitor_update, failed_htlcs) = chan.force_shutdown(false);
 assert!(failed_htlcs.is_empty());
 return Err(MsgHandleErrInternal::send_err_msg_no_close("ChannelMonitor storage failure".to_owned(), funding_msg.channel_id));
 },
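The `true` → `false` flips above change the flag passed to `Channel::force_shutdown`, which controls whether the resulting monitor update asks the `ChannelMonitor` to broadcast the holder commitment transaction. The mock below models only that flag; `force_shutdown` and `ShutdownResult` here are simplified stand-ins, not the real rust-lightning signatures.

```rust
/// Simplified model of what force-shutting-down a channel produces.
struct ShutdownResult {
    broadcast_latest_txn: bool, // should the monitor broadcast our commitment tx?
    failed_htlcs: Vec<u64>,     // HTLCs to fail back (empty in these call sites)
}

fn force_shutdown(should_broadcast: bool) -> ShutdownResult {
    ShutdownResult { broadcast_latest_txn: should_broadcast, failed_htlcs: vec![] }
}

fn main() {
    // After this commit, monitor-storage failures shut down *without*
    // broadcasting, avoiding the stale-state punishment hazard described in
    // the ChannelMonitorUpdateErr::PermanentFailure docs.
    let res = force_shutdown(false);
    assert!(!res.broadcast_latest_txn);
    assert!(res.failed_htlcs.is_empty());
}
```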
