Handle fallible per commitment point in channel establishment #3109
Conversation
Force-pushed 3fbf2e6 to 1e5b210
Codecov Report. Attention: Patch coverage is

```
@@            Coverage Diff             @@
##             main    #3109      +/-   ##
==========================================
- Coverage   89.74%   89.70%   -0.05%
==========================================
  Files         130      130
  Lines      107793   107882     +89
  Branches   107793   107882     +89
==========================================
+ Hits        96743    96778     +35
- Misses       8651     8695     +44
- Partials     2399     2409     +10
```

View full report in Codecov by Sentry.
Force-pushed 1e5b210 to 07991fa
The commits from #3115 are second from the end; will rebase on top later this week, but generally most of the stuff here is in place.
Force-pushed 07991fa to 744f2ca
Force-pushed 73583e7 to b798ffa
Force-pushed 0bdd4cc to 0ac4ad8
When we get back to this, please address the comments from #3152 (review)
Force-pushed 7ea6d3f to c109e18
This also needs a rebase now.
Force-pushed df8902b to 50b1d88
Will probably squash some of these commits together at some point but just kept them separate for now.

Rebased and force pushed with the new approach. Previous branch is still here if it's helpful to see.
```diff
-// `current_point` will become optional when async signing is implemented.
-let cur_holder_commitment_point = Some(self.context.holder_commitment_point.current_point());
-let next_holder_commitment_point = self.context.holder_commitment_point.next_point();
+let cur_holder_commitment_point = Some(self.holder_commitment_point.current_point());
+let next_holder_commitment_point = self.holder_commitment_point.next_point();
```
When reading, now that this will never be optional, can we unwrap/expect the value (or return a `DecodeError` on `None`)? What's the right way to handle this?
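The read-side option discussed here can be handled by mapping `None` to a decode error rather than unwrapping. Below is a hypothetical, simplified sketch (the `DecodeError` enum and `require_point` helper are illustrative stand-ins, not LDK's actual types):

```rust
// Hypothetical sketch: a field serialized as `Option<T>` for compatibility
// that is now required at read time. Returning an error keeps a corrupt or
// legacy serialization from panicking the reader.
#[derive(Debug, PartialEq)]
enum DecodeError {
	InvalidValue,
}

fn require_point(cur_point: Option<u32>) -> Result<u32, DecodeError> {
	// `ok_or` converts the missing case into a decode failure instead of
	// an `unwrap`/`expect` panic deep inside deserialization.
	cur_point.ok_or(DecodeError::InvalidValue)
}

fn main() {
	assert_eq!(require_point(Some(7)), Ok(7));
	assert_eq!(require_point(None), Err(DecodeError::InvalidValue));
}
```

The trade-off is that an `expect` documents an invariant ("we always write `Some`"), while a `DecodeError` tolerates hostile or truncated input; for data read off disk the error path is usually the safer choice.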
Force-pushed 50b1d88 to 760b055
Nice, this is looking great. A few nits here and there but basically this LGTM.
```rust
} else {
	log_debug!(logger, "Not producing channel_ready: the holder commitment point is not available.");
	self.context.signer_pending_channel_ready = true;
	None
```
I think after this we are missing an update to `get_announcement_sigs` to replay it (can come in a followup, but right now if you do an async signer it will always be forced to be a non-public channel).
sounds good, will do in a followup
Force-pushed 8f49a7d to 6041e37
Going to give this another look next week but it looks good!
lightning/src/ln/channel.rs (outdated)
```rust
fn generate_accept_channel_message<L: Deref>(&mut self, _logger: &L) -> Option<msgs::AcceptChannel>
	where L::Target: Logger
{
```

Could you format these method signatures per rustfmt, here and elsewhere in the PR:

```rust
fn generate_accept_channel_message<L: Deref>(
	&mut self, _logger: &L
) -> Option<msgs::AcceptChannel> where L::Target: Logger {
```
Did a full scan of the function signatures changed in this PR and formatted them accordingly, lmk if i missed any
```rust
}
if let Some(ref mut point) = self.unfunded_context.holder_commitment_point {
	if !point.is_available() {
		point.try_resolve_pending(&self.context.holder_signer, &self.context.secp_ctx, logger);
```
Ah, would it be easy to add test coverage for this case?

Edit: looking at this again, this looks potentially redundant with `HolderCommitmentPoint::new`, which already tries to fetch the next point?
This case is for if they already got the first commitment point, but not the next. `HolderCommitmentPoint::new` will try to fetch both points, but if one is ready before the other, I think we still need this check.

Unfortunately, I don't think it's easy to add coverage for this. Right now our test utils for disabling signer ops mainly just disable all calls for a certain signing method on a signer, whereas for this we'd need to disable it for the second call and not the first. I have an idea for how to do this, i.e. we could hold a queue of whether the next signer calls should be disabled/enabled, but my first impression is it'll be a little complicated just to get this case. Open to suggestions, and I'm happy to give it a try if you want.
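The situation being discussed (first point resolved, second still pending on an async signer) can be modeled as a tiny state machine. This is a hypothetical sketch with invented names (`HolderPoint`, a closure standing in for the signer call), not the PR's actual types:

```rust
// Hypothetical sketch: a holder commitment point that may have fetched its
// current point but still be waiting on the next one from an async signer.
#[derive(Debug, PartialEq, Clone, Copy)]
enum HolderPoint {
	// Only the current point is known; fetching the next returned Err.
	PendingNext { current: u32 },
	// Both the current and next points are in hand.
	Available { current: u32, next: u32 },
}

impl HolderPoint {
	fn is_available(&self) -> bool {
		matches!(self, HolderPoint::Available { .. })
	}

	// Mirrors the `try_resolve_pending` idea: a no-op when already
	// available, an upgrade when the signer now succeeds.
	fn try_resolve_pending(&mut self, signer: impl Fn() -> Result<u32, ()>) {
		if let HolderPoint::PendingNext { current } = *self {
			if let Ok(next) = signer() {
				*self = HolderPoint::Available { current, next };
			}
		}
	}
}

fn main() {
	let mut point = HolderPoint::PendingNext { current: 10 };
	point.try_resolve_pending(|| Err(())); // signer still blocked
	assert!(!point.is_available());
	point.try_resolve_pending(|| Ok(11)); // signer unblocked
	assert!(point.is_available());
}
```

This also illustrates why the check is not redundant with construction: a constructor can attempt both fetches, but if only the first succeeds, something must later retry the second.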
```rust
let mut holder_commitment_point = match self.unfunded_context.holder_commitment_point {
	Some(point) => point,
	None => return Err((self, ChannelError::close("Received funding_created before our first commitment point was available".to_owned()))),
};
```
In `funding_signed` we'll `expect` that `holder_commitment_point` is `Some`, but here we'll error and close the channel. Is there a reason for the difference in handling there?
Unintentional, just changed `funding_signed` to return an error!
```diff
@@ -8298,7 +8304,7 @@ impl<SP: Deref> OutboundV1Channel<SP> where SP::Target: SignerProvider {
 			holder_commitment_point: HolderCommitmentPoint::new(&context.holder_signer, &context.secp_ctx),
 		};
 
-		let chan = Self { context, unfunded_context };
+		let chan = Self { context, unfunded_context, signer_pending_open_channel: false };
```
Not obvious to me why this is set to `false`; seems like whether it's set should depend on the result of `HolderCommitmentPoint::new` above?
Ah yea, we just initialize to false here, and leave the logic for setting the flag for where we actually try to generate the open_channel message. Added a comment, lmk if that works
lightning/src/ln/channel.rs (outdated)
```rust
let open_channel = match self.unfunded_context.holder_commitment_point {
	Some(ref mut point) if point.is_available() && self.signer_pending_open_channel => {
		log_trace!(logger, "Attempting to generate open_channel...");
		self.get_open_channel(chain_hash, logger)
	}
	_ => None
};
```
I think this can be simplified due to the existing checks in `get_open_channel`?

```rust
let open_channel = if self.signer_pending_open_channel {
	log_trace!(logger, "Attempting to generate open_channel...");
	self.get_open_channel(chain_hash, logger)
} else { None };
```
We check `point.is_available` here and don't in `get_open_channel`. But it does seem simpler to just do that check in `get_open_channel` (and be consistent in both places), so I'll just move it there and simplify this. Same with accept channel.
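The consolidation agreed on here (let the message-generation method own the availability check, so the retry path only consults the pending flag) can be sketched as follows. All names (`Chan`, `point_available`, the string message) are hypothetical simplifications, not the PR's code:

```rust
// Hypothetical sketch: `get_open_channel` itself checks point availability
// and maintains the pending flag, so `signer_maybe_unblocked` stays trivial.
struct Chan {
	point_available: bool,
	signer_pending_open_channel: bool,
}

impl Chan {
	fn get_open_channel(&mut self) -> Option<&'static str> {
		if !self.point_available {
			// Remember to retry this message once the signer is unblocked.
			self.signer_pending_open_channel = true;
			return None;
		}
		self.signer_pending_open_channel = false;
		Some("open_channel")
	}

	fn signer_maybe_unblocked(&mut self) -> Option<&'static str> {
		// The availability check lives in get_open_channel, so this path
		// only needs to know whether a send is outstanding.
		if self.signer_pending_open_channel { self.get_open_channel() } else { None }
	}
}

fn main() {
	let mut chan = Chan { point_available: false, signer_pending_open_channel: false };
	assert_eq!(chan.get_open_channel(), None); // signer not ready yet
	chan.point_available = true; // async signer delivered the point
	assert_eq!(chan.signer_maybe_unblocked(), Some("open_channel"));
}
```

Keeping the check in one place avoids the two call sites drifting apart, which is the inconsistency the review comment points out.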
Force-pushed 6041e37 to 20642a6
Pushed some, still addressing more comments.
Force-pushed 20642a6 to ceae1c4
Almost there...
Force-pushed ceae1c4 to 4aae0ff
Okay, I think I've addressed all the comments up to now. Lmk when/if I'm good to squash!
Thanks for all the updates here! LGTM, feel free to squash IMO
Force-pushed 4aae0ff to be67d99
Squashed.

Looks like CI is sad :(

Hmmm, nothing was changed in my force push and the checks it's failing are running fine locally...? The failure on the latest commit:

But I don't even have that line? It's also weird it says 0.0.124... should I maybe rebase or something?

CI gets run rebased against upstream, so you probably have a silent merge conflict. A rebase should turn it up and then we can land.
We are choosing to move the `HolderCommitmentPoint` (the struct that tracks commitment points retrieved from the signer + the commitment number) to handle channel establishment, where we have no commitment point at all. Previously we introduced this struct to track when we were pending a commitment point (because of an async signer) during normal channel operation, which assumed we always had a commitment point to start out with.

Initially we tried to add an `Uninitialized` variant that held no points, but that meant that we needed to handle that case everywhere, which left a bunch of scary/unnecessary unwraps/expects. Instead, we just hold an optional `HolderCommitmentPoint` struct on our unfunded channels, and a non-optional `HolderCommitmentPoint` for funded channels.

This commit starts that transition. A following commit will remove the holder commitment point from the current `ChannelContext`. This also makes small fixups to the comments on the `HolderCommitmentPoint` variants.
Following a previous commit adding `HolderCommitmentPoint` elsewhere, we make the transition to use those commitment points and remove the existing one.
In the event that a signer cannot provide a commitment point immediately, we set a flag to remember we're waiting for this before we can send `open_channel`. We make sure to get the first two commitment points, so when we advance commitments, we always have a commitment point available. When initializing a context, we set the `signer_pending_open_channel` flag to false, and leave setting this flag for where we attempt to generate a message. When checking to send messages when a signer is unblocked, we must handle both when we haven't gotten any commitment point, as well as when we've gotten the first but not the second point.
For all of our async signing logic in channel establishment v1, we set signer flags in the method where we create the raw lightning message object. To keep things consistent, this commit moves setting the signer flags to where we create funding_created, since this was being set elsewhere before. While we're doing this cleanup, this also slightly refactors our funding_signed method to move some code out of an indent, as well as removes a log to fix a nit from lightningdevkit#3152.
Similar to `open_channel`, if a signer cannot provide a commitment point immediately, we set a flag to remember we're waiting for a point to send `accept_channel`. We make sure to get the first two points before moving on, so when we advance our commitment we always have a point available.
Here we handle the case where our signer is pending the next commitment point when we try to send channel ready. We set a flag to remember to send this message when our signer is unblocked. This follows the same general pattern as everywhere else where we're waiting on a commitment point from the signer in order to send a message.
Here we make a test that disables a channel signer's ability to return commitment points upon being first derived for a channel. We also fit in a couple cleanups: removing a comment referencing a previous design with a `HolderCommitmentPoint::Uninitialized` variant, as well as adding coverage for updating channel maps in async closing signed.
Force-pushed be67d99 to d1e94bd
```diff
@@ -5013,11 +5023,13 @@ impl<SP: Deref> Channel<SP> where
 				ClosureReason::HolderForceClosed { broadcasted_latest_txn: Some(false) },
 			)));
 		}
 		self.context.assert_no_commitment_advancement("initial commitment_signed");
 		let holder_commitment_point = &mut self.holder_commitment_point.clone();
```
I'm very much not a fan of this. It's really weird to `clone` something in order to update something that is in `self` before calling methods on `self`. Instead, we should probably consider tweaking `InitialRemoteCommitmentReceiver` to make the methods on it actually static instead of in the trait, allowing us to just borrow the fields we need out of `self`.
```diff
 {
 	self.context.maybe_downgrade_channel_features(fee_estimator)?;
-	Ok(self.get_open_channel(chain_hash))
+	self.get_open_channel(chain_hash, logger).ok_or(())
```
This could use a comment describing why it's okay to return an `Err` if `get_open_channel` failed. We use the result of `maybe_handle_error_without_close` to check whether we have handled the error without needing to FC the channel, so this code reads like a bug. But, actually, because we'll reuse the same point, once `get_open_channel` returns a value once, it should always return a value (and never wait on a signer operation again), and thus if we get a failure here it's because the peer sent an `error` before we ever sent the `OpenChannel` (which really shouldn't be possible; they shouldn't know the temporary channel id until we do).
```diff
 		};
 
-		let need_channel_ready = channel.check_get_channel_ready(0, logger).is_some();
+		let need_channel_ready = channel.check_get_channel_ready(0, logger).is_some()
+			|| channel.context.signer_pending_channel_ready;
```
Need to apply this diff to v2 as well.
Handles fallible `get_per_commitment_point` signer method (except for channel reestablish).

We make `get_per_commitment_point` return a result type so that in the `Err` case, we (LDK) can resume operation while an async signer fetches the commitment point. This will typically block sending out a message, but otherwise most state updates will occur. When the signature arrives, the user is expected to call `ChannelManager::signer_unblocked`, and we will retry `get_per_commitment_point`; this time the signer will return the commitment point with `Ok`, and we'll send out our message.

This PR implements this flow for channel establishment, i.e. `open_channel`, `accept_channel`, and `channel_ready`.

Rough outline of how our `HolderCommitmentPoint` advances and gets new signatures:

- `open_channel` - creates a new outbound channel, immediately requests the first commit point, waits to have the first 2 commit points to be unblocked to send the message
- `accept_channel` - same ^
- `funding_created` (send) - no op regarding commit points
- `funding_created` (receive) - uses point to verify first commitment, then advances commitment, requesting next point
- `funding_signed` (send) - no op
- `funding_signed` (receive) - same as above, verifies, advances, requests next point
- `channel_ready` - waits to have the next commit point ready, then once unblocked sends the current point in the `channel_ready` message
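The advance pattern in the outline above (always try to hold two points; on advance, promote the next point to current and immediately request the one after it) can be sketched as follows. This is a hypothetical simplification with invented names (`PointBuffer`, a closure standing in for the signer), not LDK's implementation:

```rust
// Hypothetical sketch of the two-point buffer: we only advance when the
// next point is in hand, then eagerly request the one after it so a later
// advance is usually unblocked.
struct PointBuffer {
	transaction_number: u64,
	current: u32,
	next: Option<u32>,
}

impl PointBuffer {
	fn advance(&mut self, signer: impl Fn(u64) -> Result<u32, ()>) -> Result<(), ()> {
		// Refuse to advance until the async signer has produced `next`.
		let next = self.next.ok_or(())?;
		self.current = next;
		self.transaction_number -= 1;
		// Request the following point now; Err just leaves it pending,
		// to be retried when the signer is unblocked.
		self.next = signer(self.transaction_number - 1).ok();
		Ok(())
	}
}

fn main() {
	let mut buf = PointBuffer { transaction_number: 100, current: 1, next: Some(2) };
	assert!(buf.advance(|_| Ok(3)).is_ok());
	assert_eq!(buf.current, 2);
	assert_eq!(buf.next, Some(3));
	// Signer blocked: this advance still works off the buffered point...
	assert!(buf.advance(|_| Err(())).is_ok());
	// ...but the one after it must wait, matching the channel_ready case.
	assert!(buf.advance(|_| Ok(5)).is_err());
}
```

Buffering one point ahead is what lets most state updates proceed while only the outgoing message (which carries the point) is deferred.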