Handle fallible per commitment point in channel establishment #3109
base: main
Conversation
Force-pushed from 3fbf2e6 to 1e5b210
Codecov Report
Attention: Patch coverage is
Additional details and impacted files:

@@            Coverage Diff             @@
##             main     #3109      +/-   ##
==========================================
+ Coverage    89.25%    90.65%   +1.40%
==========================================
  Files          130       130
  Lines       106959    114214    +7255
  Branches    106959    114214    +7255
==========================================
+ Hits         95464    103545    +8081
+ Misses        8706      8232     -474
+ Partials      2789      2437     -352

View full report in Codecov by Sentry.
Force-pushed from 1e5b210 to 07991fa
The commits from #3115 are second from the end; will rebase on top later this week, but generally most of the stuff here is in place.
Force-pushed from 07991fa to 744f2ca
Force-pushed from 73583e7 to b798ffa
Force-pushed from 0bdd4cc to 0ac4ad8
When we get back to this, please address the comments from #3152 (review).
Force-pushed from 7ea6d3f to c109e18
This also needs a rebase now.
Force-pushed from df8902b to 50b1d88
Will probably squash some of these commits together at some point, but just kept them separate for now.
Rebased and force pushed with the new approach. Previous branch is still here if it's helpful to see.
 // `current_point` will become optional when async signing is implemented.
-let cur_holder_commitment_point = Some(self.context.holder_commitment_point.current_point());
-let next_holder_commitment_point = self.context.holder_commitment_point.next_point();
+let cur_holder_commitment_point = Some(self.holder_commitment_point.current_point());
+let next_holder_commitment_point = self.holder_commitment_point.next_point();
When reading, now that this will never be optional, can we unwrap/expect the value (or return a DecodeError on None)? What's the right way to handle this?
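For context, a minimal sketch of the "return a DecodeError on None" option. The enum here just mirrors the shape of LDK's `DecodeError`; the helper and its signature are made up for illustration and are not the PR's code:

// Hypothetical sketch only: map a missing commitment point to a decode error at read
// time instead of unwrapping, so a corrupt or legacy serialization fails gracefully.
#[derive(Debug)]
enum DecodeError { InvalidValue }

fn require_commitment_point(point: Option<[u8; 33]>) -> Result<[u8; 33], DecodeError> {
    // `expect`/`unwrap` would panic here; returning an error lets the caller decide.
    point.ok_or(DecodeError::InvalidValue)
}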
Force-pushed from 50b1d88 to 760b055
Nice, this is looking great. A few nits here and there but basically this LGTM.
} else {
	log_debug!(logger, "Not producing channel_ready: the holder commitment point is not available.");
	self.context.signer_pending_channel_ready = true;
	None
I think after this we are missing an update to `get_announcement_sigs` to replay it (can come in a followup, but right now if you do an async signer it will always be forced to be a non-public channel).
sounds good, will do in a followup
Force-pushed from 8f49a7d to 6041e37
Going to give this another look next week but it looks good!
lightning/src/ln/channel.rs
Outdated
fn generate_accept_channel_message<L: Deref>(&mut self, _logger: &L) -> Option<msgs::AcceptChannel>
	where L::Target: Logger
{
Could you format these method signatures per rustfmt, here and elsewhere in the PR:
-fn generate_accept_channel_message<L: Deref>(&mut self, _logger: &L) -> Option<msgs::AcceptChannel>
-	where L::Target: Logger
-{
+fn generate_accept_channel_message<L: Deref>(
+	&mut self, _logger: &L
+) -> Option<msgs::AcceptChannel> where L::Target: Logger {
Did a full scan of the function signatures changed in this PR and formatted them accordingly; lmk if I missed any.
}
if let Some(ref mut point) = self.unfunded_context.holder_commitment_point {
	if !point.is_available() {
		point.try_resolve_pending(&self.context.holder_signer, &self.context.secp_ctx, logger);
Ah would it be easy to add test coverage for this case?
Edit: looking at this again, this looks potentially redundant with `HolderCommitmentPoint::new`, which already tries to fetch the next point?
This case is for if they already got the first commitment point, but not the next. `HolderCommitmentPoint::new` will try to fetch both points, but if one is ready before the other, I think we still need this check.
Unfortunately, I don't think it's easy to add coverage for this. Right now our test utils for disabling signer ops mainly just disable all calls for a certain signing method on a signer, whereas for this we'd need to disable it for the second call and not the first. I have an idea for how to do this (we could hold a queue of whether the next signer calls should be disabled or enabled), but my first impression is it'll be a little complicated just to cover this case. Open to suggestions, and I'm happy to give it a try if you want.
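To make the "queue of decisions" idea concrete, here is a rough, self-contained sketch. None of these names exist in LDK's test utils; they are purely illustrative:

use std::collections::VecDeque;

// Each signer call pops the front of the queue to decide whether it should succeed
// this time; `true` = succeed, `false` = fail once (simulating a pending async signer).
struct SignerOpQueue {
    decisions: VecDeque<bool>,
}

impl SignerOpQueue {
    fn new(decisions: impl IntoIterator<Item = bool>) -> Self {
        Self { decisions: decisions.into_iter().collect() }
    }

    // Called by a test signer before servicing e.g. a commitment point request.
    fn allow_next(&mut self) -> bool {
        // Once the scripted decisions run out, default to succeeding.
        self.decisions.pop_front().unwrap_or(true)
    }
}

// Example: allow the first point fetch but fail the second, so only the *next* point is
// left pending and the try_resolve_pending branch above would get exercised.
// let mut ops = SignerOpQueue::new([true, false]);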
let mut holder_commitment_point = match self.unfunded_context.holder_commitment_point {
	Some(point) => point,
	None => return Err((self, ChannelError::close("Received funding_created before our first commitment point was available".to_owned()))),
};
In `funding_signed` we'll `expect` that `holder_commitment_point` is `Some`, but here we'll error and close the channel. Is there a reason for the difference in handling there?
Unintentional, just changed `funding_signed` to return an error!
@@ -8298,7 +8304,7 @@ impl<SP: Deref> OutboundV1Channel<SP> where SP::Target: SignerProvider {
 		holder_commitment_point: HolderCommitmentPoint::new(&context.holder_signer, &context.secp_ctx),
 	};

-	let chan = Self { context, unfunded_context };
+	let chan = Self { context, unfunded_context, signer_pending_open_channel: false };
Not obvious to me why this is set to `false`; seems like whether it's set should depend on the result of `HolderCommitmentPoint::new` above?
Ah yea, we just initialize it to false here, and leave the logic for setting the flag to where we actually try to generate the open_channel message. Added a comment, lmk if that works.
lightning/src/ln/channel.rs
Outdated
let open_channel = match self.unfunded_context.holder_commitment_point {
	Some(ref mut point) if point.is_available() && self.signer_pending_open_channel => {
		log_trace!(logger, "Attempting to generate open_channel...");
		self.get_open_channel(chain_hash, logger)
	}
	_ => None
};
I think this can be simplified due to the existing checks in `get_open_channel`?
let open_channel = if self.signer_pending_open_channel {
log_trace!(logger, "Attempting to generate open_channel...");
self.get_open_channel(chain_hash, logger)
} else { None };
We check `point.is_available` here and don't in `get_open_channel`. But it does seem simpler to just do that check in `get_open_channel` (and be consistent in both places), so I'll just move it there and simplify this. Same with accept_channel.
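Roughly, the direction described here might look like the following self-contained sketch, with the availability check living in the builder so the initial send and the signer-unblocked path share it. The types and fields are stand-ins, not LDK's actual structs:

struct OpenChannelMsg; // stand-in for msgs::OpenChannel

struct PendingChannel {
    commitment_point_available: bool,
    signer_pending_open_channel: bool,
}

impl PendingChannel {
    // The builder itself checks point availability; if the signer hasn't produced the
    // point yet, record that we still owe an open_channel and return None.
    fn get_open_channel(&mut self) -> Option<OpenChannelMsg> {
        if !self.commitment_point_available {
            self.signer_pending_open_channel = true;
            return None;
        }
        self.signer_pending_open_channel = false;
        Some(OpenChannelMsg)
    }

    // The signer-unblocked path then only needs the pending flag, as suggested above.
    fn signer_maybe_unblocked(&mut self) -> Option<OpenChannelMsg> {
        if self.signer_pending_open_channel { self.get_open_channel() } else { None }
    }
}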
Force-pushed from 6041e37 to 20642a6
Pushed some, still addressing more comments.
Force-pushed from 20642a6 to ceae1c4
Almost there...
In the event that a signer cannot provide a commitment point immediately, we set a flag to remember we're waiting for this before we can send `open_channel`. We make sure to get the first two commitment points, so when we advance commitments, we always have a commitment point available.
We were only checking point.is_available() in the signer-unblocked case, but should just check it whenever we're getting open_channel.
For all of our async signing logic in channel establishment v1, we set signer flags in the method where we create the raw lightning message object. To keep things consistent, this commit moves setting the signer flags to where we create funding_created, since this was being set elsewhere before. While we're doing this cleanup, this also slightly refactors our funding_signed method to move some code out of an indent, as well as removes a log to fix a nit from lightningdevkit#3152.
Similar to `open_channel`, if a signer cannot provide a commitment point immediately, we set a flag to remember we're waiting for a point to send `accept_channel`. We make sure to get the first two points before moving on, so when we advance our commitment we always have a point available.
We were only checking point.is_available() in the signer-unblocked case, but should just check it whenever we're getting accept_channel.
Here we handle the case where our signer is pending the next commitment point when we try to send channel_ready. We set a flag to remember to send this message when our signer is unblocked. This follows the same general pattern as everywhere else where we're waiting on a commitment point from the signer in order to send a message.
Force-pushed from ceae1c4 to 4aae0ff
Okay, I think I've addressed all the comments up to now. Lmk when/if I'm good to squash!
Handles the fallible `get_per_commitment_point` signer method (except for channel reestablish).

We make `get_per_commitment_point` return a result type so that in the `Err` case, we (LDK) can resume operation while an async signer fetches the commitment point. This will typically block sending out a message, but otherwise most state updates will occur. When the signature arrives, the user is expected to call `ChannelManager::signer_unblocked` and we will retry `get_per_commitment_point`; however, this time the signer will return the commitment point with `Ok`, and we'll send out our message.

This PR implements this flow for channel establishment, i.e. `open_channel`, `accept_channel`, and `channel_ready`.

Rough outline of how our `HolderCommitmentPoint` advances and gets new signatures (a compressed sketch of this pattern follows the outline):
- `open_channel` - creates a new outbound channel, immediately requests the first commit point, and waits to have the first 2 commit points before it is unblocked to send the message
- `accept_channel` - same ^
- sending `funding_created` - no op regarding commit points
- handling `funding_created` - uses the point to verify the first commitment, then advances the commitment, requesting the next point
- sending `funding_signed` - no op
- handling `funding_signed` - same as above: verifies, advances, requests the next point
- `channel_ready` - waits to have the next commit point ready, then once unblocked sends the current point in the `channel_ready` message
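Below is a compressed, self-contained sketch of the pattern the outline describes: hold the current point plus (optionally) the next one, and only advance once the next point has been fetched. The real `HolderCommitmentPoint` in this PR is richer (it talks to the signer and secp context); the names and types here are illustrative only.

#[derive(Clone, Copy)]
struct Point(u64); // stand-in for a secp256k1 PublicKey

struct CommitmentPointState {
    transaction_number: u64,
    current: Point,
    next: Option<Point>, // None while an async signer is still working on it
}

impl CommitmentPointState {
    // Retry a pending fetch, e.g. when ChannelManager::signer_unblocked is called.
    fn try_resolve_pending(&mut self, fetch: impl Fn(u64) -> Result<Point, ()>) {
        if self.next.is_none() {
            self.next = fetch(self.transaction_number - 1).ok();
        }
    }

    // Messages like open_channel/accept_channel/channel_ready are only sent once this is true.
    fn is_available(&self) -> bool {
        self.next.is_some()
    }

    // Advance only when the next point is already in hand, then immediately request
    // the one after so the following advance can't get stuck either.
    fn advance(&mut self, fetch: impl Fn(u64) -> Result<Point, ()>) -> Result<(), ()> {
        let next = self.next.take().ok_or(())?;
        self.current = next;
        self.transaction_number -= 1;
        self.next = fetch(self.transaction_number - 1).ok();
        Ok(())
    }
}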