diff --git a/CHANGELOG.md b/CHANGELOG.md index 3e473a22a1c..f3108ae437b 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,3 +1,160 @@ +# 0.0.123 - May 08, 2024 - "BOLT12 Dust Sweeping" + +## API Updates + + * To reduce risk of force-closures and improve HTLC reliability the default + dust exposure limit has been increased to + `MaxDustHTLCExposure::FeeRateMultiplier(10_000)`. Users with existing + channels might want to consider using + `ChannelManager::update_channel_config` to apply the new default (#3045). + * `ChainMonitor::archive_fully_resolved_channel_monitors` is now provided to + remove from memory `ChannelMonitor`s that have been fully resolved on-chain + and are now not needed. It uses the new `Persist::archive_persisted_channel` + to inform the storage layer that such a monitor should be archived (#2964). + * An `OutputSweeper` is now provided which will automatically sweep + `SpendableOutputDescriptor`s, retrying until the sweep confirms (#2825). + * After initiating an outbound channel, a peer disconnection no longer results + in immediate channel closure. Rather, if the peer is reconnected before the + channel times out LDK will automatically retry opening it (#2725). + * `PaymentPurpose` now has separate variants for BOLT12 payments, which + include fields from the `invoice_request` as well as the `OfferId` (#2970). + * `ChannelDetails` now includes a list of in-flight HTLCs (#2442). + * `Event::PaymentForwarded` now includes `skimmed_fee_msat` (#2858). + * The `hashbrown` dependency has been upgraded and the use of `ahash` as the + no-std hash table hash function has been removed. As a consequence, LDK's + `Hash{Map,Set}`s no longer feature several constructors when LDK is built + with no-std; see the `util::hash_tables` module instead. On platforms that + `getrandom` supports, setting the `possiblyrandom/getrandom` feature flag + will ensure hash tables are resistant to HashDoS attacks, though the + `possiblyrandom` crate should detect most common platforms (#2810, #2891). + * `ChannelMonitor`-originated requests to the `ChannelSigner` can now fail and + be retried using `ChannelMonitor::signer_unblocked` (#2816). + * `SpendableOutputDescriptor::to_psbt_input` now includes the `witness_script` + where available as well as new proprietary data which can be used to + re-derive some spending keys from the base key (#2761, #3004). + * `OutPoint::to_channel_id` has been removed in favor of + `ChannelId::v1_from_funding_outpoint` in preparation for v2 channels with a + different `ChannelId` derivation scheme (#2797). + * `PeerManager::get_peer_node_ids` has been replaced with `list_peers` and + `peer_by_node_id`, which provide more details (#2905). + * `Bolt11Invoice::get_payee_pub_key` is now provided (#2909). + * `Default[Message]Router` now take an `entropy_source` argument (#2847). + * `ClosureReason::HTLCsTimedOut` has been separated out from + `ClosureReason::HolderForceClosed` as it is the most common case (#2887). + * `ClosureReason::CooperativeClosure` is now split into + `{Counterparty,Locally}Initiated` variants (#2863). + * `Event::ChannelPending::channel_type` is now provided (#2872). + * `PaymentForwarded::{prev,next}_user_channel_id` are now provided (#2924). + * Channel init messages have been refactored towards V2 channels (#2871). + * `BumpTransactionEvent` now contains the channel and counterparty (#2873). + * `util::scid_utils` is now public, with some trivial utilities to examine + short channel ids (#2694). 
+ * `DirectedChannelInfo::{source,target}` are now public (#2870). + * Bounds in `lightning-background-processor` were simplified by using + `AChannelManager` (#2963). + * The `Persist` impl for `KVStore` no longer requires `Sized`, allowing for + the use of `dyn KVStore` as `Persist` (#2883, #2976). + * `From` is now implemented for `PaymentHash` (#2918). + * `NodeId::from_slice` is now provided (#2942). + * `ChannelManager` deserialization may now fail with `DangerousValue` when + LDK's persistence API was violated (#2974). + +## Bug Fixes + * Excess fees on counterparty commitment transactions are now included in the + dust exposure calculation. This lines behavior up with some cases where + transaction fees can be burnt, making them effectively dust exposure (#3045). + * `Future`s used as an `std::...::Future` could grow in size unbounded if it + was never woken. For those not using async persistence and using the async + `lightning-background-processor`, this could cause a memory leak in the + `ChainMonitor` (#2894). + * Inbound channel requests that fail in + `ChannelManager::accept_inbound_channel` would previously have stalled from + the peer's perspective as no `error` message was sent (#2953). + * Blinded path construction has been tuned to select paths more likely to + succeed, improving BOLT12 payment reliability (#2911, #2912). + * After a reorg, `lightning-transaction-sync` could have failed to follow a + transaction that LDK needed information about (#2946). + * `RecipientOnionFields`' `custom_tlvs` are now propagated to recipients when + paying with blinded paths (#2975). + * `Event::ChannelClosed` is now properly generated and peers are properly + notified for all channels that as a part of a batch channel open fail to be + funded (#3029). + * In cases where user event processing is substantially delayed such that we + complete multiple round-trips with our peers before a `PaymentSent` event is + handled and then restart without persisting the `ChannelManager` after having + persisted a `ChannelMonitor[Update]`, on startup we may have `Err`d trying to + deserialize the `ChannelManager` (#3021). + * If a peer has relatively high latency, `PeerManager` may have failed to + establish a connection (#2993). + * `ChannelUpdate` messages broadcasted for our own channel closures are now + slightly more robust (#2731). + * Deserializing malformed BOLT11 invoices may have resulted in an integer + overflow panic in debug builds (#3032). + * In exceedingly rare cases (no cases of this are known), LDK may have created + an invalid serialization for a `ChannelManager` (#2998). + * Message processing latency handling BOLT12 payments has been reduced (#2881). + * Latency in processing `Event::SpendableOutputs` may be reduced (#3033). + +## Node Compatibility + * LDK's blinded paths were inconsistent with other implementations in several + ways, which have been addressed (#2856, #2936, #2945). + * LDK's messaging blinded paths now support the latest features which some + nodes may begin relying on soon (#2961). + * LDK's BOLT12 structs have been updated to support some last-minute changes to + the spec (#3017, #3018). + * CLN v24.02 requires the `gossip_queries` feature for all peers, however LDK + by default does not set it for those not using a `P2PGossipSync` (e.g. those + using RGS). This change was reverted in CLN v24.02.2 however for now LDK + always sets the `gossip_queries` feature. This change is expected to be + reverted in a future LDK release (#2959). 
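For readers acting on the dust-exposure item under API Updates above: the new `MaxDustHTLCExposure::FeeRateMultiplier(10_000)` default applies to newly created channels, and per the note above existing channels can be moved to it with `ChannelManager::update_channel_config`. A minimal sketch follows; the helper name `apply_new_dust_limit` is ours, while `list_channels`, `update_channel_config` and `MaxDustHTLCExposure::FeeRateMultiplier` are the 0.0.123 APIs referenced above (double-check exact signatures against the release docs).

```rust
use lightning::ln::channelmanager::AChannelManager;
use lightning::util::config::MaxDustHTLCExposure;

/// Applies the 0.0.123 default dust-exposure limit to all currently-open channels.
fn apply_new_dust_limit<CM: AChannelManager>(channel_manager: &CM) {
	let cm = channel_manager.get_cm();
	for chan in cm.list_channels() {
		let mut config = chan.config.unwrap_or_default();
		config.max_dust_htlc_exposure = MaxDustHTLCExposure::FeeRateMultiplier(10_000);
		// Channels which closed concurrently will simply return an error here, which
		// can be ignored for this purpose.
		let _ = cm.update_channel_config(
			&chan.counterparty.node_id, &[chan.channel_id], &config,
		);
	}
}
```

Whether to apply the new limit to existing channels is a policy choice; the release notes above only suggest considering it.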
+
+## Security
+0.0.123 fixes a denial-of-service vulnerability which we believe to be reachable
+from untrusted input when parsing invalid BOLT11 invoices containing non-ASCII
+characters.
+ * BOLT11 invoices with non-ASCII characters in the human-readable-part may
+   cause an out-of-bounds read attempt leading to a panic (#3054). Note that all
+   BOLT11 invoices containing non-ASCII characters are invalid.
+
+In total, this release features 150 files changed, 19307 insertions, 6306
+deletions in 360 commits since 0.0.121 from 17 authors, in alphabetical order:
+
+ * Arik Sosman
+ * Duncan Dean
+ * Elias Rohrer
+ * Evan Feenstra
+ * Jeffrey Czyz
+ * Keyue Bao
+ * Matt Corallo
+ * Orbital
+ * Sergi Delgado Segura
+ * Valentine Wallace
+ * Willem Van Lint
+ * Wilmer Paulino
+ * benthecarman
+ * jbesraa
+ * olegkubrakov
+ * optout
+ * shaavan
+
+
+# 0.0.122 - Apr 09, 2024 - "That Which Is Untested Is Broken"
+
+## Bug Fixes
+ * `Route` objects did not successfully round-trip through de/serialization
+   since LDK 0.0.117, which has now been fixed (#2897).
+ * Correct deserialization of unknown future enum variants. This ensures
+   downgrades from future versions of LDK do not result in read failures or
+   corrupt reads in cases where enums are written (#2969).
+ * When hitting lnd bug 6039, our workaround previously resulted in
+   `ChannelManager` persistences on every round-trip with our peer. These
+   useless persistences are now skipped (#2937).
+
+In total, this release features 4 files changed, 99 insertions, 55
+deletions in 6 commits from 1 author, in alphabetical order:
+ * Matt Corallo
+
+
 # 0.0.121 - Jan 22, 2024 - "Unwraps are Bad"
 
 ## Bug Fixes
@@ -17,6 +174,7 @@ deletions in 4 commits from 2 authors, in alphabetical order:
  * Jeffrey Czyz
  * Matt Corallo
 
+
 # 0.0.120 - Jan 17, 2024 - "Unblinded Fuzzers"
 
 ## API Updates
@@ -65,6 +223,7 @@ deletions in 79 commits from 9 authors, in alphabetical order:
  * optout
  * shuoer86
 
+
 # 0.0.119 - Dec 15, 2023 - "Spring Cleaning for Christmas"
 
 ## API Updates
diff --git a/README.md b/README.md
index f8de40f3193..294ce902270 100644
--- a/README.md
+++ b/README.md
@@ -14,6 +14,199 @@ and networking can be provided by LDK's [sample modules](#crates), or you may pr
 own custom implementations. More information is available in the [`About`](#about)
 section.
 
+Splicing Prototype
+------------------
+
+'Happy Path' PoC for Splicing
+
+Objective and restrictions:
+- Splice-in is supported (increasing channel capacity); splice-out is not
+- Between two LDK instances
+- No quiescence is used/checked
+- Happy path only; no complex combinations, and not all error scenarios are covered
+- Splicing from a V2 channel is supported; splicing from a V1 channel is not. The channel ID is not changed
+- The acceptor does not contribute inputs to the splice
+- It is assumed that all extra inputs belong to the initiator (the full capacity increase is credited to the channel initiator)
+- RBF of a pending splice is not supported; only a single pending splice is supported at a time
+- The is_splice flag on the ChannelReady event is not reliable (it depends on ordering)
+
+Up-to-date with main branch as of v0.0.123 (May 8, 475f736; originally branched off v0.0.115).
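The client-side pieces of this flow (the lines marked "action by client" in the detailed steps below) can be sketched roughly as follows. This is only an illustration against the prototype: `sign_contributed_inputs` is a hypothetical application-side helper, the argument list of `ChannelManager::funding_transaction_signed` is an assumption to be checked against the branch, and only the event variants and fields shown (`FundingTransactionReadyForSigning`, `is_splice` on `ChannelPending`/`ChannelReady`) come from this changeset.

```rust
use bitcoin::Transaction;
use lightning::events::Event;
use lightning::ln::channelmanager::AChannelManager;

/// Hypothetical application-side helper: sign the inputs this node contributed
/// (SIGHASH_ALL) and return the partially signed funding transaction.
fn sign_contributed_inputs(tx: Transaction) -> Transaction { tx }

fn handle_splice_event<CM: AChannelManager>(cm: &CM, event: Event) {
	match event {
		#[cfg(any(dual_funding, splicing))]
		Event::FundingTransactionReadyForSigning {
			channel_id, counterparty_node_id, unsigned_transaction, ..
		} => {
			// Sign only our own contributed inputs; the shared previous-funding input
			// is signed via the interactive-tx session inside LDK.
			let signed_tx = sign_contributed_inputs(unsigned_transaction);
			// Assumed argument list -- check the prototype branch for the exact
			// `funding_transaction_signed` signature.
			let _ = cm.get_cm().funding_transaction_signed(
				&channel_id, &counterparty_node_id, signed_tx,
			);
		},
		Event::ChannelPending { is_splice, .. } if is_splice => {
			// The splice transaction is negotiated and awaiting confirmation.
		},
		Event::ChannelReady { is_splice, .. } if is_splice => {
			// The splice is locked in (note the reliability caveat on is_splice above).
		},
		_ => {},
	}
}
```

The initiator starts the sequence with `ChannelManager::splice_channel` (the first call in the steps below); its parameters are omitted here as they are still evolving on the prototype branch.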
+ +See also `ldk-sample` https://github.com/optout21/ldk-sample/tree/splicing-hapa9-v123 + +To test: `RUSTFLAGS="--cfg=splicing" cargo test -p lightning splic` + +Detailed steps +-------------- +(as of May 9, 536e3b1, splicing-hapa9-v123) + +Client LDK Counterparty node (acceptor) +------ --- ---------------------------- +--> + splice_channel() - ChannelManager API + Do checks, save pending splice parameters + get_splice_init() - Channel + message out: splice_init + --- + message in: splice_init + handle_splice_init() - ChannelManager + internal_splice_init() - ChannelManager + Do checks. Check if channel ID would change. Cycle back the channel to UnfundedInboundV2 + Channel phase to RefundingV2 (inbound pending) + splice_start() -- ChannelContext + Start the splice, update capacity, state to NegotiatingFunding, reset funding transaction + get_splice_ack() -- Channel + begin_interactive_funding_tx_construction() - Channel + begin_interactive_funding_tx_construction() - ChannelContext + Splicing specific: Add the previous funding as an input to the new one. + get_input_of_current_funding() + InteractiveTxConstructor::new() + Start interactive TX negotiation + message out: splice_ack + --- + message in: splice_ack + handle_splice_ack() - ChannelManager + internal_splice_ack() - ChannelManager + Do checks, check against initial splice() + Channel phase to RefundingV2 (outbound pending) + splice_start() -- ChannelContext + Start the splice, update capacity, state to NegotiatingFunding, reset funding transaction + //event: SpliceAckedInputsContributionReady + //contains the pre & post capacities, channel ID +// --- +//event: SpliceAckedInputsContributionReady +//action by client: +//provide extra input(s) for new funding +//--- + //contribute_funding_inputs() - ChannelManager API + begin_interactive_funding_tx_construction() - Channel + begin_interactive_funding_tx_construction() - ChannelContext + Splicing specific: Add the previous funding as an input to the new one. + get_input_of_current_funding() InteractiveTxConstructor::new() + Interactive tx construction flow follows (e.g. 
2 inputs, 2 outputs) + message out: tx_add_input + --- + message out: tx_add_input + handle_tx_add_input() -- ChannelManager + Interactive tx construction flow follows, details omitted on acceptor side + message in: tx_complete + message out: tx_add_input, second + message in: tx_complete + message out: tx_add_output - for change + message in: tx_complete + message out: tx_add_output - for new funding + + message in: tx_complete + handle_tx_complete() - ChannelManager + internal_tx_complete() -- ChannelManager + tx_complete() -- Channel + message out: tx_complete + funding_tx_constructed() -- Channel + get_initial_commitment_signed() - Channel + Splicing-specific: Add signature on the previous funding tx input + Mark finished interactive tx construction + Update channel state to FundingNegotiated + event: FundingTransactionReadyForSigning + contains the new funding transaction with the signature on the previous tx input + message out: commitment_signed (UpdateHTLCs) + channel state: Funded + --- +event: FundingTransactionReadyForSigning +action by client: Create and provide signature on the extra inputs +--- + funding_transaction_signed() - ChannelManager + funding_transaction_signed() - Channel + verify_interactive_tx_signatures() - Channel + splicing specific: Use the previously saved shared signature (tlvs field) + provide_holder_witnesses() - InteractiveTxSigningSession + (assume CP sigs not yet received; funding tx not yet fully signed) + (assume CP sigs not yet received; not yet signing tx_signatures) + --- + message in: tx_complete + handle_tx_complete() - ChannelManager + internal_tx_complete() -- ChannelManager + tx_complete() -- Channel + funding_tx_constructed() -- Channel + get_initial_commitment_signed() - Channel + Splicing-specific: Add signature on the previous funding tx input + Mark finished interactive tx construction + Update channel state to FundingNegotiated + message out: commitment_signed (UpdateHTLCs) + channel state: Funded + --- + message in: commitment_signed (UpdateHTLCs) + handle_commitment_signed() - ChannelManager + internal_commitment_signed() -- ChannelManager + commitment_signed_initial_v2() -- Channel + Update channel state to AwaitingChannelReady + watch_channel() - ChainMonitor + received_commitment_signed() -- InteractiveTxSigningSession + --- + message in: commitment_signed (UpdateHTLCs) (from earlier) + handle_commitment_signed() - ChannelManager + internal_commitment_signed() -- ChannelManager + commitment_signed_initial_v2() -- Channel + Update channel state to AwaitingChannelReady + watch_channel() - ChainMonitor + received_commitment_signed() -- InteractiveTxSigningSession + message out: tx_signatures + --- + message in: tx_signatures + handle_tx_signatures() - ChannelManager + internal_tx_signatures() -- ChannelManager + tx_signatures() -- Channel + Check present signatures, tlvs field, txid match + received_tx_signatures() -- InteractiveTxSigningSession + Update signature on previous tx input (with shared signature) + Update channel state to AwaitingChannelReady + event: ChannelPending + Save funding transaction + message out: tx_signatures + funding transaction is ready, broadcast it + --- +event: ChannelPending +--- + message in: tx_signatures + handle_tx_signatures() - ChannelManager + internal_tx_signatures() -- ChannelManager + tx_signatures() -- Channel + Check present signatures, tlvs field, txid match + received_tx_signatures() -- InteractiveTxSigningSession + Update signature on previous tx input (with shared signature) + Update 
channel state to AwaitingChannelReady + event: ChannelPending + Save funding transaction + funding transaction is ready, broadcast it + --- +New funding tx gets broadcasted (both sides) +Waiting for confirmation + transactions_confirmed() - Channel + mark the interactive tx session as complete + message out: splice_locked + --- + transactions_confirmed() - Channel + mark the interactive tx session as complete + message out: splice_locked + --- + message in: splice_locked + handle_splice_locked() - ChannelManager + internal_splice_locked() - ChannelManager + splice_complete() -- ChannelContext + Mark splicing process as completed + event: ChannelReady + message out: channel_update + --- +event: ChannelReady +--- + message in: splice_locked + handle_splice_locked() - ChannelManager + internal_splice_locked() - ChannelManager + splice_complete() -- ChannelContext + Mark splicing process as completed + event: ChannelReady + message out: channel_update +/end of sequence/ + Status ------ The project implements all of the [BOLT specifications](https://github.com/lightning/bolts), diff --git a/fuzz/src/full_stack.rs b/fuzz/src/full_stack.rs index e128d91810a..d07def30ff9 100644 --- a/fuzz/src/full_stack.rs +++ b/fuzz/src/full_stack.rs @@ -971,6 +971,8 @@ mod tests { // create the funding transaction (client should send funding_created now) ext_from_hex("0a", &mut test); + // Two feerate requests to check the dust exposure on the initial commitment tx + ext_from_hex("00fd00fd", &mut test); // inbound read from peer id 1 of len 18 ext_from_hex("030112", &mut test); @@ -1019,6 +1021,9 @@ mod tests { // end of update_add_htlc from 0 to 1 via client and mac ext_from_hex("ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff ab00000000000000000000000000000000000000000000000000000000000000 03000000000000000000000000000000", &mut test); + // Two feerate requests to check dust exposure + ext_from_hex("00fd00fd", &mut test); + // inbound read from peer id 0 of len 18 ext_from_hex("030012", &mut test); // message header indicating message length 100 @@ -1040,6 +1045,8 @@ mod tests { // process the now-pending HTLC forward ext_from_hex("07", &mut test); + // Two feerate requests to check dust exposure + ext_from_hex("00fd00fd", &mut test); // client now sends id 1 update_add_htlc and commitment_signed (CHECK 7: UpdateHTLCs event for node 03020000 with 1 HTLCs for channel 3f000000) // we respond with commitment_signed then revoke_and_ack (a weird, but valid, order) @@ -1115,6 +1122,9 @@ mod tests { // end of update_add_htlc from 0 to 1 via client and mac ext_from_hex("ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff ab00000000000000000000000000000000000000000000000000000000000000 03000000000000000000000000000000", &mut test); + // Two feerate requests to check dust exposure + ext_from_hex("00fd00fd", &mut test); + // now respond to the update_fulfill_htlc+commitment_signed messages the client sent to peer 0 // inbound read from peer id 0 of len 18 ext_from_hex("030012", &mut test); @@ -1146,6 +1156,10 @@ mod tests { // process the 
now-pending HTLC forward ext_from_hex("07", &mut test); + + // Two feerate requests to check dust exposure + ext_from_hex("00fd00fd", &mut test); + // client now sends id 1 update_add_htlc and commitment_signed (CHECK 7 duplicate) // we respond with revoke_and_ack, then commitment_signed, then update_fail_htlc @@ -1243,6 +1257,9 @@ mod tests { // end of update_add_htlc from 0 to 1 via client and mac ext_from_hex("ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff 5300000000000000000000000000000000000000000000000000000000000000 03000000000000000000000000000000", &mut test); + // Two feerate requests to check dust exposure + ext_from_hex("00fd00fd", &mut test); + // inbound read from peer id 0 of len 18 ext_from_hex("030012", &mut test); // message header indicating message length 164 @@ -1264,6 +1281,8 @@ mod tests { // process the now-pending HTLC forward ext_from_hex("07", &mut test); + // Two feerate requests to check dust exposure + ext_from_hex("00fd00fd", &mut test); // client now sends id 1 update_add_htlc and commitment_signed (CHECK 7 duplicate) // connect a block with one transaction of len 125 diff --git a/fuzz/src/msg_targets/gen_target.sh b/fuzz/src/msg_targets/gen_target.sh index cb24aa919db..e6e94e9e0e9 100755 --- a/fuzz/src/msg_targets/gen_target.sh +++ b/fuzz/src/msg_targets/gen_target.sh @@ -61,6 +61,6 @@ GEN_TEST lightning::ln::msgs::TxAbort test_msg_simple "" GEN_TEST lightning::ln::msgs::Stfu test_msg_simple "" -GEN_TEST lightning::ln::msgs::Splice test_msg_simple "" +GEN_TEST lightning::ln::msgs::SpliceInit test_msg_simple "" GEN_TEST lightning::ln::msgs::SpliceAck test_msg_simple "" GEN_TEST lightning::ln::msgs::SpliceLocked test_msg_simple "" diff --git a/fuzz/src/msg_targets/msg_splice.rs b/fuzz/src/msg_targets/msg_splice.rs index e6a18d2561c..70e083c14c9 100644 --- a/fuzz/src/msg_targets/msg_splice.rs +++ b/fuzz/src/msg_targets/msg_splice.rs @@ -15,11 +15,11 @@ use crate::utils::test_logger; #[inline] pub fn msg_splice_test(data: &[u8], _out: Out) { - test_msg_simple!(lightning::ln::msgs::Splice, data); + test_msg_simple!(lightning::ln::msgs::SpliceInit, data); } #[no_mangle] pub extern "C" fn msg_splice_run(data: *const u8, datalen: usize) { let data = unsafe { std::slice::from_raw_parts(data, datalen) }; - test_msg_simple!(lightning::ln::msgs::Splice, data); + test_msg_simple!(lightning::ln::msgs::SpliceInit, data); } diff --git a/lightning-background-processor/Cargo.toml b/lightning-background-processor/Cargo.toml index 5148bb81ea4..b6fe9c4bfe7 100644 --- a/lightning-background-processor/Cargo.toml +++ b/lightning-background-processor/Cargo.toml @@ -1,6 +1,6 @@ [package] name = "lightning-background-processor" -version = "0.0.123-beta" +version = "0.0.123" authors = ["Valentine Wallace "] license = "MIT OR Apache-2.0" repository = "https://github.com/lightningdevkit/rust-lightning" @@ -22,11 +22,11 @@ default = ["std"] [dependencies] bitcoin = { version = "0.30.2", default-features = false } -lightning = { version = "0.0.123-beta", path = "../lightning", default-features = false } -lightning-rapid-gossip-sync = { version = "0.0.123-beta", path = "../lightning-rapid-gossip-sync", default-features = false } +lightning = { version = "0.0.123", path = "../lightning", default-features = false } +lightning-rapid-gossip-sync = { 
version = "0.0.123", path = "../lightning-rapid-gossip-sync", default-features = false } [dev-dependencies] tokio = { version = "1.35", features = [ "macros", "rt", "rt-multi-thread", "sync", "time" ] } -lightning = { version = "0.0.123-beta", path = "../lightning", features = ["_test_utils"] } -lightning-invoice = { version = "0.31.0-beta", path = "../lightning-invoice" } -lightning-persister = { version = "0.0.123-beta", path = "../lightning-persister" } +lightning = { version = "0.0.123", path = "../lightning", features = ["_test_utils"] } +lightning-invoice = { version = "0.31.0", path = "../lightning-invoice" } +lightning-persister = { version = "0.0.123", path = "../lightning-persister" } diff --git a/lightning-block-sync/Cargo.toml b/lightning-block-sync/Cargo.toml index c55281f6ccc..e9d5c569766 100644 --- a/lightning-block-sync/Cargo.toml +++ b/lightning-block-sync/Cargo.toml @@ -1,6 +1,6 @@ [package] name = "lightning-block-sync" -version = "0.0.123-beta" +version = "0.0.123" authors = ["Jeffrey Czyz", "Matt Corallo"] license = "MIT OR Apache-2.0" repository = "https://github.com/lightningdevkit/rust-lightning" @@ -20,11 +20,11 @@ rpc-client = [ "serde_json", "chunked_transfer" ] [dependencies] bitcoin = "0.30.2" hex = { package = "hex-conservative", version = "0.1.1", default-features = false } -lightning = { version = "0.0.123-beta", path = "../lightning" } +lightning = { version = "0.0.123", path = "../lightning" } tokio = { version = "1.35", features = [ "io-util", "net", "time", "rt" ], optional = true } serde_json = { version = "1.0", optional = true } chunked_transfer = { version = "1.4", optional = true } [dev-dependencies] -lightning = { version = "0.0.123-beta", path = "../lightning", features = ["_test_utils"] } +lightning = { version = "0.0.123", path = "../lightning", features = ["_test_utils"] } tokio = { version = "1.35", features = [ "macros", "rt" ] } diff --git a/lightning-custom-message/Cargo.toml b/lightning-custom-message/Cargo.toml index 6b150280b60..0ef2213d2b1 100644 --- a/lightning-custom-message/Cargo.toml +++ b/lightning-custom-message/Cargo.toml @@ -1,6 +1,6 @@ [package] name = "lightning-custom-message" -version = "0.0.123-beta" +version = "0.0.123" authors = ["Jeffrey Czyz"] license = "MIT OR Apache-2.0" repository = "https://github.com/lightningdevkit/rust-lightning" @@ -15,4 +15,4 @@ rustdoc-args = ["--cfg", "docsrs"] [dependencies] bitcoin = "0.30.2" -lightning = { version = "0.0.123-beta", path = "../lightning" } +lightning = { version = "0.0.123", path = "../lightning" } diff --git a/lightning-invoice/Cargo.toml b/lightning-invoice/Cargo.toml index 1ebf67be1ad..33a6a0d6a54 100644 --- a/lightning-invoice/Cargo.toml +++ b/lightning-invoice/Cargo.toml @@ -1,7 +1,7 @@ [package] name = "lightning-invoice" description = "Data structures to parse and serialize BOLT11 lightning invoices" -version = "0.31.0-beta" +version = "0.31.0" authors = ["Sebastian Geisler "] documentation = "https://docs.rs/lightning-invoice/" license = "MIT OR Apache-2.0" @@ -21,13 +21,13 @@ std = ["bitcoin/std", "lightning/std", "bech32/std"] [dependencies] bech32 = { version = "0.9.0", default-features = false } -lightning = { version = "0.0.123-beta", path = "../lightning", default-features = false } +lightning = { version = "0.0.123", path = "../lightning", default-features = false } secp256k1 = { version = "0.27.0", default-features = false, features = ["recovery", "alloc"] } serde = { version = "1.0.118", optional = true } bitcoin = { version = "0.30.2", default-features = 
false } [dev-dependencies] -lightning = { version = "0.0.123-beta", path = "../lightning", default-features = false, features = ["_test_utils"] } +lightning = { version = "0.0.123", path = "../lightning", default-features = false, features = ["_test_utils"] } hex = { package = "hex-conservative", version = "0.1.1", default-features = false } serde_json = { version = "1"} hashbrown = { version = "0.13", default-features = false } diff --git a/lightning-invoice/src/de.rs b/lightning-invoice/src/de.rs index 674518272d0..381c7b645f9 100644 --- a/lightning-invoice/src/de.rs +++ b/lightning-invoice/src/de.rs @@ -43,7 +43,11 @@ mod hrp_sm { } impl States { - fn next_state(&self, read_symbol: char) -> Result { + fn next_state(&self, read_byte: u8) -> Result { + let read_symbol = match char::from_u32(read_byte.into()) { + Some(symb) if symb.is_ascii() => symb, + _ => return Err(super::Bolt11ParseError::MalformedHRP), + }; match *self { States::Start => { if read_symbol == 'l' { @@ -119,7 +123,7 @@ mod hrp_sm { *range = Some(new_range); } - fn step(&mut self, c: char) -> Result<(), super::Bolt11ParseError> { + fn step(&mut self, c: u8) -> Result<(), super::Bolt11ParseError> { let next_state = self.state.next_state(c)?; match next_state { States::ParseCurrencyPrefix => { @@ -158,7 +162,7 @@ mod hrp_sm { pub fn parse_hrp(input: &str) -> Result<(&str, &str, &str), super::Bolt11ParseError> { let mut sm = StateMachine::new(); - for c in input.chars() { + for c in input.bytes() { sm.step(c)?; } diff --git a/lightning-invoice/src/lib.rs b/lightning-invoice/src/lib.rs index 920d44b1561..fb34240bee1 100644 --- a/lightning-invoice/src/lib.rs +++ b/lightning-invoice/src/lib.rs @@ -577,7 +577,13 @@ impl Self { - let amount = amount_msat * 10; // Invoices are denominated in "pico BTC" + let amount = match amount_msat.checked_mul(10) { // Invoices are denominated in "pico BTC" + Some(amt) => amt, + None => { + self.error = Some(CreationError::InvalidAmount); + return self + } + }; let biggest_possible_si_prefix = SiPrefix::values_desc() .iter() .find(|prefix| amount % prefix.multiplier() == 0) diff --git a/lightning-net-tokio/Cargo.toml b/lightning-net-tokio/Cargo.toml index 0ab9f82f527..478986bb837 100644 --- a/lightning-net-tokio/Cargo.toml +++ b/lightning-net-tokio/Cargo.toml @@ -1,6 +1,6 @@ [package] name = "lightning-net-tokio" -version = "0.0.123-beta" +version = "0.0.123" authors = ["Matt Corallo"] license = "MIT OR Apache-2.0" repository = "https://github.com/lightningdevkit/rust-lightning/" @@ -16,9 +16,9 @@ rustdoc-args = ["--cfg", "docsrs"] [dependencies] bitcoin = "0.30.2" -lightning = { version = "0.0.123-beta", path = "../lightning" } +lightning = { version = "0.0.123", path = "../lightning" } tokio = { version = "1.35", features = [ "rt", "sync", "net", "time" ] } [dev-dependencies] tokio = { version = "1.35", features = [ "macros", "rt", "rt-multi-thread", "sync", "net", "time" ] } -lightning = { version = "0.0.123-beta", path = "../lightning", features = ["_test_utils"] } +lightning = { version = "0.0.123", path = "../lightning", features = ["_test_utils"] } diff --git a/lightning-net-tokio/src/lib.rs b/lightning-net-tokio/src/lib.rs index 6d001ca67fd..af4b3e11ebb 100644 --- a/lightning-net-tokio/src/lib.rs +++ b/lightning-net-tokio/src/lib.rs @@ -621,23 +621,34 @@ mod tests { fn handle_update_fee(&self, _their_node_id: &PublicKey, _msg: &UpdateFee) {} fn handle_announcement_signatures(&self, _their_node_id: &PublicKey, _msg: &AnnouncementSignatures) {} fn handle_channel_update(&self, 
_their_node_id: &PublicKey, _msg: &ChannelUpdate) {} + #[cfg(any(dual_funding, splicing))] fn handle_open_channel_v2(&self, _their_node_id: &PublicKey, _msg: &OpenChannelV2) {} + #[cfg(any(dual_funding, splicing))] fn handle_accept_channel_v2(&self, _their_node_id: &PublicKey, _msg: &AcceptChannelV2) {} fn handle_stfu(&self, _their_node_id: &PublicKey, _msg: &Stfu) {} #[cfg(splicing)] - fn handle_splice(&self, _their_node_id: &PublicKey, _msg: &Splice) {} + fn handle_splice_init(&self, _their_node_id: &PublicKey, _msg: &SpliceInit) {} #[cfg(splicing)] fn handle_splice_ack(&self, _their_node_id: &PublicKey, _msg: &SpliceAck) {} #[cfg(splicing)] fn handle_splice_locked(&self, _their_node_id: &PublicKey, _msg: &SpliceLocked) {} + #[cfg(any(dual_funding, splicing))] fn handle_tx_add_input(&self, _their_node_id: &PublicKey, _msg: &TxAddInput) {} + #[cfg(any(dual_funding, splicing))] fn handle_tx_add_output(&self, _their_node_id: &PublicKey, _msg: &TxAddOutput) {} + #[cfg(any(dual_funding, splicing))] fn handle_tx_remove_input(&self, _their_node_id: &PublicKey, _msg: &TxRemoveInput) {} + #[cfg(any(dual_funding, splicing))] fn handle_tx_remove_output(&self, _their_node_id: &PublicKey, _msg: &TxRemoveOutput) {} + #[cfg(any(dual_funding, splicing))] fn handle_tx_complete(&self, _their_node_id: &PublicKey, _msg: &TxComplete) {} + #[cfg(any(dual_funding, splicing))] fn handle_tx_signatures(&self, _their_node_id: &PublicKey, _msg: &TxSignatures) {} + #[cfg(any(dual_funding, splicing))] fn handle_tx_init_rbf(&self, _their_node_id: &PublicKey, _msg: &TxInitRbf) {} + #[cfg(any(dual_funding, splicing))] fn handle_tx_ack_rbf(&self, _their_node_id: &PublicKey, _msg: &TxAckRbf) {} + #[cfg(any(dual_funding, splicing))] fn handle_tx_abort(&self, _their_node_id: &PublicKey, _msg: &TxAbort) {} fn peer_disconnected(&self, their_node_id: &PublicKey) { if *their_node_id == self.expected_pubkey { diff --git a/lightning-persister/Cargo.toml b/lightning-persister/Cargo.toml index 9f7aca47a78..a826f8ca51e 100644 --- a/lightning-persister/Cargo.toml +++ b/lightning-persister/Cargo.toml @@ -1,6 +1,6 @@ [package] name = "lightning-persister" -version = "0.0.123-beta" +version = "0.0.123" authors = ["Valentine Wallace", "Matt Corallo"] license = "MIT OR Apache-2.0" repository = "https://github.com/lightningdevkit/rust-lightning" @@ -15,7 +15,7 @@ rustdoc-args = ["--cfg", "docsrs"] [dependencies] bitcoin = "0.30.2" -lightning = { version = "0.0.123-beta", path = "../lightning" } +lightning = { version = "0.0.123", path = "../lightning" } [target.'cfg(windows)'.dependencies] windows-sys = { version = "0.48.0", default-features = false, features = ["Win32_Storage_FileSystem", "Win32_Foundation"] } @@ -24,5 +24,5 @@ windows-sys = { version = "0.48.0", default-features = false, features = ["Win32 criterion = { version = "0.4", optional = true, default-features = false } [dev-dependencies] -lightning = { version = "0.0.123-beta", path = "../lightning", features = ["_test_utils"] } +lightning = { version = "0.0.123", path = "../lightning", features = ["_test_utils"] } bitcoin = { version = "0.30.2", default-features = false } diff --git a/lightning-rapid-gossip-sync/Cargo.toml b/lightning-rapid-gossip-sync/Cargo.toml index 02ff3da57b5..b7843a59db6 100644 --- a/lightning-rapid-gossip-sync/Cargo.toml +++ b/lightning-rapid-gossip-sync/Cargo.toml @@ -1,6 +1,6 @@ [package] name = "lightning-rapid-gossip-sync" -version = "0.0.123-beta" +version = "0.0.123" authors = ["Arik Sosman "] license = "MIT OR Apache-2.0" repository = 
"https://github.com/lightningdevkit/rust-lightning" @@ -15,11 +15,11 @@ no-std = ["lightning/no-std"] std = ["lightning/std"] [dependencies] -lightning = { version = "0.0.123-beta", path = "../lightning", default-features = false } +lightning = { version = "0.0.123", path = "../lightning", default-features = false } bitcoin = { version = "0.30.2", default-features = false } [target.'cfg(ldk_bench)'.dependencies] criterion = { version = "0.4", optional = true, default-features = false } [dev-dependencies] -lightning = { version = "0.0.123-beta", path = "../lightning", features = ["_test_utils"] } +lightning = { version = "0.0.123", path = "../lightning", features = ["_test_utils"] } diff --git a/lightning-transaction-sync/Cargo.toml b/lightning-transaction-sync/Cargo.toml index 8bb4958f9fa..9bf2b6e37c1 100644 --- a/lightning-transaction-sync/Cargo.toml +++ b/lightning-transaction-sync/Cargo.toml @@ -1,6 +1,6 @@ [package] name = "lightning-transaction-sync" -version = "0.0.123-beta" +version = "0.0.123" authors = ["Elias Rohrer"] license = "MIT OR Apache-2.0" repository = "https://github.com/lightningdevkit/rust-lightning" @@ -23,7 +23,7 @@ electrum = ["electrum-client"] async-interface = [] [dependencies] -lightning = { version = "0.0.123-beta", path = "../lightning", default-features = false, features = ["std"] } +lightning = { version = "0.0.123", path = "../lightning", default-features = false, features = ["std"] } bitcoin = { version = "0.30.2", default-features = false } bdk-macros = "0.6" futures = { version = "0.3", optional = true } @@ -31,7 +31,7 @@ esplora-client = { version = "0.6", default-features = false, optional = true } electrum-client = { version = "0.18.0", optional = true } [dev-dependencies] -lightning = { version = "0.0.123-beta", path = "../lightning", default-features = false, features = ["std", "_test_utils"] } +lightning = { version = "0.0.123", path = "../lightning", default-features = false, features = ["std", "_test_utils"] } tokio = { version = "1.35.0", features = ["full"] } [target.'cfg(all(not(target_os = "windows"), not(no_download)))'.dev-dependencies] diff --git a/lightning/Cargo.toml b/lightning/Cargo.toml index 27f7af93fdb..91c4c76e769 100644 --- a/lightning/Cargo.toml +++ b/lightning/Cargo.toml @@ -1,6 +1,6 @@ [package] name = "lightning" -version = "0.0.123-beta" +version = "0.0.123" authors = ["Matt Corallo"] license = "MIT OR Apache-2.0" repository = "https://github.com/lightningdevkit/rust-lightning/" @@ -43,7 +43,7 @@ default = ["std", "grind_signatures"] bitcoin = { version = "0.30.2", default-features = false, features = ["secp-recovery"] } hashbrown = { version = "0.13", optional = true, default-features = false } -possiblyrandom = { version = "0.1", optional = true, default-features = false } +possiblyrandom = { version = "0.2", optional = true, default-features = false } hex = { package = "hex-conservative", version = "0.1.1", default-features = false } regex = { version = "1.5.6", optional = true } backtrace = { version = "0.3", optional = true } diff --git a/lightning/src/chain/chaininterface.rs b/lightning/src/chain/chaininterface.rs index 2e37127e038..9909e115ed6 100644 --- a/lightning/src/chain/chaininterface.rs +++ b/lightning/src/chain/chaininterface.rs @@ -147,6 +147,10 @@ pub enum ConfirmationTarget { /// /// Note that all of the functions implemented here *must* be reentrant-safe (obviously - they're /// called from inside the library in response to chain events, P2P events, or timer events). 
+/// +/// LDK may generate a substantial number of fee-estimation calls in some cases. You should +/// pre-calculate and cache the fee estimate results to ensure you don't substantially slow HTLC +/// handling. pub trait FeeEstimator { /// Gets estimated satoshis of fee required per 1000 Weight-Units. /// diff --git a/lightning/src/chain/channelmonitor.rs index b8598eabb97..c6170922cca 100644 --- a/lightning/src/chain/channelmonitor.rs +++ b/lightning/src/chain/channelmonitor.rs @@ -1366,6 +1366,7 @@ impl ChannelMonitor { /// The monitor watches for it to be broadcasted and then uses the HTLC information (and /// possibly future revocation/preimage information) to claim outputs where possible. /// We cache also the mapping hash:commitment number to lighten pruning of old preimages by watchtowers. + /// #SPLICING #[cfg(test)] fn provide_latest_counterparty_commitment_tx( &self, @@ -2593,10 +2594,11 @@ impl ChannelMonitorImpl { self.initial_counterparty_commitment_info = Some((their_per_commitment_point.clone(), feerate_per_kw, to_broadcaster_value, to_countersignatory_value)); - #[cfg(debug_assertions)] { - let rebuilt_commitment_tx = self.initial_counterparty_commitment_tx().unwrap(); - debug_assert_eq!(rebuilt_commitment_tx.trust().txid(), txid); - } + // TODO: check whether this assertion can be re-enabled + // #[cfg(debug_assertions)] { + // let rebuilt_commitment_tx = self.initial_counterparty_commitment_tx().unwrap(); + // debug_assert_eq!(rebuilt_commitment_tx.trust().txid(), txid); + // } self.provide_latest_counterparty_commitment_tx(txid, htlc_outputs, commitment_number, their_per_commitment_point, logger); diff --git a/lightning/src/crypto/poly1305.rs index 59320021005..bc2459adb69 100644 --- a/lightning/src/crypto/poly1305.rs +++ b/lightning/src/crypto/poly1305.rs @@ -207,6 +207,7 @@ impl Poly1305 { #[cfg(test)] mod test { use core::iter::repeat; + use alloc::vec::Vec; use super::Poly1305; diff --git a/lightning/src/events/mod.rs index 2a1f698782b..4c4315e4d9e 100644 --- a/lightning/src/events/mod.rs +++ b/lightning/src/events/mod.rs @@ -990,6 +990,10 @@ pub enum Event { /// /// Will be `None` for channels created prior to LDK version 0.0.122. channel_type: Option<ChannelTypeFeatures>, + /// True if the channel is pending as part of a splicing process. + /// Note: for persistence reasons this field is included unconditionally rather than behind #[cfg(splicing)]. + /// TODO: remove this comment once cfg(splicing) is removed. + is_splice: bool, }, /// Used to indicate that a channel with the given `channel_id` is ready to /// be used. This event is emitted either when the funding transaction has been confirmed @@ -1011,6 +1015,10 @@ counterparty_node_id: PublicKey, /// The features that this channel will operate with. channel_type: ChannelTypeFeatures, + /// True if the channel became ready after a splicing process. + /// Note: for persistence reasons this field is included unconditionally rather than behind #[cfg(splicing)]. + /// TODO: remove this comment once cfg(splicing) is removed. + is_splice: bool, }, /// Used to indicate that a channel that got past the initial handshake with the given `channel_id` is in the /// process of closure. This includes previously opened channels, and channels that time out from not being funded. @@ -1114,6 +1122,65 @@ /// /// [`ChannelManager`]: crate::ln::channelmanager::ChannelManager channel_type: ChannelTypeFeatures, }, + /// Indicates a request to open a new dual-funded channel by a peer.
+ /// + /// To accept the request without contributing funds, call [`ChannelManager::accept_inbound_channel`]. + /// To accept the request and contribute funds, call [`ChannelManager::accept_inbound_channel_with_contribution`]. + /// To reject the request, call [`ChannelManager::force_close_without_broadcasting_txn`]. + /// + /// The event is always triggered when a new open channel request is received for a dual-funded + /// channel, regardless of the value of the [`UserConfig::manually_accept_inbound_channels`] + /// config flag. This is so that funding inputs can be manually provided to contribute to the + /// overall channel capacity on the acceptor side. + /// + /// [`ChannelManager::accept_inbound_channel`]: crate::ln::channelmanager::ChannelManager::accept_inbound_channel + /// [`ChannelManager::accept_inbound_channel_with_contribution`]: crate::ln::channelmanager::ChannelManager::accept_inbound_channel_with_contribution + /// [`ChannelManager::force_close_without_broadcasting_txn`]: crate::ln::channelmanager::ChannelManager::force_close_without_broadcasting_txn + /// [`UserConfig::manually_accept_inbound_channels`]: crate::util::config::UserConfig::manually_accept_inbound_channels + #[cfg(any(dual_funding, splicing))] + OpenChannelV2Request { + /// The temporary channel ID of the channel requested to be opened. + /// + /// When responding to the request, the `temporary_channel_id` should be passed + /// back to the ChannelManager through [`ChannelManager::accept_inbound_channel`] or + /// [`ChannelManager::accept_inbound_channel_with_contribution`] to accept, or through + /// [`ChannelManager::force_close_without_broadcasting_txn`] to reject. + /// + /// [`ChannelManager::accept_inbound_channel`]: crate::ln::channelmanager::ChannelManager::accept_inbound_channel + /// [`ChannelManager::accept_inbound_channel_with_contribution`]: crate::ln::channelmanager::ChannelManager::accept_inbound_channel_with_contribution + /// [`ChannelManager::force_close_without_broadcasting_txn`]: crate::ln::channelmanager::ChannelManager::force_close_without_broadcasting_txn + temporary_channel_id: ChannelId, + /// The node_id of the counterparty requesting to open the channel. + /// + /// When responding to the request, the `counterparty_node_id` should be passed + /// back to the ChannelManager through [`ChannelManager::accept_inbound_channel`] or + /// [`ChannelManager::accept_inbound_channel_with_contribution`] to accept, or through + /// [`ChannelManager::force_close_without_broadcasting_txn`] to reject the request. + /// + /// [`ChannelManager::accept_inbound_channel`]: crate::ln::channelmanager::ChannelManager::accept_inbound_channel + /// [`ChannelManager::accept_inbound_channel_with_contribution`]: crate::ln::channelmanager::ChannelManager::accept_inbound_channel_with_contribution + /// [`ChannelManager::force_close_without_broadcasting_txn`]: crate::ln::channelmanager::ChannelManager::force_close_without_broadcasting_txn + counterparty_node_id: PublicKey, + /// The counterparty's contribution to the channel value in satoshis. + counterparty_funding_satoshis: u64, + /// The features that this channel will operate with. If you reject the channel, a + /// well-behaved counterparty may automatically re-attempt the channel with a new set of + /// feature flags. + /// + /// Note that if [`ChannelTypeFeatures::supports_scid_privacy`] returns true on this type, + /// the resulting [`ChannelManager`] will not be readable by versions of LDK prior to + /// 0.0.106. 
+ /// + /// Furthermore, note that if [`ChannelTypeFeatures::supports_zero_conf`] returns true on this type, + /// the resulting [`ChannelManager`] will not be readable by versions of LDK prior to + /// 0.0.107. + /// + /// NOTE: Zero-conf dual-funded channels are not currently accepted. + // TODO(dual_funding): Support zero-conf channels. + /// + /// [`ChannelManager`]: crate::ln::channelmanager::ChannelManager + channel_type: ChannelTypeFeatures, + }, /// Indicates that the HTLC was accepted, but could not be processed when or after attempting to /// forward it. /// @@ -1141,6 +1208,123 @@ pub enum Event { /// /// [`ChannelHandshakeConfig::negotiate_anchors_zero_fee_htlc_tx`]: crate::util::config::ChannelHandshakeConfig::negotiate_anchors_zero_fee_htlc_tx BumpTransaction(BumpTransactionEvent), + /* Note: FundingInputsContributionReady event has been abandoned + /// Used to indicate that the client should provide inputs to fund a dual-funded channel using + /// interactive transaction construction by calling [`ChannelManager::contribute_funding_inputs`]. + /// Generated in [`ChannelManager`] message handling. + /// Note that *all inputs* contributed must spend SegWit outputs or your counterparty can steal + /// your funds! + /// + /// [`ChannelManager`]: crate::ln::channelmanager::ChannelManager + /// [`ChannelManager::contribute_funding_inputs`]: crate::ln::channelmanager::ChannelManager::contribute_funding_inputs + #[cfg(any(dual_funding, splicing))] + FundingInputsContributionReady { + /// The channel_id of the channel that requires funding inputs which you'll need to pass into + /// [`ChannelManager::contribute_funding_inputs`]. + /// + /// [`ChannelManager::contribute_funding_inputs`]: crate::ln::channelmanager::ChannelManager::contribute_funding_inputs + channel_id: ChannelId, + /// The counterparty's node_id, which you'll need to pass back into + /// [`ChannelManager::contribute_funding_inputs`]. + /// + /// [`ChannelManager::contribute_funding_inputs`]: crate::ln::channelmanager::ChannelManager::contribute_funding_inputs + counterparty_node_id: PublicKey, + /// The value, in satoshis, that we commited to contribute to the channel value during + /// establishment. + holder_funding_satoshis: u64, + /// The value, in satoshis, that the counterparty commited to contribute to the channel value + /// during channel establishment. + counterparty_funding_satoshis: u64, + /// TODO(dual_funding): Update docs + /// The `user_channel_id` value passed in to [`ChannelManager::create_channel`] for outbound + /// channels, or to [`ChannelManager::accept_inbound_channel`] for inbound channels if + /// [`UserConfig::manually_accept_inbound_channels`] config flag is set to true. Otherwise + /// `user_channel_id` will be randomized for an inbound channel. This may be zero for objects + /// serialized with LDK versions prior to 0.0.113. + /// + /// [`ChannelManager::create_channel`]: crate::ln::channelmanager::ChannelManager::create_channel + /// [`ChannelManager::accept_inbound_channel`]: crate::ln::channelmanager::ChannelManager::accept_inbound_channel + /// [`UserConfig::manually_accept_inbound_channels`]: crate::util::config::UserConfig::manually_accept_inbound_channels + user_channel_id: u128, + }, + */ + /// Indicates that a transaction constructed via interactive transaction construction for a + /// dual-funded (V2) channel is ready to be signed by the client. This event will only be triggered + /// if at least one input was contributed by the holder. 
+ /// + /// The transaction contains all inputs provided by both parties when the channel was + /// created/accepted along with the channel's funding output and a change output if applicable. + /// + /// No part of the transaction should be changed before signing as the content of the transaction + /// has already been negotiated with the counterparty. + /// + /// Each signature MUST use the SIGHASH_ALL flag to avoid invalidation of initial commitment and + /// hence possible loss of funds. + /// + /// After signing, call [`ChannelManager::funding_transaction_signed`] with the (partially) signed + /// funding transaction. + /// + /// Generated in [`ChannelManager`] message handling. + /// + /// [`ChannelManager`]: crate::ln::channelmanager::ChannelManager + /// [`ChannelManager::funding_transaction_signed`]: crate::ln::channelmanager::ChannelManager::funding_transaction_signed + #[cfg(any(dual_funding, splicing))] + FundingTransactionReadyForSigning { + /// The channel_id of the V2 channel which you'll need to pass back into + /// [`ChannelManager::funding_transaction_signed`]. + /// + /// [`ChannelManager::funding_transaction_signed`]: crate::ln::channelmanager::ChannelManager::funding_transaction_signed + channel_id: ChannelId, + /// The counterparty's node_id, which you'll need to pass back into + /// [`ChannelManager::funding_transaction_signed`]. + /// + /// [`ChannelManager::funding_transaction_signed`]: crate::ln::channelmanager::ChannelManager::funding_transaction_signed + counterparty_node_id: PublicKey, + /// The `user_channel_id` value passed in to [`ChannelManager::create_dual_funded_channel`] for outbound + /// channels, or to [`ChannelManager::accept_inbound_channel`] or [`ChannelManager::accept_inbound_channel_with_contribution`] + /// for inbound channels if [`UserConfig::manually_accept_inbound_channels`] config flag is set to true. + /// Otherwise `user_channel_id` will be randomized for an inbound channel. + /// This may be zero for objects serialized with LDK versions prior to 0.0.113. + /// + /// [`ChannelManager::create_dual_funded_channel`]: crate::ln::channelmanager::ChannelManager::create_dual_funded_channel + /// [`ChannelManager::accept_inbound_channel`]: crate::ln::channelmanager::ChannelManager::accept_inbound_channel + /// [`ChannelManager::accept_inbound_channel_with_contribution`]: crate::ln::channelmanager::ChannelManager::accept_inbound_channel_with_contribution + /// [`UserConfig::manually_accept_inbound_channels`]: crate::util::config::UserConfig::manually_accept_inbound_channels + user_channel_id: u128, + /// The unsigned transaction to be signed and passed back to + /// [`ChannelManager::funding_transaction_signed`]. + /// + /// [`ChannelManager::funding_transaction_signed`]: crate::ln::channelmanager::ChannelManager::funding_transaction_signed + unsigned_transaction: Transaction, + } + /* Note: SpliceAckedInputsContributionReady is no longer used + /// #SPLICING + /// Indicates that the splice negotiation is done, `splice_ack` msg was received, and interactive transaction negotiation can start. + /// Similar to FundingInputsContributionReady + /// TODO Change name, this should come after tx negotiation, maybe not needed in this form + #[cfg(splicing)] + SpliceAckedInputsContributionReady { + /// The channel_id of the channel that requires funding inputs which you'll need to pass into + /// [`ChannelManager::contribute_funding_inputs`]. 
+ /// + /// [`ChannelManager::contribute_funding_inputs`]: crate::ln::channelmanager::ChannelManager::contribute_funding_inputs + channel_id: ChannelId, + /// The counterparty's node_id, which you'll need to pass back into + /// [`ChannelManager::contribute_funding_inputs`]. + /// + /// [`ChannelManager::contribute_funding_inputs`]: crate::ln::channelmanager::ChannelManager::contribute_funding_inputs + counterparty_node_id: PublicKey, + /// The pre-splice channel value, in satoshis. + pre_channel_value_satoshis: u64, + /// The post-splice channel value, in satoshis. + post_channel_value_satoshis: u64, + /// The value, in satoshis, that we commited to contribute to the new funding transaction during splicing. + holder_funding_satoshis: u64, + /// The value, in satoshis, that the counterparty commited to contribute to the new funding transaction + /// during splicing. + counterparty_funding_satoshis: u64, + } + */ } impl Writeable for Event { @@ -1371,18 +1555,19 @@ impl Writeable for Event { } write_tlv_fields!(writer, {}); // Write a length field for forwards compat } - &Event::ChannelReady { ref channel_id, ref user_channel_id, ref counterparty_node_id, ref channel_type } => { + &Event::ChannelReady { ref channel_id, ref user_channel_id, ref counterparty_node_id, ref channel_type, ref is_splice } => { 29u8.write(writer)?; write_tlv_fields!(writer, { (0, channel_id, required), (2, user_channel_id, required), (4, counterparty_node_id, required), (6, channel_type, required), + (8, is_splice, required), }); }, &Event::ChannelPending { ref channel_id, ref user_channel_id, ref former_temporary_channel_id, ref counterparty_node_id, ref funding_txo, - ref channel_type + ref channel_type, ref is_splice, } => { 31u8.write(writer)?; write_tlv_fields!(writer, { @@ -1392,6 +1577,7 @@ impl Writeable for Event { (4, former_temporary_channel_id, required), (6, counterparty_node_id, required), (8, funding_txo, required), + (10, is_splice, required), }); }, &Event::InvoiceRequestFailed { ref payment_id } => { @@ -1400,10 +1586,47 @@ impl Writeable for Event { (0, payment_id, required), }) }, + /* + // #SPLICING + #[cfg(splicing)] + &Event::SpliceAckedInputsContributionReady { ref channel_id, ref counterparty_node_id, ref pre_channel_value_satoshis, ref post_channel_value_satoshis, ref holder_funding_satoshis, ref counterparty_funding_satoshis } => { + 33u8.write(writer)?; // TODO value + write_tlv_fields!(writer, { + (0, channel_id, required), + (2, counterparty_node_id, required), + (4, pre_channel_value_satoshis, required), + (6, post_channel_value_satoshis, required), + (8, holder_funding_satoshis, required), + (10, counterparty_funding_satoshis, required), + }); + }, + */ &Event::ConnectionNeeded { .. } => { 35u8.write(writer)?; // Never write ConnectionNeeded events as buffered onion messages aren't serialized. }, + // #[cfg(any(dual_funding, splicing))] + // &Event::FundingInputsContributionReady { .. } => { + // 37u8.write(writer)?; + // // We never write out FundingInputsContributionReady events as, upon disconnection, peers + // // drop any channels which have not yet exchanged the initial commitment_signed in V2 channel + // // establishment. + // }, + #[cfg(any(dual_funding, splicing))] + &Event::OpenChannelV2Request { .. } => { + 37u8.write(writer)?; + // We never write the OpenChannelV2Request events as, upon disconnection, peers + // drop any channels which have not yet completed any interactive funding transaction + // construction. 
+ }, + #[cfg(any(dual_funding, splicing))] + &Event::FundingTransactionReadyForSigning { .. } => { + 39u8.write(writer)?; + // We never write out FundingTransactionReadyForSigning events as, upon disconnection, peers + // drop any V2-established channels which have not yet exchanged the initial `commitment_signed`. + // We only exhange the initial `commitment_signed` after the client calls + // `ChannelManager::funding_transaction_signed` and ALWAYS before we send a `tx_signatures` + }, // Note that, going forward, all new events must only write data inside of // `write_tlv_fields`. Versions 0.0.101+ will ignore odd-numbered events that write // data via `write_tlv_fields`. @@ -1757,18 +1980,21 @@ impl MaybeReadable for Event { let mut user_channel_id: u128 = 0; let mut counterparty_node_id = RequiredWrapper(None); let mut channel_type = RequiredWrapper(None); + let mut is_splice = false; read_tlv_fields!(reader, { (0, channel_id, required), (2, user_channel_id, required), (4, counterparty_node_id, required), (6, channel_type, required), + (8, is_splice, required), }); Ok(Some(Event::ChannelReady { channel_id, user_channel_id, counterparty_node_id: counterparty_node_id.0.unwrap(), - channel_type: channel_type.0.unwrap() + channel_type: channel_type.0.unwrap(), + is_splice, })) }; f() @@ -1781,6 +2007,7 @@ impl MaybeReadable for Event { let mut counterparty_node_id = RequiredWrapper(None); let mut funding_txo = RequiredWrapper(None); let mut channel_type = None; + let mut is_splice = false; read_tlv_fields!(reader, { (0, channel_id, required), (1, channel_type, option), @@ -1788,6 +2015,7 @@ impl MaybeReadable for Event { (4, former_temporary_channel_id, required), (6, counterparty_node_id, required), (8, funding_txo, required), + (10, is_splice, required), }); Ok(Some(Event::ChannelPending { @@ -1797,6 +2025,7 @@ impl MaybeReadable for Event { counterparty_node_id: counterparty_node_id.0.unwrap(), funding_txo: funding_txo.0.unwrap(), channel_type, + is_splice, })) }; f() @@ -1892,12 +2121,12 @@ pub enum MessageSendEvent { /// The message which should be sent. msg: msgs::Stfu, }, - /// Used to indicate that a splice message should be sent to the peer with the given node id. - SendSplice { + /// Used to indicate that a splice_init message should be sent to the peer with the given node id. + SendSpliceInit { /// The node_id of the node which should receive this message node_id: PublicKey, /// The message which should be sent. - msg: msgs::Splice, + msg: msgs::SpliceInit, }, /// Used to indicate that a splice_ack message should be sent to the peer with the given node id. 
SendSpliceAck { diff --git a/lightning/src/ln/chan_utils.rs b/lightning/src/ln/chan_utils.rs index 3c99cdb0943..b95c822ff59 100644 --- a/lightning/src/ln/chan_utils.rs +++ b/lightning/src/ln/chan_utils.rs @@ -300,7 +300,9 @@ impl CounterpartyCommitmentSecrets { for i in 0..pos { let (old_secret, old_idx) = self.old_secrets[i as usize]; if Self::derive_secret(secret, pos, old_idx) != old_secret { - return Err(()); + if old_idx != (1 << 48) && old_secret != [0; 32] { // ignore empty entries from the check + return Err(()); + } } } if self.get_min_seen_secret() <= idx { diff --git a/lightning/src/ln/channel.rs b/lightning/src/ln/channel.rs index afe265c8a6a..dc9ec1b2a13 100644 --- a/lightning/src/ln/channel.rs +++ b/lightning/src/ln/channel.rs @@ -10,8 +10,7 @@ use bitcoin::blockdata::constants::ChainHash; use bitcoin::blockdata::script::{Script, ScriptBuf, Builder}; use bitcoin::blockdata::transaction::Transaction; -use bitcoin::sighash; -use bitcoin::sighash::EcdsaSighashType; +use bitcoin::sighash::{self, EcdsaSighashType}; use bitcoin::consensus::encode; use bitcoin::hashes::Hash; @@ -23,9 +22,17 @@ use bitcoin::secp256k1::constants::PUBLIC_KEY_SIZE; use bitcoin::secp256k1::{PublicKey,SecretKey}; use bitcoin::secp256k1::{Secp256k1,ecdsa::Signature}; use bitcoin::secp256k1; +#[cfg(any(dual_funding, splicing))] +use bitcoin::{TxIn, TxOut, Witness}; +#[cfg(any(dual_funding, splicing))] +use bitcoin::locktime::absolute::LockTime; use crate::ln::types::{ChannelId, PaymentPreimage, PaymentHash}; +#[cfg(splicing)] +use crate::ln::channel_splice::{PendingSpliceInfoPre, PendingSpliceInfoPost, SplicingChannelValues}; use crate::ln::features::{ChannelTypeFeatures, InitFeatures}; +#[cfg(any(dual_funding, splicing))] +use crate::ln::interactivetxs::{ConstructedTransaction, estimate_input_weight, get_output_weight, HandleTxCompleteResult, InteractiveTxConstructor, InteractiveTxSigningSession, InteractiveTxMessageSend, InteractiveTxMessageSendResult, TX_COMMON_FIELDS_WEIGHT}; use crate::ln::msgs; use crate::ln::msgs::DecodeError; use crate::ln::script::{self, ShutdownScript}; @@ -35,13 +42,19 @@ use crate::ln::chan_utils; use crate::ln::onion_utils::HTLCFailReason; use crate::chain::BestBlock; use crate::chain::chaininterface::{FeeEstimator, ConfirmationTarget, LowerBoundedFeeEstimator}; +#[cfg(any(dual_funding, splicing))] +use crate::chain::chaininterface::fee_for_weight; use crate::chain::channelmonitor::{ChannelMonitor, ChannelMonitorUpdate, ChannelMonitorUpdateStep, LATENCY_GRACE_PERIOD_BLOCKS, CLOSED_CHANNEL_UPDATE_ID}; use crate::chain::transaction::{OutPoint, TransactionData}; use crate::sign::ecdsa::{EcdsaChannelSigner, WriteableEcdsaChannelSigner}; use crate::sign::{EntropySource, ChannelSigner, SignerProvider, NodeSigner, Recipient}; use crate::events::ClosureReason; +#[cfg(any(dual_funding, splicing))] +use crate::events::Event; use crate::routing::gossip::NodeId; use crate::util::ser::{Readable, ReadableArgs, Writeable, Writer}; +#[cfg(any(dual_funding, splicing))] +use crate::util::ser::TransactionU16LenLimited; use crate::util::logger::{Logger, Record, WithContext}; use crate::util::errors::APIError; use crate::util::config::{UserConfig, ChannelConfig, LegacyChannelConfig, ChannelHandshakeConfig, ChannelHandshakeLimits, MaxDustHTLCExposure}; @@ -97,6 +110,7 @@ enum FeeUpdateState { Outbound, } +#[derive(Clone)] enum InboundHTLCRemovalReason { FailRelay(msgs::OnionErrorPacket), FailMalformed(([u8; 32], u16)), @@ -131,6 +145,7 @@ impl_writeable_tlv_based_enum!(InboundHTLCResolution, }; ); 
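// A simplified, self-contained sketch of the relaxed consistency check added to
// CounterpartyCommitmentSecrets above: previously any mismatch between a stored secret and the
// secret re-derived from a newer one was fatal; the change additionally tolerates slots that
// were never filled in, which are represented by index `1 << 48` and an all-zero secret.
// `derive_secret` is stubbed out here; the real derivation follows the BOLT 3 per-commitment
// secret tree.
const EMPTY_IDX: u64 = 1 << 48;

fn derive_secret(_newer_secret: [u8; 32], _bits: u8, idx: u64) -> [u8; 32] {
    // Stub: pretend the derived secret depends only on the index.
    let mut s = [0u8; 32];
    s[..8].copy_from_slice(&idx.to_be_bytes());
    s
}

fn check_consistency(stored: &[([u8; 32], u64)], new_secret: [u8; 32], bits: u8) -> Result<(), ()> {
    for &(old_secret, old_idx) in stored {
        if derive_secret(new_secret, bits, old_idx) != old_secret {
            // Ignore slots that are still empty; only a genuine mismatch is an error.
            if old_idx != EMPTY_IDX && old_secret != [0u8; 32] {
                return Err(());
            }
        }
    }
    Ok(())
}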
+#[derive(Clone)] enum InboundHTLCState { /// Offered by remote, to be included in next local commitment tx. I.e., the remote sent an /// update_add_htlc message for this HTLC. @@ -251,6 +266,7 @@ impl_writeable_tlv_based_enum_upgradable!(InboundHTLCStateDetails, (6, AwaitingRemoteRevokeToRemoveFail) => {}; ); +#[derive(Clone)] struct InboundHTLCOutput { htlc_id: u64, amount_msat: u64, @@ -306,7 +322,8 @@ impl_writeable_tlv_based!(InboundHTLCDetails, { (8, is_dust, required), }); -#[cfg_attr(test, derive(Clone, Debug, PartialEq))] +#[derive(Clone)] +#[cfg_attr(test, derive(Debug, PartialEq))] enum OutboundHTLCState { /// Added by us and included in a commitment_signed (if we were AwaitingRemoteRevoke when we /// created it we would have put it in the holding cell instead). When they next revoke_and_ack @@ -431,7 +448,8 @@ impl<'a> Into> for &'a OutboundHTLCOutcome { } } -#[cfg_attr(test, derive(Clone, Debug, PartialEq))] +#[derive(Clone)] +#[cfg_attr(test, derive(Debug, PartialEq))] struct OutboundHTLCOutput { htlc_id: u64, amount_msat: u64, @@ -496,7 +514,8 @@ impl_writeable_tlv_based!(OutboundHTLCDetails, { }); /// See AwaitingRemoteRevoke ChannelState for more info -#[cfg_attr(test, derive(Clone, Debug, PartialEq))] +#[derive(Clone)] +#[cfg_attr(test, derive(Debug, PartialEq))] enum HTLCUpdateAwaitingACK { AddHTLC { // TODO: Time out if we're getting close to cltv_expiry // always outbound @@ -646,6 +665,7 @@ mod state_flags { pub const LOCAL_SHUTDOWN_SENT: u32 = 1 << 11; pub const SHUTDOWN_COMPLETE: u32 = 1 << 12; pub const WAITING_FOR_BATCH: u32 = 1 << 13; + pub const IS_SPLICE: u32 = 1 << 14; } define_state_flags!( @@ -692,7 +712,9 @@ define_state_flags!( ("Indicates the channel was funded in a batch and the broadcast of the funding transaction \ is being held until all channels in the batch have received `funding_signed` and have \ their monitors persisted.", WAITING_FOR_BATCH, state_flags::WAITING_FOR_BATCH, - is_waiting_for_batch, set_waiting_for_batch, clear_waiting_for_batch) + is_waiting_for_batch, set_waiting_for_batch, clear_waiting_for_batch), + ("Indicates that the channel funding changes as part of a splicing process", + IS_SPLICE, state_flags::IS_SPLICE, is_splice, set_splice, clear_splice) ] ); @@ -720,6 +742,7 @@ enum ChannelState { FundingNegotiated, /// We've received/sent `funding_created` and `funding_signed` and are thus now waiting on the /// funding transaction to confirm. + /// Also used in case of splicing (with splicing flag set) AwaitingChannelReady(AwaitingChannelReadyFlags), /// Both we and our counterparty consider the funding transaction confirmed and the channel is /// now operational. @@ -973,7 +996,7 @@ pub(super) enum ChannelUpdateStatus { } /// We track when we sent an `AnnouncementSignatures` to our peer in a few states, described here. -#[derive(PartialEq)] +#[derive(Clone, Copy, PartialEq)] pub enum AnnouncementSigsState { /// We have not sent our peer an `AnnouncementSignatures` yet, or our peer disconnected since /// we sent the last `AnnouncementSignatures`. @@ -998,14 +1021,16 @@ enum HTLCInitiator { RemoteOffered, } -/// An enum gathering stats on pending HTLCs, either inbound or outbound side. +/// Current counts of various HTLCs, useful for calculating current balances available exactly. 
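// A small sketch of the new IS_SPLICE state flag introduced above (bit 14 of the
// AwaitingChannelReady flags): the generated is_splice / set_splice / clear_splice helpers boil
// down to plain bit operations like these. The wrapper struct is illustrative only.
const IS_SPLICE: u32 = 1 << 14;

#[derive(Default)]
struct ToyStateFlags(u32);

impl ToyStateFlags {
    fn is_splice(&self) -> bool { self.0 & IS_SPLICE != 0 }
    fn set_splice(&mut self) { self.0 |= IS_SPLICE; }
    fn clear_splice(&mut self) { self.0 &= !IS_SPLICE; }
}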
struct HTLCStats { - pending_htlcs: u32, - pending_htlcs_value_msat: u64, + pending_inbound_htlcs: usize, + pending_outbound_htlcs: usize, + pending_inbound_htlcs_value_msat: u64, + pending_outbound_htlcs_value_msat: u64, on_counterparty_tx_dust_exposure_msat: u64, on_holder_tx_dust_exposure_msat: u64, - holding_cell_msat: u64, - on_holder_tx_holding_cell_htlcs_count: u32, // dust HTLCs *non*-included + outbound_holding_cell_msat: u64, + on_holder_tx_outbound_holding_cell_htlcs_count: u32, // dust HTLCs *non*-included } /// An enum gathering stats on commitment transaction, either local or remote. @@ -1176,6 +1201,7 @@ pub(crate) const UNFUNDED_CHANNEL_AGE_LIMIT_TICKS: usize = 60; /// Number of blocks needed for an output from a coinbase transaction to be spendable. pub(crate) const COINBASE_MATURITY: u32 = 100; +#[derive(Clone)] struct PendingChannelMonitorUpdate { update: ChannelMonitorUpdate, } @@ -1184,6 +1210,395 @@ impl_writeable_tlv_based!(PendingChannelMonitorUpdate, { (0, update, required), }); +/// An enum for a negotiating V2 channel +/// Note: OutboundV2Channel and InboundV2Channel should be merged into one struct, with an `is_outbound` flag. +/// This is the placeholder for it here. +#[cfg(any(dual_funding, splicing))] +pub(super) enum V2Channel where SP::Target: SignerProvider { + UnfundedOutboundV2(OutboundV2Channel), + UnfundedInboundV2(InboundV2Channel), +} + +/// #SPLICING ChannelVariants +/// Can hold: +/// - one or more funded channel (confirmed or not), and +/// - one optional negotiating channel, outbound or inbound +/// Used is several phases: +/// - Funded & confirmed V1 -- one channel +/// - Funded & confirmed V2 -- one channel +/// - Funded & pending V1 -- one channel +/// - Funded & pending V2 no RBF -- one channel +/// - Funded & pending V2 with some RBF -- several channels +/// - Funded & pending V2 with some RBF, and one RBF negotiating +/// -- several channels, one channel with negotiating context +/// TODO: separate Funded(ChannelVariants) out into two different phases: +/// - Confirmed(Channel) and +/// - NegotiatingV2(ChannelVariants) +// #[cfg(any(dual_funding, splicing))] +pub(super) struct ChannelVariants where SP::Target: SignerProvider { + // TODO locking + funded_channels: Vec>, + #[cfg(any(dual_funding, splicing))] + unfunded_channel: Option>, +} + +// #[cfg(any(dual_funding, splicing))] +impl ChannelVariants where SP::Target: SignerProvider { + pub fn new(funded_channel: Channel) -> Self { + Self { + funded_channels: vec![funded_channel], + #[cfg(any(dual_funding, splicing))] + unfunded_channel: None, + } + } + + #[cfg(any(dual_funding, splicing))] + pub fn new_with_pending(pending_channel: V2Channel) -> Self { + Self { + funded_channels: Vec::new(), + #[cfg(any(dual_funding, splicing))] + unfunded_channel: Some(pending_channel), + } + } + + /// Return the context of the last funded (unconfirmed) channel, if any, or + /// if none, the pending one + pub fn context(&self) -> &ChannelContext { + if self.funded_channels.len() > 0 { + &self.funded_channels[self.funded_channels.len() - 1].context + } else { + #[cfg(any(dual_funding, splicing))] + if let Some(unfunded) = &self.unfunded_channel { + return unfunded.context(); + } + panic!("No channel in collection"); + } + } + + /// Return the mutable context of the last funded (unconfirmed) channel, if any, or + /// if none, the pending one + pub fn context_mut(&mut self) -> &mut ChannelContext { + if self.funded_channels.len() > 0 { + let n = self.funded_channels.len(); + &mut self.funded_channels[n - 
1].context + } else { + #[cfg(any(dual_funding, splicing))] + if let Some(ref mut unfunded) = &mut self.unfunded_channel { + return unfunded.context_mut(); + } + panic!("No channel in collection"); + } + } + + /// Return the last funded (unconfirmed) channel, or if none, the pending one + pub fn get_funded_channel(&self) -> Option<&Channel> { + // self.debug(); // TODO remove + if self.funded_channels.len() > 0 { + Some(&self.funded_channels[self.funded_channels.len() - 1]) + } else { + None + } + } + + /// Return the last funded (unconfirmed) channel + pub fn get_funded_channel_mut(&mut self) -> Option<&mut Channel> { + self.debug(); // TODO remove + if self.funded_channels.len() > 0 { + let n = self.funded_channels.len(); + Some(&mut self.funded_channels[n - 1]) + } else { + None + } + } + + /// Return all the funded channels + pub fn all_funded(&mut self) -> Vec<&mut Channel> { + debug_assert!(false, "This is to be used in RBF, not yet implemented"); + self.funded_channels.iter_mut().collect::>() + } + + /// TODO remove + pub fn debug(&self) { + } + + /// Add new funded, close any unfunded + pub fn add_funded(&mut self, funded_channel: Channel) { + self.funded_channels.push(funded_channel); + + #[cfg(any(dual_funding, splicing))] + { + self.unfunded_channel = None; + } + } + + #[cfg(any(dual_funding, splicing))] + pub fn has_pending(&self) -> bool { + self.unfunded_channel.is_some() + } + + #[cfg(any(dual_funding, splicing))] + pub fn get_pending(&self) -> Option<&V2Channel> { + self.unfunded_channel.as_ref() + } + + #[cfg(any(dual_funding, splicing))] + pub fn get_pending_mut(&mut self) -> Option<&mut V2Channel> { + self.unfunded_channel.as_mut() + } + + #[cfg(any(dual_funding, splicing))] + pub fn take_pending(&mut self) -> Option> { + self.unfunded_channel.take() + } + + #[cfg(any(dual_funding, splicing))] + pub fn set_new_pending_out(&mut self, variant_channel: OutboundV2Channel) { + debug_assert!(self.unfunded_channel.is_none()); + self.unfunded_channel = Some(V2Channel::UnfundedOutboundV2(variant_channel)); + self.debug(); // TODO remove + panic!("This is to be used in RBF, not yet implemented"); + } + + #[cfg(any(dual_funding, splicing))] + pub fn get_pending_out_mut(&mut self) -> Option<&mut OutboundV2Channel> { + match self.unfunded_channel { + None => None, + Some(ref mut ch) => { + match ch { + V2Channel::UnfundedOutboundV2(ref mut ch) => Some(ch), + _ => None, + } + }, + } + } + + #[cfg(any(dual_funding, splicing))] + pub fn take_pending_out(&mut self) -> Option> { + if self.get_pending_out_mut().is_none() { return None; } + match self.unfunded_channel.take() { + None => panic!("None"), + Some(ch) => { + match ch { + V2Channel::UnfundedOutboundV2(ch) => Some(ch), + _ => panic!("Not out"), + } + }, + } + } + + #[cfg(any(dual_funding, splicing))] + pub fn set_new_pending_in(&mut self, variant_channel: InboundV2Channel) { + debug_assert!(self.unfunded_channel.is_none()); + self.unfunded_channel = Some(V2Channel::UnfundedInboundV2(variant_channel)); + self.debug(); // TODO remove + panic!("This is to be used in RBF, not yet implemented"); + } + + #[cfg(any(dual_funding, splicing))] + pub fn get_pending_in_mut(&mut self) -> Option<&mut InboundV2Channel> { + match self.unfunded_channel { + None => None, + Some(ref mut ch) => { + match ch { + V2Channel::UnfundedInboundV2(ref mut ch) => Some(ch), + _ => None, + } + }, + } + } + + #[cfg(any(dual_funding, splicing))] + pub fn take_pending_in(&mut self) -> Option> { + if self.get_pending_in_mut().is_none() { return None; } + let v = 
self.unfunded_channel.take(); + match v { + None => panic!("None"), + Some(ch) => { + match ch { + V2Channel::UnfundedInboundV2(ch) => Some(ch), + _ => panic!("Not in"), + } + }, + } + } + + /// Take the last funded (unconfirmed) channel + pub fn take_funded_channel(&mut self) -> Option> { + // self.debug(); // TODO remove + if self.funded_channels.len() > 0 { + let n = self.funded_channels.len(); + Some(self.funded_channels.remove(n - 1)) + } else { + None + } + } + + /// Keep only the one confirmed channel, drop the other variants + // This is to be relaced by going to Confirmed phase with one channel + pub fn keep_one_confirmed(&mut self, channel_index: usize) { + self.debug(); // TODO remove + if self.funded_channels.len() > 1 { + debug_assert!(channel_index < self.funded_channels.len()); + self.funded_channels = vec![self.funded_channels.remove(channel_index)]; + #[cfg(any(dual_funding, splicing))] + { + self.unfunded_channel = None; + } + self.debug(); // TODO remove + } + } + + #[cfg(any(dual_funding, splicing))] + pub fn tx_add_input(&mut self, msg: &msgs::TxAddInput) -> Result { + if let Some(pending) = self.get_pending_mut() { + Ok(pending.tx_add_input(msg)) + } else { + // funded + Err(ChannelError::Warn(format!("Channel is already funded, not expecting tx_add_input"))) + } + } + + #[cfg(any(dual_funding, splicing))] + pub fn tx_add_output(&mut self, msg: &msgs::TxAddOutput) -> Result { + if let Some(pending) = self.get_pending_mut() { + Ok(pending.tx_add_output(msg)) + } else { + // funded + Err(ChannelError::Warn(format!("Channel is already funded, not expecting tx_add_output"))) + } + } + + #[cfg(any(dual_funding, splicing))] + pub fn tx_complete(&mut self, msg: &msgs::TxComplete) -> Result { + if let Some(pending) = self.get_pending_mut() { + Ok(pending.tx_complete(msg)) + } else { + // funded + Err(ChannelError::Warn(format!("Channel is already funded, not expecting tx_complete"))) + } + } + + #[cfg(any(dual_funding, splicing))] + pub fn funding_tx_constructed( + mut self, counterparty_node_id: &PublicKey, signing_session: InteractiveTxSigningSession, logger: &L + ) -> Result<(Channel, msgs::CommitmentSigned, Option), (Self, ChannelError)> + where + L::Target: Logger + { + if let Some(pending) = self.take_pending() { + pending.funding_tx_constructed(counterparty_node_id, signing_session, logger) + .map_err(|(_ch, err)| (self, err)) + } else { + Err((self, ChannelError::Close(format!("Channel is already funded, not expecting tx_constructed")))) + } + } + + #[cfg(any(dual_funding, splicing))] + pub fn funding_transaction_signed(&mut self, channel_id: &ChannelId, witnesses: Vec) -> Result, ChannelError> { + if let Some(funded) = self.get_funded_channel_mut() { + funded.funding_transaction_signed(channel_id, witnesses) + } else { + Err(ChannelError::Close(format!("Channel with id {} is has no funded channel, not expecting funding signatures", channel_id))) + } + } + + pub fn commitment_signed(&mut self, msg: &msgs::CommitmentSigned, logger: &L) -> Result<(&mut Channel, Option), ChannelError> + where L::Target: Logger + { + if let Some(post_chan) = self.get_funded_channel_mut() { + let monitor_opt = post_chan.commitment_signed(msg, logger)?; + Ok((post_chan, monitor_opt)) + } else { + Err(ChannelError::Close("Got a commitment_signed message, but there is no funded channel!".into())) + } + } + + #[cfg(any(dual_funding, splicing))] + pub fn commitment_signed_initial_v2(&mut self, msg: &msgs::CommitmentSigned, best_block: BestBlock, signer_provider: &SP, logger: &L) + -> Result, 
ChannelMonitor<::EcdsaSigner>)>, ChannelError> + where L::Target: Logger + { + if let Some(post_chan) = self.get_funded_channel_mut() { + if matches!(post_chan.context.channel_state, ChannelState::FundingNegotiated) { + // First commitment + // TODO(splicing): For splice pending case expand this condition, depending if commitment was already received or not + let interactive_tx_signing_in_progress = post_chan.interactive_tx_signing_session.is_some(); + if interactive_tx_signing_in_progress { + let monitor = post_chan.commitment_signed_initial_v2(&msg, best_block, signer_provider, logger)?; + Ok(Some((post_chan, monitor))) + } else { + Err(ChannelError::Close("Got a commitment_signed message, but there is no transaction negotiation context!".into())) + } + } else { + // Not first commitment + Ok(None) + } + } else { + Err(ChannelError::Close("Got a commitment_signed message, but there is no funded channel!".into())) + } + } + + #[cfg(splicing)] + pub fn splice_init( + &mut self, our_funding_contribution_satoshis: i64, + signer_provider: &SP, entropy_source: &ES, holder_node_id: PublicKey, logger: &L + ) + -> Result + where ES::Target: EntropySource, L::Target: Logger + { + if let Some(post_chan) = self.get_pending_mut() { + if !post_chan.is_outbound() { + // Apply start of splice change in the state + post_chan.context_mut().splice_start(false, logger); + let splice_ack_msg = post_chan.get_splice_ack(our_funding_contribution_satoshis)?; + let _msg = post_chan.begin_interactive_funding_tx_construction(signer_provider, entropy_source, holder_node_id) + .map_err(|err| ChannelError::Warn(format!("Failed to start interactive transaction construction, {:?}", err)))?; + Ok(splice_ack_msg) + } else { + Err(ChannelError::Warn("Internal consistency error: splice_init but no inbound channel".into())) + } + } else { + Err(ChannelError::Warn("Internal consistency error: splice_init and no pending channel".into())) + } + } + + #[cfg(splicing)] + pub fn splice_ack( + &mut self, + signer_provider: &SP, entropy_source: &ES, holder_node_id: PublicKey, logger: &L + ) + -> Result, ChannelError> + where ES::Target: EntropySource, L::Target: Logger + { + if let Some(post_chan) = self.get_pending_mut() { + if post_chan.is_outbound() { + // Apply start of splice change in the state + post_chan.context_mut().splice_start(true, logger); + + /* Note: SpliceAckedInputsContributionReady event is no longer used + // Prepare SpliceAckedInputsContributionReady event + let mut pending_events = self.pending_events.lock().unwrap(); + pending_events.push_back((events::Event::SpliceAckedInputsContributionReady { + channel_id: post_chan_id, + counterparty_node_id: *counterparty_node_id, + pre_channel_value_satoshis: pre_channel_value, + post_channel_value_satoshis: post_channel_value, + holder_funding_satoshis: if post_channel_value < pre_channel_value { 0 } else { post_channel_value.saturating_sub(pre_channel_value) }, + counterparty_funding_satoshis: 0, + } , None)); + */ + let tx_msg_opt = post_chan.begin_interactive_funding_tx_construction(signer_provider, entropy_source, holder_node_id) + .map_err(|err| ChannelError::Warn(format!("V2 channel rejected due to sender error, {:?}", err)))?; + Ok(tx_msg_opt) + } else { + Err(ChannelError::Warn("Internal consistency error: splice_ack but no outbound channel".into())) + } + } else { + Err(ChannelError::Warn("Internal consistency error: splice_ack but no pending channel".into())) + } + } +} + /// The `ChannelPhase` enum describes the current phase in life of a lightning channel 
with each of /// its variants containing an appropriate channel struct. pub(super) enum ChannelPhase where SP::Target: SignerProvider { @@ -1193,7 +1608,15 @@ pub(super) enum ChannelPhase where SP::Target: SignerProvider { UnfundedOutboundV2(OutboundV2Channel), #[cfg(any(dual_funding, splicing))] UnfundedInboundV2(InboundV2Channel), + /// Funding transaction negotiated (pending or locked) Funded(Channel), + /// Renegotiating existing channel, for splicing + /// First channel is the already funded (and confirmed, pre-splice) channel. + /// The second collection can hold: + /// - the negotiated (funded) but pending post-splice channel + /// - the channel being negotiated (post-splice, inbound or outbound) + #[cfg(splicing)] + RefundingV2((Channel, ChannelVariants)), } impl<'a, SP: Deref> ChannelPhase where @@ -1209,6 +1632,12 @@ impl<'a, SP: Deref> ChannelPhase where ChannelPhase::UnfundedOutboundV2(chan) => &chan.context, #[cfg(any(dual_funding, splicing))] ChannelPhase::UnfundedInboundV2(chan) => &chan.context, + // Both post and pre exist + #[cfg(splicing)] + ChannelPhase::RefundingV2((pre_chan, post_chans)) => { + // If post is funded, use that, otherwise use pre + &post_chans.get_funded_channel().unwrap_or(pre_chan).context + }, } } @@ -1221,6 +1650,12 @@ impl<'a, SP: Deref> ChannelPhase where ChannelPhase::UnfundedOutboundV2(ref mut chan) => &mut chan.context, #[cfg(any(dual_funding, splicing))] ChannelPhase::UnfundedInboundV2(ref mut chan) => &mut chan.context, + // Both post and pre exist + #[cfg(splicing)] + ChannelPhase::RefundingV2((ref mut pre_chan, ref mut post_chans)) => { + // If post is funded, use that, otherwise use pre + &mut post_chans.get_funded_channel_mut().unwrap_or(pre_chan).context + }, } } } @@ -1245,6 +1680,8 @@ impl UnfundedChannelContext { self.unfunded_channel_age_ticks += 1; self.unfunded_channel_age_ticks >= UNFUNDED_CHANNEL_AGE_LIMIT_TICKS } + + pub fn default() -> Self { Self { unfunded_channel_age_ticks: 0 } } } /// Contains everything about the channel including state, and various flags. @@ -1282,6 +1719,13 @@ pub(super) struct ChannelContext where SP::Target: SignerProvider { secp_ctx: Secp256k1, channel_value_satoshis: u64, + /// Info about an in-progress, pending splice (if any), on the pre-splice channel + #[cfg(splicing)] + pub(crate) pending_splice_pre: Option, + /// Info about an in-progress, pending splice (if any), on the post-splice channel + #[cfg(splicing)] + pub(crate) pending_splice_post: Option, + latest_monitor_update_id: u64, holder_signer: ChannelSignerType, @@ -1437,7 +1881,13 @@ pub(super) struct ChannelContext where SP::Target: SignerProvider { counterparty_forwarding_info: Option, pub(crate) channel_transaction_parameters: ChannelTransactionParameters, + /// The funding transaction is stored here, but only during the channel establishment phase. + /// Being set does not necessarily mean that is's already locked. funding_transaction: Option, + /// The funding transaction, similar to `funding_transaction` field, but stored here for the full lifecycle of the channel. + /// Being set does not necessarily mean that is's already locked. + #[cfg(splicing)] + funding_transaction_saved: Option, is_batch_funding: Option<()>, counterparty_cur_commitment_point: Option, @@ -1541,8 +1991,235 @@ pub(super) struct ChannelContext where SP::Target: SignerProvider { /// If we can't release a [`ChannelMonitorUpdate`] until some external action completes, we /// store it here and only release it to the `ChannelManager` once it asks for it. 
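// An illustrative reduction of the RefundingV2 context selection above: while a splice is being
// renegotiated, both the confirmed pre-splice channel and any (possibly not yet funded)
// post-splice candidates are kept, and a context lookup prefers the newest funded candidate,
// falling back to the pre-splice channel. Types here are simplified stand-ins, not LDK types.
struct ToyContext { channel_value_satoshis: u64 }
struct ToyChannel { context: ToyContext }

struct ToyRefunding {
    pre_splice: ToyChannel,
    post_splice_funded: Vec<ToyChannel>,
}

impl ToyRefunding {
    fn context(&self) -> &ToyContext {
        self.post_splice_funded.last()
            .map(|chan| &chan.context)
            .unwrap_or(&self.pre_splice.context)
    }
}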
blocked_monitor_updates: Vec, + + /// The current interactive transaction construction session under negotiation. + #[cfg(any(dual_funding, splicing))] + interactive_tx_constructor: Option, + // If we've sent `commitment_signed` for an interactive transaction construction, + // but have not received `tx_signatures` we MUST set `next_funding_txid` to the + // txid of that interactive transaction, else we MUST NOT set it. + next_funding_txid: Option, } +#[cfg(any(dual_funding, splicing))] +pub(super) trait HasChannelContext where SP::Target: SignerProvider { + fn context(&self) -> &ChannelContext; + fn context_mut(&mut self) -> &mut ChannelContext; +} + +#[cfg(any(dual_funding, splicing))] +pub(super) trait HasDualFundingChannelContext { + fn dual_funding_context(&self) -> &DualFundingChannelContext; + fn dual_funding_context_mut(&mut self) -> &mut DualFundingChannelContext; +} + +#[cfg(any(dual_funding, splicing))] +pub(super) trait InteractivelyFunded: HasChannelContext + HasDualFundingChannelContext where SP::Target: SignerProvider { + fn tx_add_input(&mut self, msg: &msgs::TxAddInput) -> InteractiveTxMessageSendResult { + InteractiveTxMessageSendResult(match self.context_mut().interactive_tx_constructor { + Some(ref mut tx_constructor) => tx_constructor.handle_tx_add_input(msg).map_err( + |reason| reason.into_tx_abort_msg(self.context_mut().channel_id())), + None => Err(msgs::TxAbort { + channel_id: self.context_mut().channel_id(), + data: "We do not have an interactive transaction negotiation in progress".to_string().into_bytes() + }), + }) + } + + fn tx_add_output(&mut self, msg: &msgs::TxAddOutput)-> InteractiveTxMessageSendResult { + InteractiveTxMessageSendResult(match self.context_mut().interactive_tx_constructor { + Some(ref mut tx_constructor) => tx_constructor.handle_tx_add_output(msg).map_err( + |reason| reason.into_tx_abort_msg(self.context_mut().channel_id())), + None => Err(msgs::TxAbort { + channel_id: self.context_mut().channel_id(), + data: "We do not have an interactive transaction negotiation in progress".to_string().into_bytes() + }), + }) + } + + fn tx_remove_input(&mut self, msg: &msgs::TxRemoveInput)-> InteractiveTxMessageSendResult { + InteractiveTxMessageSendResult(match self.context_mut().interactive_tx_constructor { + Some(ref mut tx_constructor) => tx_constructor.handle_tx_remove_input(msg).map_err( + |reason| reason.into_tx_abort_msg(self.context_mut().channel_id())), + None => Err(msgs::TxAbort { + channel_id: self.context_mut().channel_id(), + data: "We do not have an interactive transaction negotiation in progress".to_string().into_bytes() + }), + }) + } + + fn tx_remove_output(&mut self, msg: &msgs::TxRemoveOutput)-> InteractiveTxMessageSendResult { + InteractiveTxMessageSendResult(match self.context_mut().interactive_tx_constructor { + Some(ref mut tx_constructor) => tx_constructor.handle_tx_remove_output(msg).map_err( + |reason| reason.into_tx_abort_msg(self.context_mut().channel_id())), + None => Err(msgs::TxAbort { + channel_id: self.context_mut().channel_id(), + data: "We do not have an interactive transaction negotiation in progress".to_string().into_bytes() + }), + }) + } + + fn tx_complete(&mut self, msg: &msgs::TxComplete) -> HandleTxCompleteResult { + HandleTxCompleteResult(match self.context_mut().interactive_tx_constructor { + Some(ref mut tx_constructor) => tx_constructor.handle_tx_complete(msg).map_err( + |reason| reason.into_tx_abort_msg(self.context_mut().channel_id())), + None => Err(msgs::TxAbort { + channel_id: 
self.context_mut().channel_id(), + data: "We do not have an interactive transaction negotiation in progress".to_string().into_bytes() + }), + }) + } + + fn internal_funding_tx_constructed(&mut self, counterparty_node_id: &PublicKey, + signing_session: &mut InteractiveTxSigningSession, logger: &L) + -> Result<(msgs::CommitmentSigned, Option), ChannelError> + where + L::Target: Logger + { + let our_funding_satoshis = self.dual_funding_context().our_funding_satoshis; + let context = self.context_mut(); + let is_splice_pending = context.is_splice_pending(); + + // Find the funding output + // Splicing note: the channel value at this time is already the post-splice value, so no special handling is needed + let expected_spk = context.get_funding_redeemscript().to_v0_p2wsh(); + let funding_outputs = signing_session.unsigned_tx.find_output_by_script(&expected_spk, Some(context.get_value_satoshis())); + let funding_outpoint_index = if funding_outputs.len() == 1 { + funding_outputs[0].0 as u16 + } else if funding_outputs.len() == 0 { + return Err(ChannelError::Close("No output matched the script_pubkey and value in the FundingGenerationReady event".to_owned())); + } else { // > 1 + return Err(ChannelError::Close("Multiple outputs matched the expected script and value".to_owned())); + }; + let outpoint = OutPoint { txid: signing_session.unsigned_tx.txid(), index: funding_outpoint_index }; + context.channel_transaction_parameters.funding_outpoint = Some(outpoint); + context.holder_signer.as_mut().provide_channel_parameters(&context.channel_transaction_parameters); + + let commitment_signed = get_initial_commitment_signed(context, signing_session.unsigned_tx.clone(), + is_splice_pending, logger); + let commitment_signed = match commitment_signed { + Ok(commitment_signed) => commitment_signed, + Err(err) => return Err(ChannelError::Close(err.to_string())), + }; + + let mut partly_signed_transaction = signing_session.unsigned_tx.clone().into_unsigned_tx(); + #[cfg(splicing)] + if is_splice_pending { + // #SPLICING + // #SPLICE-SIG + // Add signature for prev funding input + // Note: here the transaction is used for signing, input&output order matters + let (partly_signed_tx, holder_signature) = context.prev_funding_tx_sign(&partly_signed_transaction, None, logger)?; + signing_session.shared_signature = Some(holder_signature); + partly_signed_transaction = partly_signed_tx; + } + + let mut funding_ready_for_sig_event = None; + if our_funding_satoshis == 0 { + signing_session.provide_holder_witnesses(context.channel_id, Vec::new(), signing_session.shared_signature.clone()); + } else { + funding_ready_for_sig_event = Some(Event::FundingTransactionReadyForSigning { + channel_id: context.channel_id, + counterparty_node_id: *counterparty_node_id, + user_channel_id: context.user_id, + // Note: here the transaction is used for signing, input&output order matters + unsigned_transaction: partly_signed_transaction, + }); + } + + // Replace tx constructor session with signing session + context.interactive_tx_constructor = None; + context.channel_state = ChannelState::FundingNegotiated; + + Ok((commitment_signed, funding_ready_for_sig_event)) + } +} + +#[cfg(any(dual_funding, splicing))] +impl HasChannelContext for OutboundV2Channel where SP::Target: SignerProvider { + fn context(&self) -> &ChannelContext { + &self.context + } + fn context_mut(&mut self) -> &mut ChannelContext { + &mut self.context + } +} + +#[cfg(any(dual_funding, splicing))] +impl HasChannelContext for InboundV2Channel where SP::Target: 
SignerProvider { + fn context(&self) -> &ChannelContext { + &self.context + } + fn context_mut(&mut self) -> &mut ChannelContext { + &mut self.context + } +} + +#[cfg(any(dual_funding, splicing))] +impl HasChannelContext for V2Channel where SP::Target: SignerProvider { + fn context(&self) -> &ChannelContext { + match self { + Self::UnfundedOutboundV2(ch) => &ch.context, + Self::UnfundedInboundV2(ch) => &ch.context, + } + } + + fn context_mut(&mut self) -> &mut ChannelContext { + match self { + Self::UnfundedOutboundV2(ch) => &mut ch.context, + Self::UnfundedInboundV2(ch) => &mut ch.context, + } + } +} + +#[cfg(any(dual_funding, splicing))] +impl HasDualFundingChannelContext for OutboundV2Channel where SP::Target: SignerProvider { + fn dual_funding_context(&self) -> &DualFundingChannelContext { + &self.dual_funding_context + } + + fn dual_funding_context_mut(&mut self) -> &mut DualFundingChannelContext { + &mut self.dual_funding_context + } +} + +#[cfg(any(dual_funding, splicing))] +impl HasDualFundingChannelContext for InboundV2Channel where SP::Target: SignerProvider { + fn dual_funding_context(&self) -> &DualFundingChannelContext { + &self.dual_funding_context + } + + fn dual_funding_context_mut(&mut self) -> &mut DualFundingChannelContext { + &mut self.dual_funding_context + } +} + +#[cfg(any(dual_funding, splicing))] +impl HasDualFundingChannelContext for V2Channel where SP::Target: SignerProvider { + fn dual_funding_context(&self) -> &DualFundingChannelContext { + match &self { + Self::UnfundedOutboundV2(ch) => &ch.dual_funding_context, + Self::UnfundedInboundV2(ch) => &ch.dual_funding_context, + } + } + + fn dual_funding_context_mut(&mut self) -> &mut DualFundingChannelContext { + match self { + Self::UnfundedOutboundV2(ch) => &mut ch.dual_funding_context, + Self::UnfundedInboundV2(ch) => &mut ch.dual_funding_context, + } + } +} + +#[cfg(any(dual_funding, splicing))] +impl InteractivelyFunded for OutboundV2Channel where SP::Target: SignerProvider {} + +#[cfg(any(dual_funding, splicing))] +impl InteractivelyFunded for InboundV2Channel where SP::Target: SignerProvider {} + +#[cfg(any(dual_funding, splicing))] +impl InteractivelyFunded for V2Channel where SP::Target: SignerProvider {} + impl ChannelContext where SP::Target: SignerProvider { fn new_for_inbound_channel<'a, ES: Deref, F: Deref, L: Deref>( fee_estimator: &'a LowerBoundedFeeEstimator, @@ -1834,6 +2511,8 @@ impl ChannelContext where SP::Target: SignerProvider { channel_type_features: channel_type.clone() }, funding_transaction: None, + #[cfg(splicing)] + funding_transaction_saved: None, is_batch_funding: None, counterparty_cur_commitment_point: Some(open_channel_fields.first_per_commitment_point), @@ -1872,6 +2551,15 @@ impl ChannelContext where SP::Target: SignerProvider { local_initiated_shutdown: None, blocked_monitor_updates: Vec::new(), + + #[cfg(any(dual_funding, splicing))] + interactive_tx_constructor: None, + next_funding_txid: None, + + #[cfg(splicing)] + pending_splice_pre: None, + #[cfg(splicing)] + pending_splice_post: None, }; Ok(channel_context) @@ -2055,6 +2743,8 @@ impl ChannelContext where SP::Target: SignerProvider { channel_type_features: channel_type.clone() }, funding_transaction: None, + #[cfg(splicing)] + funding_transaction_saved: None, is_batch_funding: None, counterparty_cur_commitment_point: None, @@ -2092,9 +2782,232 @@ impl ChannelContext where SP::Target: SignerProvider { blocked_monitor_updates: Vec::new(), local_initiated_shutdown: None, + + #[cfg(any(dual_funding, splicing))] + 
interactive_tx_constructor: None, + next_funding_txid: None, + #[cfg(splicing)] + pending_splice_pre: None, + #[cfg(splicing)] + pending_splice_post: None, }) } + /// Clone, each field, with a few exceptions, notably the channel signer, + /// interactive_tx_constructor is nulled, and + /// a few non-cloneable fields (such as Secp256k1 context) + fn clone(&self, holder_signer: ::EcdsaSigner) -> Self { + Self { + config: self.config, + prev_config: self.prev_config, + inbound_handshake_limits_override: self.inbound_handshake_limits_override, + user_id: self.user_id, + channel_id: self.channel_id, + temporary_channel_id: self.temporary_channel_id, + channel_state: self.channel_state, + announcement_sigs_state: self.announcement_sigs_state.clone(), + // Create new Secp256k context + secp_ctx: Secp256k1::new(), + channel_value_satoshis: self.channel_value_satoshis, + #[cfg(splicing)] + pending_splice_pre: self.pending_splice_pre.clone(), + #[cfg(splicing)] + pending_splice_post: self.pending_splice_post.clone(), + latest_monitor_update_id: self.latest_monitor_update_id, + // Use provided channel signer + holder_signer: ChannelSignerType::Ecdsa(holder_signer), + shutdown_scriptpubkey: self.shutdown_scriptpubkey.clone(), + destination_script: self.destination_script.clone(), + cur_holder_commitment_transaction_number: self.cur_holder_commitment_transaction_number, + cur_counterparty_commitment_transaction_number: self.cur_counterparty_commitment_transaction_number, + value_to_self_msat: self.value_to_self_msat, + pending_inbound_htlcs: self.pending_inbound_htlcs.clone(), + pending_outbound_htlcs: self.pending_outbound_htlcs.clone(), + holding_cell_htlc_updates: self.holding_cell_htlc_updates.clone(), + resend_order: self.resend_order.clone(), + monitor_pending_channel_ready: self.monitor_pending_channel_ready, + monitor_pending_revoke_and_ack: self.monitor_pending_revoke_and_ack, + monitor_pending_commitment_signed: self.monitor_pending_commitment_signed, + monitor_pending_forwards: self.monitor_pending_forwards.clone(), + monitor_pending_failures: self.monitor_pending_failures.clone(), + monitor_pending_finalized_fulfills: self.monitor_pending_finalized_fulfills.clone(), + monitor_pending_update_adds: self.monitor_pending_update_adds.clone(), + signer_pending_commitment_update: self.signer_pending_commitment_update, + signer_pending_funding: self.signer_pending_funding, + pending_update_fee: self.pending_update_fee, + holding_cell_update_fee: self.holding_cell_update_fee, + next_holder_htlc_id: self.next_holder_htlc_id, + next_counterparty_htlc_id: self.next_counterparty_htlc_id, + feerate_per_kw: self.feerate_per_kw, + update_time_counter: self.update_time_counter, + // Create new mutex with copied values + #[cfg(debug_assertions)] + holder_max_commitment_tx_output: Mutex::new(*self.holder_max_commitment_tx_output.lock().unwrap()), + #[cfg(debug_assertions)] + counterparty_max_commitment_tx_output: Mutex::new(*self.counterparty_max_commitment_tx_output.lock().unwrap()), + last_sent_closing_fee: self.last_sent_closing_fee.clone(), + target_closing_feerate_sats_per_kw: self.target_closing_feerate_sats_per_kw, + pending_counterparty_closing_signed: self.pending_counterparty_closing_signed.clone(), + closing_fee_limits: self.closing_fee_limits, + expecting_peer_commitment_signed: self.expecting_peer_commitment_signed, + funding_tx_confirmed_in: self.funding_tx_confirmed_in, + funding_tx_confirmation_height: self.funding_tx_confirmation_height, + short_channel_id: self.short_channel_id, + 
channel_creation_height: self.channel_creation_height, + counterparty_dust_limit_satoshis: self.counterparty_dust_limit_satoshis, + holder_dust_limit_satoshis: self.holder_dust_limit_satoshis, + counterparty_max_htlc_value_in_flight_msat: self.counterparty_max_htlc_value_in_flight_msat, + holder_max_htlc_value_in_flight_msat: self.holder_max_htlc_value_in_flight_msat, + counterparty_selected_channel_reserve_satoshis: self.counterparty_selected_channel_reserve_satoshis, + holder_selected_channel_reserve_satoshis: self.holder_selected_channel_reserve_satoshis, + counterparty_htlc_minimum_msat: self.counterparty_htlc_minimum_msat, + holder_htlc_minimum_msat: self.holder_htlc_minimum_msat, + counterparty_max_accepted_htlcs: self.counterparty_max_accepted_htlcs, + holder_max_accepted_htlcs: self.holder_max_accepted_htlcs, + minimum_depth: self.minimum_depth, + counterparty_forwarding_info: self.counterparty_forwarding_info.clone(), + channel_transaction_parameters: self.channel_transaction_parameters.clone(), + funding_transaction: self.funding_transaction.clone(), + #[cfg(splicing)] + funding_transaction_saved: self.funding_transaction_saved.clone(), + is_batch_funding: self.is_batch_funding, + counterparty_cur_commitment_point: self.counterparty_cur_commitment_point, + counterparty_prev_commitment_point: self.counterparty_prev_commitment_point, + counterparty_node_id: self.counterparty_node_id, + counterparty_shutdown_scriptpubkey: self.counterparty_shutdown_scriptpubkey.clone(), + commitment_secrets: self.commitment_secrets.clone(), + channel_update_status: self.channel_update_status, + closing_signed_in_flight: self.closing_signed_in_flight, + announcement_sigs: self.announcement_sigs, + // Create new mutex with copied values + #[cfg(any(test, fuzzing))] + next_local_commitment_tx_fee_info_cached: Mutex::new(self.next_local_commitment_tx_fee_info_cached.lock().unwrap().clone()), + #[cfg(any(test, fuzzing))] + next_remote_commitment_tx_fee_info_cached: Mutex::new(self.next_remote_commitment_tx_fee_info_cached.lock().unwrap().clone()), + workaround_lnd_bug_4006: self.workaround_lnd_bug_4006.clone(), + sent_message_awaiting_response: self.sent_message_awaiting_response, + #[cfg(any(test, fuzzing))] + historical_inbound_htlc_fulfills: self.historical_inbound_htlc_fulfills.clone(), + channel_type: self.channel_type.clone(), + latest_inbound_scid_alias: self.latest_inbound_scid_alias, + outbound_scid_alias: self.outbound_scid_alias, + channel_pending_event_emitted: self.channel_pending_event_emitted, + channel_ready_event_emitted: self.channel_ready_event_emitted, + local_initiated_shutdown: self.local_initiated_shutdown.clone(), + channel_keys_id: self.channel_keys_id, + blocked_monitor_updates: self.blocked_monitor_updates.clone(), + #[cfg(any(dual_funding, splicing))] + interactive_tx_constructor: None, + next_funding_txid: self.next_funding_txid, + } + } + + /// Create channel context for spliced channel, by duplicating and updating the context. + /// TODO change doc + /// relative_satoshis: The change in channel value (sats), + /// positive for increase (splice-in), negative for decrease (splice out). + /// delta_belongs_to_local: + /// The amount from the channel value change that belongs to the local (sats). + /// Its sign has to be the same as the sign of relative_satoshis, and its absolute value + /// less or equal (e.g. for +100 in the range of 0..100, for -100 in the range of -100..0). 
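// A worked sketch of the balance arithmetic described in the doc comment above, assuming the
// post-splice value is simply the pre-splice value adjusted by both sides' (possibly negative)
// contributions. Only our own contribution moves our balance, and a splice-out larger than our
// current balance is rejected. This is an illustration, not the function below.
fn new_splice_balances(
    pre_channel_value_sats: u64, our_contribution_sats: i64, their_contribution_sats: i64,
    old_value_to_self_msat: u64,
) -> Result<(u64, u64), ()> {
    // Post-splice capacity (assumed formula): pre-splice value plus both contributions.
    let post_channel_value_sats = (pre_channel_value_sats as i64)
        .checked_add(our_contribution_sats)
        .and_then(|v| v.checked_add(their_contribution_sats))
        .filter(|v| *v >= 0)
        .ok_or(())? as u64;
    // Our balance is tracked in millisatoshis and only shifts by our own contribution.
    let delta_msat = our_contribution_sats * 1000;
    if delta_msat < 0 && delta_msat.unsigned_abs() > old_value_to_self_msat {
        return Err(()); // splice-out would make our balance negative
    }
    let value_to_self_msat = (old_value_to_self_msat as i64).saturating_add(delta_msat) as u64;
    Ok((post_channel_value_sats, value_to_self_msat))
}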
+ #[cfg(splicing)] + fn new_for_splice( + pre_splice_context: &Self, + is_outgoing: bool, + counterparty_funding_pubkey: &PublicKey, + our_funding_contribution: i64, + their_funding_contribution: i64, + holder_signer: ::EcdsaSigner, + logger: &L, + ) -> Result, ChannelError> where L::Target: Logger + { + if pre_splice_context.is_splice_pending() { + return Err(ChannelError::Warn(format!("Internal error: Channel is already splicing, channel_id {}", pre_splice_context.channel_id))); + } + + let pre_channel_value = pre_splice_context.channel_value_satoshis; + + // Save the current funding transaction + let pre_funding_transaction = pre_splice_context.funding_transaction_saved.clone(); + let pre_funding_txo = pre_splice_context.get_funding_txo().clone(); + + // Save relevant info from pre-splice state + let pending_splice_post = PendingSpliceInfoPost::new( + pre_channel_value, + our_funding_contribution, + their_funding_contribution, + pre_funding_transaction, + pre_funding_txo, + ); + let post_channel_value = pending_splice_post.post_channel_value(); + + // Compute our new balance + let old_to_self = pre_splice_context.value_to_self_msat; + let delta_in_value_to_self = our_funding_contribution * 1000; + if delta_in_value_to_self < 0 && delta_in_value_to_self.abs() as u64 > old_to_self { + // Change would make our balance negative + return Err(ChannelError::Close(format!("Cannot decrease channel value to requested amount, too low, {} {} {} {} {}", + pre_channel_value, post_channel_value, our_funding_contribution, their_funding_contribution, old_to_self))); + } + let value_to_self_msat = (old_to_self as i64).saturating_add(delta_in_value_to_self) as u64; + + let mut context = pre_splice_context.clone(holder_signer); + + // New channel value + context.channel_value_satoshis = post_channel_value; + // Update value to self + context.value_to_self_msat = value_to_self_msat; + // Reset funding + context.funding_transaction = None; + context.funding_transaction_saved = None; + context.funding_tx_confirmed_in = None; + context.funding_tx_confirmation_height = 0; + context.channel_transaction_parameters.funding_outpoint = None; + // Reset state + context.channel_state = ChannelState::NegotiatingFunding( + if is_outgoing { NegotiatingFundingFlags::OUR_INIT_SENT } else { NegotiatingFundingFlags::OUR_INIT_SENT | NegotiatingFundingFlags::THEIR_INIT_SENT } + ); + context.interactive_tx_constructor = None; + context.next_funding_txid = None; + #[cfg(splicing)] + { + context.pending_splice_pre = None; + context.pending_splice_post = Some(pending_splice_post); + } + // Reset monitor update + context.latest_monitor_update_id = 0; + // Note on commitment transaction numbers and commitment points: + // we could step 'back' here (i.e. increase number by one, set cur to prev), but that does not work, + // because latest commitment point would be lost. + // Instead, we take the previous values in relevant cases when splicing is pending. + // We'll add our counterparty's `funding_satoshis` to these max commitment output assertions + // Clear these state flags, for sending `ChannelPending` and `ChannelReady` again + context.channel_pending_event_emitted = false; + context.channel_ready_event_emitted = false; + // when we receive `accept_channel2`. 
+ #[cfg(debug_assertions)] + { + context.holder_max_commitment_tx_output = Mutex::new((value_to_self_msat, post_channel_value.saturating_sub(value_to_self_msat))); + context.counterparty_max_commitment_tx_output = Mutex::new((value_to_self_msat, post_channel_value.saturating_sub(value_to_self_msat))); + } + // Reset + context.blocked_monitor_updates = Vec::new(); + // // Update funding pubkeys -- Not needed as funding pubkeys do not change + // context.channel_transaction_parameters.holder_pubkeys.funding_pubkey = holder_signer_funding_pubkey; + // if context.channel_transaction_parameters.counterparty_parameters.is_some() { + // context.channel_transaction_parameters.counterparty_parameters.as_mut().unwrap().pubkeys.funding_pubkey = counterparty_funding_pubkey.clone(); + // } + + log_debug!(logger, "Splicing channel context: value {} old {}, dir {}, value to self {}, funding keys local {} cp {}", + context.channel_value_satoshis, pre_channel_value, + if is_outgoing { "outgoing" } else { "incoming" }, + context.value_to_self_msat, + context.channel_transaction_parameters.holder_pubkeys.funding_pubkey, counterparty_funding_pubkey + ); + + Ok(context) + } + /// Allowed in any state (including after shutdown) pub fn get_update_time_counter(&self) -> u32 { self.update_time_counter @@ -2246,6 +3159,140 @@ impl ChannelContext where SP::Target: SignerProvider { } } + fn do_accept_channel_checks(&mut self, default_limits: &ChannelHandshakeLimits, + their_features: &InitFeatures, common_fields: &msgs::CommonAcceptChannelFields, channel_reserve_satoshis: u64, + ) -> Result<(), ChannelError> { + let peer_limits = if let Some(ref limits) = self.inbound_handshake_limits_override { limits } else { default_limits }; + + // Check sanity of message fields: + if !self.is_outbound() { + return Err(ChannelError::Close("Got an accept_channel message from an inbound peer".to_owned())); + } + if !matches!(self.channel_state, ChannelState::NegotiatingFunding(flags) if flags == NegotiatingFundingFlags::OUR_INIT_SENT) { + return Err(ChannelError::Close("Got an accept_channel message at a strange time".to_owned())); + } + if common_fields.dust_limit_satoshis > 21000000 * 100000000 { + return Err(ChannelError::Close(format!("Peer never wants payout outputs? dust_limit_satoshis was {}", common_fields.dust_limit_satoshis))); + } + if channel_reserve_satoshis > self.channel_value_satoshis { + return Err(ChannelError::Close(format!("Bogus channel_reserve_satoshis ({}). Must not be greater than ({})", channel_reserve_satoshis, self.channel_value_satoshis))); + } + if common_fields.dust_limit_satoshis > self.holder_selected_channel_reserve_satoshis { + return Err(ChannelError::Close(format!("Dust limit ({}) is bigger than our channel reserve ({})", common_fields.dust_limit_satoshis, self.holder_selected_channel_reserve_satoshis))); + } + if channel_reserve_satoshis > self.channel_value_satoshis - self.holder_selected_channel_reserve_satoshis { + return Err(ChannelError::Close(format!("Bogus channel_reserve_satoshis ({}). 
Must not be greater than channel value minus our reserve ({})", + channel_reserve_satoshis, self.channel_value_satoshis - self.holder_selected_channel_reserve_satoshis))); + } + let full_channel_value_msat = (self.channel_value_satoshis - channel_reserve_satoshis) * 1000; + if common_fields.htlc_minimum_msat >= full_channel_value_msat { + return Err(ChannelError::Close(format!("Minimum htlc value ({}) is full channel value ({})", common_fields.htlc_minimum_msat, full_channel_value_msat))); + } + let max_delay_acceptable = u16::min(peer_limits.their_to_self_delay, MAX_LOCAL_BREAKDOWN_TIMEOUT); + if common_fields.to_self_delay > max_delay_acceptable { + return Err(ChannelError::Close(format!("They wanted our payments to be delayed by a needlessly long period. Upper limit: {}. Actual: {}", max_delay_acceptable, common_fields.to_self_delay))); + } + if common_fields.max_accepted_htlcs < 1 { + return Err(ChannelError::Close("0 max_accepted_htlcs makes for a useless channel".to_owned())); + } + if common_fields.max_accepted_htlcs > MAX_HTLCS { + return Err(ChannelError::Close(format!("max_accepted_htlcs was {}. It must not be larger than {}", common_fields.max_accepted_htlcs, MAX_HTLCS))); + } + + // Now check against optional parameters as set by config... + if common_fields.htlc_minimum_msat > peer_limits.max_htlc_minimum_msat { + return Err(ChannelError::Close(format!("htlc_minimum_msat ({}) is higher than the user specified limit ({})", common_fields.htlc_minimum_msat, peer_limits.max_htlc_minimum_msat))); + } + if common_fields.max_htlc_value_in_flight_msat < peer_limits.min_max_htlc_value_in_flight_msat { + return Err(ChannelError::Close(format!("max_htlc_value_in_flight_msat ({}) is less than the user specified limit ({})", common_fields.max_htlc_value_in_flight_msat, peer_limits.min_max_htlc_value_in_flight_msat))); + } + if channel_reserve_satoshis > peer_limits.max_channel_reserve_satoshis { + return Err(ChannelError::Close(format!("channel_reserve_satoshis ({}) is higher than the user specified limit ({})", channel_reserve_satoshis, peer_limits.max_channel_reserve_satoshis))); + } + if common_fields.max_accepted_htlcs < peer_limits.min_max_accepted_htlcs { + return Err(ChannelError::Close(format!("max_accepted_htlcs ({}) is less than the user specified limit ({})", common_fields.max_accepted_htlcs, peer_limits.min_max_accepted_htlcs))); + } + if common_fields.dust_limit_satoshis < MIN_CHAN_DUST_LIMIT_SATOSHIS { + return Err(ChannelError::Close(format!("dust_limit_satoshis ({}) is less than the implementation limit ({})", common_fields.dust_limit_satoshis, MIN_CHAN_DUST_LIMIT_SATOSHIS))); + } + if common_fields.dust_limit_satoshis > MAX_CHAN_DUST_LIMIT_SATOSHIS { + return Err(ChannelError::Close(format!("dust_limit_satoshis ({}) is greater than the implementation limit ({})", common_fields.dust_limit_satoshis, MAX_CHAN_DUST_LIMIT_SATOSHIS))); + } + if common_fields.minimum_depth > peer_limits.max_minimum_depth { + return Err(ChannelError::Close(format!("We consider the minimum depth to be unreasonably large. Expected minimum: ({}). Actual: ({})", peer_limits.max_minimum_depth, common_fields.minimum_depth))); + } + + if let Some(ty) = &common_fields.channel_type { + if *ty != self.channel_type { + return Err(ChannelError::Close("Channel Type in accept_channel didn't match the one sent in open_channel.".to_owned())); + } + } else if their_features.supports_channel_type() { + // Assume they've accepted the channel type as they said they understand it. 
+ } else { + let channel_type = ChannelTypeFeatures::from_init(&their_features); + if channel_type != ChannelTypeFeatures::only_static_remote_key() { + return Err(ChannelError::Close("Only static_remote_key is supported for non-negotiated channel types".to_owned())); + } + self.channel_type = channel_type.clone(); + self.channel_transaction_parameters.channel_type_features = channel_type; + } + + let counterparty_shutdown_scriptpubkey = if their_features.supports_upfront_shutdown_script() { + match &common_fields.shutdown_scriptpubkey { + &Some(ref script) => { + // Peer is signaling upfront_shutdown and has opt-out with a 0-length script. We don't enforce anything + if script.len() == 0 { + None + } else { + if !script::is_bolt2_compliant(&script, their_features) { + return Err(ChannelError::Close(format!("Peer is signaling upfront_shutdown but has provided an unacceptable scriptpubkey format: {}", script))); + } + Some(script.clone()) + } + }, + // Peer is signaling upfront shutdown but don't opt-out with correct mechanism (a.k.a 0-length script). Peer looks buggy, we fail the channel + &None => { + return Err(ChannelError::Close("Peer is signaling upfront_shutdown but we don't get any script. Use 0-length script to opt-out".to_owned())); + } + } + } else { None }; + + self.counterparty_dust_limit_satoshis = common_fields.dust_limit_satoshis; + self.counterparty_max_htlc_value_in_flight_msat = cmp::min(common_fields.max_htlc_value_in_flight_msat, self.channel_value_satoshis * 1000); + self.counterparty_selected_channel_reserve_satoshis = Some(channel_reserve_satoshis); + self.counterparty_htlc_minimum_msat = common_fields.htlc_minimum_msat; + self.counterparty_max_accepted_htlcs = common_fields.max_accepted_htlcs; + + if peer_limits.trust_own_funding_0conf { + self.minimum_depth = Some(common_fields.minimum_depth); + } else { + self.minimum_depth = Some(cmp::max(1, common_fields.minimum_depth)); + } + + let counterparty_pubkeys = ChannelPublicKeys { + funding_pubkey: common_fields.funding_pubkey, + revocation_basepoint: RevocationBasepoint::from(common_fields.revocation_basepoint), + payment_point: common_fields.payment_basepoint, + delayed_payment_basepoint: DelayedPaymentBasepoint::from(common_fields.delayed_payment_basepoint), + htlc_basepoint: HtlcBasepoint::from(common_fields.htlc_basepoint) + }; + + self.channel_transaction_parameters.counterparty_parameters = Some(CounterpartyChannelTransactionParameters { + selected_contest_delay: common_fields.to_self_delay, + pubkeys: counterparty_pubkeys, + }); + + self.counterparty_cur_commitment_point = Some(common_fields.first_per_commitment_point); + self.counterparty_shutdown_scriptpubkey = counterparty_shutdown_scriptpubkey; + + self.channel_state = ChannelState::NegotiatingFunding( + NegotiatingFundingFlags::OUR_INIT_SENT | NegotiatingFundingFlags::THEIR_INIT_SENT + ); + self.inbound_handshake_limits_override = None; // We're done enforcing limits on our peer's handshake now. + + Ok(()) + } + /// Returns the block hash in which our funding transaction was confirmed. 
pub fn get_funding_tx_confirmed_in(&self) -> Option { self.funding_tx_confirmed_in @@ -2337,15 +3384,16 @@ impl ChannelContext where SP::Target: SignerProvider { cmp::max(self.config.options.cltv_expiry_delta, MIN_CLTV_EXPIRY_DELTA) } - pub fn get_max_dust_htlc_exposure_msat(&self, - fee_estimator: &LowerBoundedFeeEstimator) -> u64 - where F::Target: FeeEstimator - { + fn get_dust_exposure_limiting_feerate(&self, + fee_estimator: &LowerBoundedFeeEstimator, + ) -> u32 where F::Target: FeeEstimator { + fee_estimator.bounded_sat_per_1000_weight(ConfirmationTarget::OnChainSweep) + } + + pub fn get_max_dust_htlc_exposure_msat(&self, limiting_feerate_sat_per_kw: u32) -> u64 { match self.config.options.max_dust_htlc_exposure { MaxDustHTLCExposure::FeeRateMultiplier(multiplier) => { - let feerate_per_kw = fee_estimator.bounded_sat_per_1000_weight( - ConfirmationTarget::OnChainSweep) as u64; - feerate_per_kw.saturating_mul(multiplier) + (limiting_feerate_sat_per_kw as u64).saturating_mul(multiplier) }, MaxDustHTLCExposure::FixedLimitMsat(limit) => limit, } @@ -2695,8 +3743,16 @@ impl ChannelContext where SP::Target: SignerProvider { let revocation_basepoint = &self.get_holder_pubkeys().revocation_basepoint; let htlc_basepoint = &self.get_holder_pubkeys().htlc_basepoint; let counterparty_pubkeys = self.get_counterparty_pubkeys(); + let is_splice_pending = self.is_splice_pending(); + let counterparty_commitment_point = if !is_splice_pending { + self.counterparty_cur_commitment_point.unwrap() + } else { + // During splicing negotiation don't advance the commitment point + // TODO: check if this field is set (could get here when splice is initiated on a not-yet-ready channel) + self.counterparty_prev_commitment_point.unwrap() + }; - TxCreationKeys::derive_new(&self.secp_ctx, &self.counterparty_cur_commitment_point.unwrap(), &counterparty_pubkeys.delayed_payment_basepoint, &counterparty_pubkeys.htlc_basepoint, revocation_basepoint, htlc_basepoint) + TxCreationKeys::derive_new(&self.secp_ctx, &counterparty_commitment_point, &counterparty_pubkeys.delayed_payment_basepoint, &counterparty_pubkeys.htlc_basepoint, revocation_basepoint, htlc_basepoint) } /// Gets the redeemscript for the funding transaction output (ie the funding transaction output @@ -2738,86 +3794,111 @@ impl ChannelContext where SP::Target: SignerProvider { self.counterparty_forwarding_info.clone() } - /// Returns a HTLCStats about inbound pending htlcs - fn get_inbound_pending_htlc_stats(&self, outbound_feerate_update: Option) -> HTLCStats { + /// Returns a HTLCStats about pending htlcs + fn get_pending_htlc_stats(&self, outbound_feerate_update: Option, dust_exposure_limiting_feerate: u32) -> HTLCStats { let context = self; - let mut stats = HTLCStats { - pending_htlcs: context.pending_inbound_htlcs.len() as u32, - pending_htlcs_value_msat: 0, - on_counterparty_tx_dust_exposure_msat: 0, - on_holder_tx_dust_exposure_msat: 0, - holding_cell_msat: 0, - on_holder_tx_holding_cell_htlcs_count: 0, - }; + let uses_0_htlc_fee_anchors = self.get_channel_type().supports_anchors_zero_fee_htlc_tx(); - let (htlc_timeout_dust_limit, htlc_success_dust_limit) = if context.get_channel_type().supports_anchors_zero_fee_htlc_tx() { + let dust_buffer_feerate = context.get_dust_buffer_feerate(outbound_feerate_update); + let (htlc_timeout_dust_limit, htlc_success_dust_limit) = if uses_0_htlc_fee_anchors { (0, 0) } else { - let dust_buffer_feerate = context.get_dust_buffer_feerate(outbound_feerate_update) as u64; - (dust_buffer_feerate * 
htlc_timeout_tx_weight(context.get_channel_type()) / 1000, - dust_buffer_feerate * htlc_success_tx_weight(context.get_channel_type()) / 1000) + (dust_buffer_feerate as u64 * htlc_timeout_tx_weight(context.get_channel_type()) / 1000, + dust_buffer_feerate as u64 * htlc_success_tx_weight(context.get_channel_type()) / 1000) }; - let counterparty_dust_limit_timeout_sat = htlc_timeout_dust_limit + context.counterparty_dust_limit_satoshis; - let holder_dust_limit_success_sat = htlc_success_dust_limit + context.holder_dust_limit_satoshis; - for ref htlc in context.pending_inbound_htlcs.iter() { - stats.pending_htlcs_value_msat += htlc.amount_msat; - if htlc.amount_msat / 1000 < counterparty_dust_limit_timeout_sat { - stats.on_counterparty_tx_dust_exposure_msat += htlc.amount_msat; + + let mut on_holder_tx_dust_exposure_msat = 0; + let mut on_counterparty_tx_dust_exposure_msat = 0; + + let mut on_counterparty_tx_offered_nondust_htlcs = 0; + let mut on_counterparty_tx_accepted_nondust_htlcs = 0; + + let mut pending_inbound_htlcs_value_msat = 0; + + { + let counterparty_dust_limit_timeout_sat = htlc_timeout_dust_limit + context.counterparty_dust_limit_satoshis; + let holder_dust_limit_success_sat = htlc_success_dust_limit + context.holder_dust_limit_satoshis; + for ref htlc in context.pending_inbound_htlcs.iter() { + pending_inbound_htlcs_value_msat += htlc.amount_msat; + if htlc.amount_msat / 1000 < counterparty_dust_limit_timeout_sat { + on_counterparty_tx_dust_exposure_msat += htlc.amount_msat; + } else { + on_counterparty_tx_offered_nondust_htlcs += 1; + } + if htlc.amount_msat / 1000 < holder_dust_limit_success_sat { + on_holder_tx_dust_exposure_msat += htlc.amount_msat; + } } - if htlc.amount_msat / 1000 < holder_dust_limit_success_sat { - stats.on_holder_tx_dust_exposure_msat += htlc.amount_msat; + } + + let mut pending_outbound_htlcs_value_msat = 0; + let mut outbound_holding_cell_msat = 0; + let mut on_holder_tx_outbound_holding_cell_htlcs_count = 0; + let mut pending_outbound_htlcs = self.pending_outbound_htlcs.len(); + { + let counterparty_dust_limit_success_sat = htlc_success_dust_limit + context.counterparty_dust_limit_satoshis; + let holder_dust_limit_timeout_sat = htlc_timeout_dust_limit + context.holder_dust_limit_satoshis; + for ref htlc in context.pending_outbound_htlcs.iter() { + pending_outbound_htlcs_value_msat += htlc.amount_msat; + if htlc.amount_msat / 1000 < counterparty_dust_limit_success_sat { + on_counterparty_tx_dust_exposure_msat += htlc.amount_msat; + } else { + on_counterparty_tx_accepted_nondust_htlcs += 1; + } + if htlc.amount_msat / 1000 < holder_dust_limit_timeout_sat { + on_holder_tx_dust_exposure_msat += htlc.amount_msat; + } + } + + for update in context.holding_cell_htlc_updates.iter() { + if let &HTLCUpdateAwaitingACK::AddHTLC { ref amount_msat, .. } = update { + pending_outbound_htlcs += 1; + pending_outbound_htlcs_value_msat += amount_msat; + outbound_holding_cell_msat += amount_msat; + if *amount_msat / 1000 < counterparty_dust_limit_success_sat { + on_counterparty_tx_dust_exposure_msat += amount_msat; + } else { + on_counterparty_tx_accepted_nondust_htlcs += 1; + } + if *amount_msat / 1000 < holder_dust_limit_timeout_sat { + on_holder_tx_dust_exposure_msat += amount_msat; + } else { + on_holder_tx_outbound_holding_cell_htlcs_count += 1; + } + } } } - stats - } - - /// Returns a HTLCStats about pending outbound htlcs, *including* pending adds in our holding cell. 
- fn get_outbound_pending_htlc_stats(&self, outbound_feerate_update: Option) -> HTLCStats { - let context = self; - let mut stats = HTLCStats { - pending_htlcs: context.pending_outbound_htlcs.len() as u32, - pending_htlcs_value_msat: 0, - on_counterparty_tx_dust_exposure_msat: 0, - on_holder_tx_dust_exposure_msat: 0, - holding_cell_msat: 0, - on_holder_tx_holding_cell_htlcs_count: 0, - }; - let (htlc_timeout_dust_limit, htlc_success_dust_limit) = if context.get_channel_type().supports_anchors_zero_fee_htlc_tx() { - (0, 0) - } else { - let dust_buffer_feerate = context.get_dust_buffer_feerate(outbound_feerate_update) as u64; - (dust_buffer_feerate * htlc_timeout_tx_weight(context.get_channel_type()) / 1000, - dust_buffer_feerate * htlc_success_tx_weight(context.get_channel_type()) / 1000) - }; - let counterparty_dust_limit_success_sat = htlc_success_dust_limit + context.counterparty_dust_limit_satoshis; - let holder_dust_limit_timeout_sat = htlc_timeout_dust_limit + context.holder_dust_limit_satoshis; - for ref htlc in context.pending_outbound_htlcs.iter() { - stats.pending_htlcs_value_msat += htlc.amount_msat; - if htlc.amount_msat / 1000 < counterparty_dust_limit_success_sat { - stats.on_counterparty_tx_dust_exposure_msat += htlc.amount_msat; - } - if htlc.amount_msat / 1000 < holder_dust_limit_timeout_sat { - stats.on_holder_tx_dust_exposure_msat += htlc.amount_msat; + // Include any mining "excess" fees in the dust calculation + let excess_feerate_opt = outbound_feerate_update + .or(self.pending_update_fee.map(|(fee, _)| fee)) + .unwrap_or(self.feerate_per_kw) + .checked_sub(dust_exposure_limiting_feerate); + if let Some(excess_feerate) = excess_feerate_opt { + let on_counterparty_tx_nondust_htlcs = + on_counterparty_tx_accepted_nondust_htlcs + on_counterparty_tx_offered_nondust_htlcs; + on_counterparty_tx_dust_exposure_msat += + commit_tx_fee_msat(excess_feerate, on_counterparty_tx_nondust_htlcs, &self.channel_type); + if !self.channel_type.supports_anchors_zero_fee_htlc_tx() { + on_counterparty_tx_dust_exposure_msat += + on_counterparty_tx_accepted_nondust_htlcs as u64 * htlc_success_tx_weight(&self.channel_type) + * excess_feerate as u64 / 1000; + on_counterparty_tx_dust_exposure_msat += + on_counterparty_tx_offered_nondust_htlcs as u64 * htlc_timeout_tx_weight(&self.channel_type) + * excess_feerate as u64 / 1000; } } - for update in context.holding_cell_htlc_updates.iter() { - if let &HTLCUpdateAwaitingACK::AddHTLC { ref amount_msat, .. } = update { - stats.pending_htlcs += 1; - stats.pending_htlcs_value_msat += amount_msat; - stats.holding_cell_msat += amount_msat; - if *amount_msat / 1000 < counterparty_dust_limit_success_sat { - stats.on_counterparty_tx_dust_exposure_msat += amount_msat; - } - if *amount_msat / 1000 < holder_dust_limit_timeout_sat { - stats.on_holder_tx_dust_exposure_msat += amount_msat; - } else { - stats.on_holder_tx_holding_cell_htlcs_count += 1; - } - } + HTLCStats { + pending_inbound_htlcs: self.pending_inbound_htlcs.len(), + pending_outbound_htlcs, + pending_inbound_htlcs_value_msat, + pending_outbound_htlcs_value_msat, + on_counterparty_tx_dust_exposure_msat, + on_holder_tx_dust_exposure_msat, + outbound_holding_cell_msat, + on_holder_tx_outbound_holding_cell_htlcs_count, } - stats } /// Returns information on all pending inbound HTLCs. @@ -2922,9 +4003,11 @@ impl ChannelContext where SP::Target: SignerProvider { where F::Target: FeeEstimator { let context = &self; - // Note that we have to handle overflow due to the above case. 
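For intuition on the dust-exposure accounting introduced above, here is a minimal, self-contained sketch; the weight constants and helper are local stand-ins rather than LDK's exports, and the numbers are arbitrary:

```rust
// Illustrative sketch only: constants and helpers mirror the accounting in the hunk
// above but are local stand-ins, not LDK exports.
const COMMITMENT_TX_BASE_WEIGHT: u64 = 724; // approximate non-anchor commitment weight
const COMMITMENT_TX_WEIGHT_PER_HTLC: u64 = 172;

fn commit_tx_fee_msat(feerate_per_kw: u32, num_nondust_htlcs: u64) -> u64 {
    (COMMITMENT_TX_BASE_WEIGHT + num_nondust_htlcs * COMMITMENT_TX_WEIGHT_PER_HTLC)
        * feerate_per_kw as u64 / 1000 * 1000
}

fn main() {
    // Limit derived from MaxDustHTLCExposure::FeeRateMultiplier(10_000) and the
    // dust-exposure-limiting feerate (the on-chain sweep estimate in this patch).
    let limiting_feerate: u32 = 5_000; // sat per 1000 weight units
    let max_dust_exposure_msat = (limiting_feerate as u64).saturating_mul(10_000);
    assert_eq!(max_dust_exposure_msat, 50_000_000);

    // Commitment feerate above the limiting feerate counts its extra fee as dust exposure,
    // since that portion of the balance can effectively be burnt to fees.
    let commitment_feerate: u32 = 7_500;
    let excess_feerate = commitment_feerate.saturating_sub(limiting_feerate); // 2_500 sat/kW
    let nondust_htlcs = 3;
    let excess_fee_msat = commit_tx_fee_msat(excess_feerate, nondust_htlcs);
    assert_eq!(excess_fee_msat, 3_100_000); // (724 + 3 * 172) * 2_500 / 1000 * 1000
    assert!(excess_fee_msat < max_dust_exposure_msat);
}
```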
- let inbound_stats = context.get_inbound_pending_htlc_stats(None); - let outbound_stats = context.get_outbound_pending_htlc_stats(None); + // Note that we have to handle overflow due to the case mentioned in the docs in general + // here. + + let dust_exposure_limiting_feerate = self.get_dust_exposure_limiting_feerate(&fee_estimator); + let htlc_stats = context.get_pending_htlc_stats(None, dust_exposure_limiting_feerate); let mut balance_msat = context.value_to_self_msat; for ref htlc in context.pending_inbound_htlcs.iter() { @@ -2932,10 +4015,10 @@ impl ChannelContext where SP::Target: SignerProvider { balance_msat += htlc.amount_msat; } } - balance_msat -= outbound_stats.pending_htlcs_value_msat; + balance_msat -= htlc_stats.pending_outbound_htlcs_value_msat; let outbound_capacity_msat = context.value_to_self_msat - .saturating_sub(outbound_stats.pending_htlcs_value_msat) + .saturating_sub(htlc_stats.pending_outbound_htlcs_value_msat) .saturating_sub( context.counterparty_selected_channel_reserve_satoshis.unwrap_or(0) * 1000); @@ -2995,7 +4078,7 @@ impl ChannelContext where SP::Target: SignerProvider { let holder_selected_chan_reserve_msat = context.holder_selected_channel_reserve_satoshis * 1000; let remote_balance_msat = (context.channel_value_satoshis * 1000 - context.value_to_self_msat) - .saturating_sub(inbound_stats.pending_htlcs_value_msat); + .saturating_sub(htlc_stats.pending_inbound_htlcs_value_msat); if remote_balance_msat < max_reserved_commit_tx_fee_msat + holder_selected_chan_reserve_msat + anchor_outputs_value_msat { // If another HTLC's fee would reduce the remote's balance below the reserve limit @@ -3012,7 +4095,7 @@ impl ChannelContext where SP::Target: SignerProvider { // send above the dust limit (as the router can always overpay to meet the dust limit). 
let mut remaining_msat_below_dust_exposure_limit = None; let mut dust_exposure_dust_limit_msat = 0; - let max_dust_htlc_exposure_msat = context.get_max_dust_htlc_exposure_msat(fee_estimator); + let max_dust_htlc_exposure_msat = context.get_max_dust_htlc_exposure_msat(dust_exposure_limiting_feerate); let (htlc_success_dust_limit, htlc_timeout_dust_limit) = if context.get_channel_type().supports_anchors_zero_fee_htlc_tx() { (context.counterparty_dust_limit_satoshis, context.holder_dust_limit_satoshis) @@ -3021,18 +4104,32 @@ impl ChannelContext where SP::Target: SignerProvider { (context.counterparty_dust_limit_satoshis + dust_buffer_feerate * htlc_success_tx_weight(context.get_channel_type()) / 1000, context.holder_dust_limit_satoshis + dust_buffer_feerate * htlc_timeout_tx_weight(context.get_channel_type()) / 1000) }; - let on_counterparty_dust_htlc_exposure_msat = inbound_stats.on_counterparty_tx_dust_exposure_msat + outbound_stats.on_counterparty_tx_dust_exposure_msat; - if on_counterparty_dust_htlc_exposure_msat as i64 + htlc_success_dust_limit as i64 * 1000 - 1 > max_dust_htlc_exposure_msat.try_into().unwrap_or(i64::max_value()) { + + let excess_feerate_opt = self.feerate_per_kw.checked_sub(dust_exposure_limiting_feerate); + if let Some(excess_feerate) = excess_feerate_opt { + let htlc_dust_exposure_msat = + per_outbound_htlc_counterparty_commit_tx_fee_msat(excess_feerate, &context.channel_type); + let nondust_htlc_counterparty_tx_dust_exposure = + htlc_stats.on_counterparty_tx_dust_exposure_msat.saturating_add(htlc_dust_exposure_msat); + if nondust_htlc_counterparty_tx_dust_exposure > max_dust_htlc_exposure_msat { + // If adding an extra HTLC would put us over the dust limit in total fees, we cannot + // send any non-dust HTLCs. + available_capacity_msat = cmp::min(available_capacity_msat, htlc_success_dust_limit * 1000); + } + } + + if htlc_stats.on_counterparty_tx_dust_exposure_msat.saturating_add(htlc_success_dust_limit * 1000) > max_dust_htlc_exposure_msat.saturating_add(1) { + // Note that we don't use the `counterparty_tx_dust_exposure` (with + // `htlc_dust_exposure_msat`) here as it only applies to non-dust HTLCs. 
remaining_msat_below_dust_exposure_limit = - Some(max_dust_htlc_exposure_msat.saturating_sub(on_counterparty_dust_htlc_exposure_msat)); + Some(max_dust_htlc_exposure_msat.saturating_sub(htlc_stats.on_counterparty_tx_dust_exposure_msat)); dust_exposure_dust_limit_msat = cmp::max(dust_exposure_dust_limit_msat, htlc_success_dust_limit * 1000); } - let on_holder_dust_htlc_exposure_msat = inbound_stats.on_holder_tx_dust_exposure_msat + outbound_stats.on_holder_tx_dust_exposure_msat; - if on_holder_dust_htlc_exposure_msat as i64 + htlc_timeout_dust_limit as i64 * 1000 - 1 > max_dust_htlc_exposure_msat.try_into().unwrap_or(i64::max_value()) { + if htlc_stats.on_holder_tx_dust_exposure_msat as i64 + htlc_timeout_dust_limit as i64 * 1000 - 1 > max_dust_htlc_exposure_msat.try_into().unwrap_or(i64::max_value()) { remaining_msat_below_dust_exposure_limit = Some(cmp::min( remaining_msat_below_dust_exposure_limit.unwrap_or(u64::max_value()), - max_dust_htlc_exposure_msat.saturating_sub(on_holder_dust_htlc_exposure_msat))); + max_dust_htlc_exposure_msat.saturating_sub(htlc_stats.on_holder_tx_dust_exposure_msat))); dust_exposure_dust_limit_msat = cmp::max(dust_exposure_dust_limit_msat, htlc_timeout_dust_limit * 1000); } @@ -3045,16 +4142,16 @@ impl ChannelContext where SP::Target: SignerProvider { } available_capacity_msat = cmp::min(available_capacity_msat, - context.counterparty_max_htlc_value_in_flight_msat - outbound_stats.pending_htlcs_value_msat); + context.counterparty_max_htlc_value_in_flight_msat - htlc_stats.pending_outbound_htlcs_value_msat); - if outbound_stats.pending_htlcs + 1 > context.counterparty_max_accepted_htlcs as u32 { + if htlc_stats.pending_outbound_htlcs + 1 > context.counterparty_max_accepted_htlcs as usize { available_capacity_msat = 0; } AvailableBalances { inbound_capacity_msat: cmp::max(context.channel_value_satoshis as i64 * 1000 - context.value_to_self_msat as i64 - - context.get_inbound_pending_htlc_stats(None).pending_htlcs_value_msat as i64 + - htlc_stats.pending_inbound_htlcs_value_msat as i64 - context.holder_selected_channel_reserve_satoshis as i64 * 1000, 0) as u64, outbound_capacity_msat, @@ -3450,6 +4547,205 @@ impl ChannelContext where SP::Target: SignerProvider { self.channel_transaction_parameters.channel_type_features = self.channel_type.clone(); Ok(()) } + + // Interactive transaction construction + + #[cfg(any(dual_funding, splicing))] + pub fn begin_interactive_funding_tx_construction( + &mut self, dual_funding_context: &DualFundingChannelContext, signer_provider: &SP, + entropy_source: &ES, holder_node_id: PublicKey, is_initiator: bool, + ) -> Result, APIError> + where ES::Target: EntropySource + { + let mut funding_inputs_with_extra = dual_funding_context.our_funding_inputs.clone(); + // #SPLICING + #[cfg(splicing)] + if let Some(pending_splice) = &self.pending_splice_post { + if is_initiator { + // Add current funding tx as an extra, shared input + let prev_funding_input = pending_splice.get_input_of_previous_funding() + .map_err(|e| APIError::APIMisuseError { + err: format!("Internal error: Could not create input for previous funding transaction, channel_id {}, {:?}", self.channel_id(), e) + })?; + funding_inputs_with_extra.push(prev_funding_input); + } + } + + let mut funding_inputs_prev_outputs: Vec = Vec::with_capacity(funding_inputs_with_extra.len()); + // Check that vouts exist for each TxIn in provided transactions.
+ for (idx, input) in funding_inputs_with_extra.iter().enumerate() { + if let Some(output) = input.1.as_transaction().output.get(input.0.previous_output.vout as usize) { + funding_inputs_prev_outputs.push(output.clone()); + } else { + return Err(APIError::APIMisuseError { + err: format!("Transaction with txid {} does not have an output with vout of {} corresponding to TxIn at funding_inputs_with_extra[{}]", + input.1.as_transaction().txid(), input.0.previous_output.vout, idx) }); + } + } + + let total_input_satoshis: u64 = funding_inputs_with_extra.iter().map(|input| input.1.as_transaction().output[input.0.previous_output.vout as usize].value).sum(); + if total_input_satoshis < dual_funding_context.our_funding_satoshis { + return Err(APIError::APIMisuseError { + err: format!("Total value of funding inputs must be at least funding amount. It was {} sats", + total_input_satoshis) }); + } + + // Add output for funding tx + let mut funding_outputs = Vec::new(); + if is_initiator { + // add output + // #SPLICING Note on splicing: channel value at this point is changed to the post-splice value, so no special action needed + funding_outputs.push(TxOut { + value: self.get_value_satoshis(), + script_pubkey: self.get_funding_redeemscript().to_v0_p2wsh(), + }); + } + + maybe_add_funding_change_output(signer_provider, is_initiator, dual_funding_context.our_funding_satoshis, + &funding_inputs_prev_outputs, &mut funding_outputs, dual_funding_context.funding_feerate_sat_per_1000_weight, + total_input_satoshis, self.holder_dust_limit_satoshis, self.channel_keys_id).map_err( + |_| APIError::APIMisuseError { err: "Could not create change output".to_string() })?; + + let (tx_constructor, msg) = InteractiveTxConstructor::new( + entropy_source, self.channel_id(), dual_funding_context.funding_feerate_sat_per_1000_weight, + holder_node_id, self.counterparty_node_id, is_initiator, dual_funding_context.funding_tx_locktime, + funding_inputs_with_extra, funding_outputs, + ); + self.interactive_tx_constructor = Some(tx_constructor); + + Ok(msg) + } + + #[cfg(any(dual_funding, splicing))] + pub fn tx_signatures(&self, _msg: &msgs::TxSignatures)-> Result { + todo!(); + } + + #[cfg(any(dual_funding, splicing))] + pub fn tx_init_rbf(&self, _msg: &msgs::TxInitRbf)-> Result { + todo!(); + } + + #[cfg(any(dual_funding, splicing))] + pub fn tx_ack_rbf(&self, _msg: &msgs::TxAckRbf)-> Result { + todo!(); + } + + #[cfg(any(dual_funding, splicing))] + pub fn tx_abort(&self, _msg: &msgs::TxAbort)-> Result { + todo!(); + } + + /// Check if a splice is currently in progress + /// Can be called regardless of `splicing` configuration.
TODO: remove this note once `cfg(splicing)` is being removed + pub fn is_splice_pending(&self) -> bool { + #[cfg(splicing)] + return self.pending_splice_post.is_some(); + #[cfg(not(splicing))] + false + } + + #[cfg(splicing)] + pub fn generate_v2_channel_id_from_revocation_basepoints(&self) -> ChannelId { + ChannelId::v2_from_revocation_basepoints(&self.get_holder_pubkeys().revocation_basepoint, &self.get_counterparty_pubkeys().revocation_basepoint) + } + + /// Splice process starting; update capacity and state, reset funding tx + #[cfg(splicing)] + pub(crate) fn splice_start(&mut self, is_outgoing: bool, logger: &L) where L::Target: Logger { + // Set state, by this point handshake is complete + self.channel_state = ChannelState::NegotiatingFunding(NegotiatingFundingFlags::OUR_INIT_SENT | NegotiatingFundingFlags::THEIR_INIT_SENT); + + log_info!(logger, "Splicing process started, new channel value {}, outgoing {}, channel_id {}", self.channel_value_satoshis, is_outgoing, self.channel_id); + } + + /// Splice process finished, new funding transaction locked. + /// At this point the old funding transaction is spent. + #[cfg(splicing)] + pub(crate) fn splice_complete(&mut self, logger: &L) -> Result<(), ChannelError> + where L::Target: Logger + { + if !self.is_splice_pending() { + return Err(ChannelError::Warn(format!("Internal error: Channel is not currently splicing, channel_id {}", self.channel_id()))); + } + + // TODO: purge HTLCs + + // TODO: if there is a pre channel, with different channel ID, purge it + + self.pending_splice_pre = None; + self.pending_splice_post = None; + + log_trace!(logger, "Splicing completed, channel_id {}", self.channel_id); + + Ok(()) + } + + /// Create signature for the current funding tx input, used in the splicing case. + #[cfg(splicing)] + fn prev_funding_tx_create_holder_sig(&self, transaction: &Transaction, input_index: u16, input_value: u64/*, _redeem_script: &ScriptBuf*/) -> Result { + // #SPLICE-SIG + match &self.holder_signer { + ChannelSignerType::Ecdsa(ecdsa) => { + ecdsa.sign_splicing_funding_input(transaction, input_index, input_value, /*&redeem_script, */&self.secp_ctx) + .map_err(|_e| ChannelError::Close("Failed to sign the previous funding input in the new splicing funding tx".to_owned())) + }, + // TODO (taproot|arik) + #[cfg(taproot)] + _ => todo!() + } + } + + /// Prepare the witness on the current funding tx input (used in the splicing case), + /// containing our holder signature, and optionally the counterparty signature, or its empty placeholder.
+ #[cfg(splicing)] + fn prev_funding_tx_sign( + &self, transaction: &Transaction, counterparty_sig: Option, logger: &L + ) -> Result<(Transaction, Signature), ChannelError> where L::Target: Logger { + let (prev_funding_input_index, pre_channel_value) = if let Some(pending_splice) = &self.pending_splice_post { + ( + pending_splice.find_input_of_previous_funding(&transaction)?, + pending_splice.pre_channel_value() + ) + } else { + return Err(ChannelError::Warn(format!("Cannot sign splice transaction, channel is not in active splice, channel_id {}", self.channel_id))) + }; + debug_assert!((prev_funding_input_index as usize) < transaction.input.len()); + + // #SPLICE-SIG + // the redeem script + let sig_order_ours_first = self.get_holder_pubkeys().funding_pubkey.serialize() < self.counterparty_funding_pubkey().serialize(); + log_info!(logger, "Pubkeys used for redeem script: {} {} {}", &self.get_holder_pubkeys().funding_pubkey, &self.counterparty_funding_pubkey(), sig_order_ours_first); + + let redeem_script = self.get_funding_redeemscript(); + let holder_signature = self.prev_funding_tx_create_holder_sig(&transaction, prev_funding_input_index, pre_channel_value)?; // , &redeem_script)?; + let mut holder_sig = holder_signature.serialize_der().to_vec(); + holder_sig.push(EcdsaSighashType::All as u8); + // counterparty signature + let cp_sig = match counterparty_sig { + Some(s) => { + let mut sb = s.serialize_der().to_vec(); + sb.push(EcdsaSighashType::All as u8); + sb + }, + None => Vec::new(), // placeholder + }; + // prepare witness stack + let mut witness = Witness::new(); + witness.push(Vec::new()); + if sig_order_ours_first { + witness.push(holder_sig); + witness.push(cp_sig); + } else { + witness.push(cp_sig); + witness.push(holder_sig); + } + witness.push(redeem_script.clone().into_bytes()); + + let mut tx = transaction.clone(); + tx.input[prev_funding_input_index as usize].witness = witness; + Ok((tx, holder_signature)) + } } // Internal utility functions for channels @@ -3523,6 +4819,63 @@ pub(crate) fn commit_tx_fee_msat(feerate_per_kw: u32, num_htlcs: usize, channel_ (commitment_tx_base_weight(channel_type_features) + num_htlcs as u64 * COMMITMENT_TX_WEIGHT_PER_HTLC) * feerate_per_kw as u64 / 1000 * 1000 } +#[cfg(any(dual_funding, splicing))] +pub(super) fn maybe_add_funding_change_output(signer_provider: &SP, is_initiator: bool, + our_funding_satoshis: u64, funding_inputs_prev_outputs: &Vec, + funding_outputs: &mut Vec, funding_feerate_sat_per_1000_weight: u32, + total_input_satoshis: u64, holder_dust_limit_satoshis: u64, channel_keys_id: [u8; 32], +) -> Result, ChannelError> where + SP::Target: SignerProvider, +{ + let our_funding_inputs_weight = funding_inputs_prev_outputs.iter().fold(0u64, |weight, prev_output| { + weight.saturating_add(estimate_input_weight(prev_output).to_wu()) + }); + let our_funding_outputs_weight = funding_outputs.iter().fold(0u64, |weight, txout| { + weight.saturating_add(get_output_weight(&txout.script_pubkey).to_wu()) + }); + let our_contributed_weight = our_funding_outputs_weight.saturating_add(our_funding_inputs_weight); + let mut fees_sats = fee_for_weight(funding_feerate_sat_per_1000_weight, our_contributed_weight); + + // If we are the initiator, we must pay for weight of all common fields in the funding transaction. 
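The change-output helper this hunk is building up to can be summarized numerically; the following sketch uses assumed weight constants (they are not LDK's `TX_COMMON_FIELDS_WEIGHT` or estimator values) to show how the contributed weight, the initiator's common-fields surcharge, and the dust floor interact:

```rust
// Illustrative numbers only; the weight constants are assumptions, not LDK's definitions.
const TX_COMMON_FIELDS_WEIGHT: u64 = 44;  // version, locktime, counts, segwit marker/flag (approx.)
const INPUT_WEIGHT: u64 = 272;            // rough witness-spend input weight
const OUTPUT_WEIGHT: u64 = 124;           // rough P2WPKH output weight

// Fee for a given weight at sat-per-1000-weight, rounding up (the conservative choice).
fn fee_for_weight(feerate_sat_per_1000_weight: u32, weight: u64) -> u64 {
    (feerate_sat_per_1000_weight as u64 * weight + 999) / 1000
}

fn main() {
    let feerate: u32 = 2_000;
    let total_input_sats: u64 = 120_000;
    let our_funding_sats: u64 = 100_000;
    let holder_dust_limit_sats: u64 = 354;

    // One contributed input, one contributed output, plus the common fields the initiator pays for.
    let contributed_weight = INPUT_WEIGHT + OUTPUT_WEIGHT + TX_COMMON_FIELDS_WEIGHT;
    let fees_sats = fee_for_weight(feerate, contributed_weight);

    let remaining = total_input_sats.saturating_sub(our_funding_sats).saturating_sub(fees_sats);
    if remaining < holder_dust_limit_sats {
        println!("no change output added");
    } else {
        // The change output itself costs fee too, so its value shrinks accordingly.
        let change_value = remaining.saturating_sub(fee_for_weight(feerate, OUTPUT_WEIGHT));
        println!("change output of {change_value} sats added"); // 18_872 with these numbers
    }
}
```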
+ if is_initiator { + let common_fees = fee_for_weight(funding_feerate_sat_per_1000_weight, TX_COMMON_FIELDS_WEIGHT); + fees_sats = fees_sats.saturating_add(common_fees); + } + + let remaining_value = total_input_satoshis + .saturating_sub(our_funding_satoshis) + .saturating_sub(fees_sats); + + if remaining_value < holder_dust_limit_satoshis { + Ok(None) + } else { + let change_script = signer_provider.get_destination_script(channel_keys_id).map_err( + |_| ChannelError::Close("Failed to get change script as new destination script".to_owned()) + )?; + let mut change_output = TxOut { + value: remaining_value, + script_pubkey: change_script, + }; + let change_output_weight = get_output_weight(&change_output.script_pubkey).to_wu(); + + let change_output_fee = fee_for_weight(funding_feerate_sat_per_1000_weight, change_output_weight); + change_output.value = remaining_value.saturating_sub(change_output_fee); + funding_outputs.push(change_output.clone()); + Ok(Some(change_output)) + } +} + +pub(crate) fn per_outbound_htlc_counterparty_commit_tx_fee_msat(feerate_per_kw: u32, channel_type_features: &ChannelTypeFeatures) -> u64 { + // Note that we need to divide before multiplying to round properly, + // since the lowest denomination of bitcoin on-chain is the satoshi. + let commitment_tx_fee = COMMITMENT_TX_WEIGHT_PER_HTLC * feerate_per_kw as u64 / 1000 * 1000; + if channel_type_features.supports_anchors_zero_fee_htlc_tx() { + commitment_tx_fee + htlc_success_tx_weight(channel_type_features) * feerate_per_kw as u64 / 1000 + } else { + commitment_tx_fee + } +} + /// Context for dual-funded channels. #[cfg(any(dual_funding, splicing))] pub(super) struct DualFundingChannelContext { @@ -3532,9 +4885,11 @@ pub(super) struct DualFundingChannelContext { pub their_funding_satoshis: u64, /// The funding transaction locktime suggested by the initiator. If set by us, it is always set /// to the current block height to align incentives against fee-sniping. - pub funding_tx_locktime: u32, + pub funding_tx_locktime: LockTime, /// The feerate set by the initiator to be used for the funding transaction. pub funding_feerate_sat_per_1000_weight: u32, + /// The funding inputs we will be contributing to the channel. + pub our_funding_inputs: Vec<(TxIn, TransactionU16LenLimited)>, } // Holder designates channel data owned for the benefit of the user client. @@ -3543,9 +4898,12 @@ pub(super) struct Channel where SP::Target: SignerProvider { pub context: ChannelContext, #[cfg(any(dual_funding, splicing))] pub dual_funding_channel_context: Option, + #[cfg(any(dual_funding, splicing))] + pub interactive_tx_signing_session: Option, } #[cfg(any(test, fuzzing))] +#[derive(Clone)] struct CommitmentTxInfoCached { fee: u64, total_pending_htlcs: usize, @@ -4023,6 +5381,16 @@ impl Channel where self.context.channel_state.clear_waiting_for_batch(); } + #[cfg(any(dual_funding, splicing))] + pub fn set_next_funding_txid(&mut self, txid: &Txid) { + self.context.next_funding_txid = Some(*txid); + } + + #[cfg(any(dual_funding, splicing))] + pub fn clear_next_funding_txid(&mut self) { + self.context.next_funding_txid = None; + } + /// Unsets the existing funding information. /// /// This must only be used if the channel has not yet completed funding and has not been used. @@ -4037,6 +5405,19 @@ impl Channel where self.context.channel_id = temporary_channel_id; } + /// Set the state to ChannelReady. + /// In case of splicing, also mark it complete. 
+ fn set_channel_ready(&mut self, logger: &L) -> Result<(), ChannelError> where L::Target: Logger { + #[cfg(splicing)] + if self.context.is_splice_pending() { + // Mark the splicing process complete + self.context.splice_complete(logger)?; + } + self.context.channel_state = ChannelState::ChannelReady(self.context.channel_state.with_funded_state_flags_mask().into()); + self.context.update_time_counter += 1; + Ok(()) + } + /// Handles a channel_ready message from our peer. If we've already sent our channel_ready /// and the channel is now usable (and public), this may generate an announcement_signatures to /// reply with. @@ -4067,7 +5448,12 @@ impl Channel where let mut check_reconnection = false; match &self.context.channel_state { ChannelState::AwaitingChannelReady(flags) => { + if flags.is_set(AwaitingChannelReadyFlags::IS_SPLICE) { + return Err(ChannelError::Close("channel_ready received, but there is a splice in progress".to_owned())); + } - let flags = flags.clone().clear(FundedStateFlags::ALL.into()); + let flags = flags.clone() + .clear(FundedStateFlags::ALL.into()) + .clear(AwaitingChannelReadyFlags::IS_SPLICE); debug_assert!(!flags.is_set(AwaitingChannelReadyFlags::OUR_CHANNEL_READY) || !flags.is_set(AwaitingChannelReadyFlags::WAITING_FOR_BATCH)); if flags.clone().clear(AwaitingChannelReadyFlags::WAITING_FOR_BATCH) == AwaitingChannelReadyFlags::THEIR_CHANNEL_READY { // If we reconnected before sending our `channel_ready` they may still resend theirs. @@ -4075,8 +5461,7 @@ impl Channel where } else if flags.clone().clear(AwaitingChannelReadyFlags::WAITING_FOR_BATCH).is_empty() { self.context.channel_state.set_their_channel_ready(); } else if flags == AwaitingChannelReadyFlags::OUR_CHANNEL_READY { - self.context.channel_state = ChannelState::ChannelReady(self.context.channel_state.with_funded_state_flags_mask().into()); - self.context.update_time_counter += 1; + self.set_channel_ready(logger)?; } else { // We're in `WAITING_FOR_BATCH`, so we should wait until we're ready. debug_assert!(flags.is_set(AwaitingChannelReadyFlags::WAITING_FOR_BATCH)); @@ -4120,9 +5505,60 @@ impl Channel where Ok(self.get_announcement_sigs(node_signer, chain_hash, user_config, best_block.height, logger)) } - pub fn update_add_htlc( + /// Handles a splice_locked message from our peer. If we've already sent our splice_locked + /// and the channel is now usable (and public), this may generate an announcement_signatures to + /// reply with. + /// Similar to `channel_ready`. + #[cfg(splicing)] + pub fn splice_locked( + &mut self, _msg: &msgs::SpliceLocked, node_signer: &NS, chain_hash: ChainHash, + user_config: &UserConfig, best_block: &BestBlock, logger: &L + ) -> Result, ChannelError> + where + NS::Target: NodeSigner, + L::Target: Logger + { + // Our splice_locked shouldn't have been sent if we are waiting for other channels in the + // batch, but we can receive splice_locked messages.
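To make the `IS_SPLICE` gating concrete, a toy flag model follows; it is only a sketch of the rule (reject `channel_ready` while a splice is pending, require a pending splice for `splice_locked`) and not LDK's actual `AwaitingChannelReadyFlags` type:

```rust
// Toy model of the gating; LDK's real `AwaitingChannelReadyFlags` is richer than a plain bitmask.
const THEIR_CHANNEL_READY: u8 = 1 << 0;
const OUR_CHANNEL_READY: u8 = 1 << 1;
const IS_SPLICE: u8 = 1 << 2;

fn handle_channel_ready(flags: u8) -> Result<(), &'static str> {
    if flags & IS_SPLICE != 0 {
        return Err("channel_ready received, but a splice is in progress");
    }
    Ok(())
}

fn handle_splice_locked(flags: u8) -> Result<(), &'static str> {
    if flags & IS_SPLICE == 0 {
        return Err("splice_locked received, but no splice is in progress");
    }
    Ok(())
}

fn main() {
    assert!(handle_channel_ready(IS_SPLICE | OUR_CHANNEL_READY).is_err());
    assert!(handle_splice_locked(IS_SPLICE | THEIR_CHANNEL_READY).is_ok());
    assert!(handle_channel_ready(THEIR_CHANNEL_READY).is_ok());
}
```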
+ let mut check_reconnection = false; + match &self.context.channel_state { + ChannelState::AwaitingChannelReady(flags) => { + if !flags.is_set(AwaitingChannelReadyFlags::IS_SPLICE) { + return Err(ChannelError::Close("Splice_locked received, but there is no splicing in progress".to_owned())); + } + let flags = flags.clone() + .clear(FundedStateFlags::ALL.into()) + .clear(AwaitingChannelReadyFlags::IS_SPLICE.into()); + debug_assert!(!flags.is_set(AwaitingChannelReadyFlags::OUR_CHANNEL_READY) || !flags.is_set(AwaitingChannelReadyFlags::WAITING_FOR_BATCH)); + if flags.clone().clear(AwaitingChannelReadyFlags::WAITING_FOR_BATCH) == AwaitingChannelReadyFlags::THEIR_CHANNEL_READY { + // If we reconnected before sending our `splice_locked` they may still resend theirs. + check_reconnection = true; + } else if flags.clone().clear(AwaitingChannelReadyFlags::WAITING_FOR_BATCH).is_empty() { + self.context.channel_state.set_their_channel_ready(); + } else if flags == AwaitingChannelReadyFlags::OUR_CHANNEL_READY { + self.set_channel_ready(logger)?; + } else { + // We're in `WAITING_FOR_BATCH`, so we should wait until we're ready. + debug_assert!(flags.is_set(AwaitingChannelReadyFlags::WAITING_FOR_BATCH)); + } + } + // If we reconnected before sending our `splice_locked` they may still resend theirs. + ChannelState::ChannelReady(_) => check_reconnection = true, + _ => return Err(ChannelError::Close("Peer sent a splice_locked at a strange time".to_owned())), + } + if check_reconnection { + return Ok(None); + } + + log_info!(logger, "Received splice_locked from peer for channel {}", &self.context.channel_id()); + + Ok(self.get_announcement_sigs(node_signer, chain_hash, user_config, best_block.height, logger)) + } + + pub fn update_add_htlc( &mut self, msg: &msgs::UpdateAddHTLC, pending_forward_status: PendingHTLCStatus, - ) -> Result<(), ChannelError> { + fee_estimator: &LowerBoundedFeeEstimator, + ) -> Result<(), ChannelError> where F::Target: FeeEstimator { if !matches!(self.context.channel_state, ChannelState::ChannelReady(_)) { return Err(ChannelError::Close("Got add HTLC message when channel was not in an operational state".to_owned())); } @@ -4143,11 +5579,12 @@ impl Channel where return Err(ChannelError::Close(format!("Remote side tried to send less than our minimum HTLC value. Lower limit: ({}). 
Actual: ({})", self.context.holder_htlc_minimum_msat, msg.amount_msat))); } - let inbound_stats = self.context.get_inbound_pending_htlc_stats(None); - if inbound_stats.pending_htlcs + 1 > self.context.holder_max_accepted_htlcs as u32 { + let dust_exposure_limiting_feerate = self.context.get_dust_exposure_limiting_feerate(&fee_estimator); + let htlc_stats = self.context.get_pending_htlc_stats(None, dust_exposure_limiting_feerate); + if htlc_stats.pending_inbound_htlcs + 1 > self.context.holder_max_accepted_htlcs as usize { return Err(ChannelError::Close(format!("Remote tried to push more than our max accepted HTLCs ({})", self.context.holder_max_accepted_htlcs))); } - if inbound_stats.pending_htlcs_value_msat + msg.amount_msat > self.context.holder_max_htlc_value_in_flight_msat { + if htlc_stats.pending_inbound_htlcs_value_msat + msg.amount_msat > self.context.holder_max_htlc_value_in_flight_msat { return Err(ChannelError::Close(format!("Remote HTLC add would put them over our max HTLC value ({})", self.context.holder_max_htlc_value_in_flight_msat))); } @@ -4173,7 +5610,7 @@ impl Channel where } let pending_value_to_self_msat = - self.context.value_to_self_msat + inbound_stats.pending_htlcs_value_msat - removed_outbound_total_msat; + self.context.value_to_self_msat + htlc_stats.pending_inbound_htlcs_value_msat - removed_outbound_total_msat; let pending_remote_value_msat = self.context.channel_value_satoshis * 1000 - pending_value_to_self_msat; if pending_remote_value_msat < msg.amount_msat { @@ -4306,6 +5743,109 @@ impl Channel where Ok(()) } + #[cfg(any(dual_funding, splicing))] + pub fn commitment_signed_initial_v2(&mut self, msg: &msgs::CommitmentSigned, + best_block: BestBlock, signer_provider: &SP, logger: &L + ) -> Result::EcdsaSigner>, ChannelError> + where L::Target: Logger + { + if !matches!(self.context.channel_state, ChannelState::FundingNegotiated) { + return Err(ChannelError::Close("Received initial commitment_signed before funding transaction constructed!".to_owned())); + } + let is_splice_pending = self.context.is_splice_pending(); + if !is_splice_pending { + if self.context.commitment_secrets.get_min_seen_secret() != (1 << 48) || + self.context.cur_counterparty_commitment_transaction_number != INITIAL_COMMITMENT_NUMBER || + self.context.cur_holder_commitment_transaction_number != INITIAL_COMMITMENT_NUMBER { + panic!("Should not have advanced channel commitment tx numbers prior to funding_created"); + } + } + let _dual_funding_channel_context = self.dual_funding_channel_context.as_mut().ok_or( + ChannelError::Close("Have no context for dual-funded channel".to_owned()) + )?; + + let funding_script = self.context.get_funding_redeemscript(); + + let counterparty_keys = self.context.build_remote_transaction_keys(); + // During splicing negotiation don't advance the commitment point + let comm_number_delta = if !is_splice_pending { 0 } else { 1 }; + let holder_commitment_transaction_number = self.context.cur_holder_commitment_transaction_number + comm_number_delta; + let counterparty_commitment_transaction_number = self.context.cur_counterparty_commitment_transaction_number + comm_number_delta; + let counterparty_initial_commitment_tx = self.context.build_commitment_transaction(counterparty_commitment_transaction_number, &counterparty_keys, false, false, logger).tx; + let counterparty_trusted_tx = counterparty_initial_commitment_tx.trust(); + let counterparty_initial_bitcoin_tx = counterparty_trusted_tx.built_transaction(); + + log_trace!(logger, "Initial counterparty tx for 
channel {} is: txid {} tx {}", + &self.context.channel_id(), counterparty_initial_bitcoin_tx.txid, encode::serialize_hex(&counterparty_initial_bitcoin_tx.transaction)); + + let holder_signer = self.context.build_holder_transaction_keys(holder_commitment_transaction_number); + let initial_commitment_tx = self.context.build_commitment_transaction(holder_commitment_transaction_number, &holder_signer, true, false, logger).tx; + { + let trusted_tx = initial_commitment_tx.trust(); + let initial_commitment_bitcoin_tx = trusted_tx.built_transaction(); + let sighash = initial_commitment_bitcoin_tx.get_sighash_all(&funding_script, self.context.channel_value_satoshis); + // They sign our commitment transaction, allowing us to broadcast the tx if we wish. + if let Err(_) = self.context.secp_ctx.verify_ecdsa(&sighash, &msg.signature, &self.context.get_counterparty_pubkeys().funding_pubkey) { + return Err(ChannelError::Close("Invalid commitment_signed signature from peer".to_owned())); + } + } + + let holder_commitment_tx = HolderCommitmentTransaction::new( + initial_commitment_tx, + msg.signature, + Vec::new(), + &self.context.get_holder_pubkeys().funding_pubkey, + self.context.counterparty_funding_pubkey() + ); + + self.context.holder_signer.as_ref().validate_holder_commitment(&holder_commitment_tx, Vec::new()) + .map_err(|_| ChannelError::Close("Failed to validate our commitment".to_owned()))?; + + let funding_redeemscript = self.context.get_funding_redeemscript(); + let funding_txo = self.context.get_funding_txo().unwrap(); + let funding_txo_script = funding_redeemscript.to_v0_p2wsh(); + let obscure_factor = get_commitment_transaction_number_obscure_factor(&self.context.get_holder_pubkeys().payment_point, &self.context.get_counterparty_pubkeys().payment_point, self.context.is_outbound()); + let shutdown_script = self.context.shutdown_scriptpubkey.clone().map(|script| script.into_inner()); + let mut monitor_signer = signer_provider.derive_channel_signer(self.context.channel_value_satoshis, self.context.channel_keys_id); + monitor_signer.provide_channel_parameters(&self.context.channel_transaction_parameters); + let channel_monitor = ChannelMonitor::new(self.context.secp_ctx.clone(), monitor_signer, + shutdown_script, self.context.get_holder_selected_contest_delay(), + &self.context.destination_script, (funding_txo, funding_txo_script), + &self.context.channel_transaction_parameters, + funding_redeemscript.clone(), self.context.channel_value_satoshis, + obscure_factor, + holder_commitment_tx, best_block, self.context.counterparty_node_id, self.context.channel_id()); + + let counterparty_commitment_point = if !is_splice_pending { + self.context.counterparty_cur_commitment_point.unwrap() + } else { + // During splicing negotiation don't advance the commitment point + self.context.counterparty_prev_commitment_point.unwrap() + }; + channel_monitor.provide_initial_counterparty_commitment_tx( + counterparty_initial_bitcoin_tx.txid, Vec::new(), + counterparty_commitment_transaction_number, + counterparty_commitment_point, + counterparty_initial_commitment_tx.feerate_per_kw(), + counterparty_initial_commitment_tx.to_broadcaster_value_sat(), + counterparty_initial_commitment_tx.to_countersignatory_value_sat(), logger); + + assert!(!self.context.channel_state.is_monitor_update_in_progress()); // We have no had any monitor(s) yet to fail update! 
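The commitment-number bookkeeping used here can be restated in isolation: the usual path decrements both counters after the initial `commitment_signed`, while the splice path's +1 delta cancels that decrement. A toy sketch, assuming the conventional 2^48 − 1 starting value:

```rust
// Toy restatement of the delta logic; not LDK's code.
const INITIAL_COMMITMENT_NUMBER: u64 = (1 << 48) - 1;

fn updated_commitment_number(cur: u64, is_splice_pending: bool) -> u64 {
    // Mirrors: build with `cur + delta`, then store `(cur + delta) - 1` afterwards.
    let comm_number_delta = if is_splice_pending { 1 } else { 0 };
    (cur + comm_number_delta) - 1
}

fn main() {
    // Fresh v2 channel: the counter advances past the initial value.
    assert_eq!(
        updated_commitment_number(INITIAL_COMMITMENT_NUMBER, false),
        INITIAL_COMMITMENT_NUMBER - 1
    );
    // Splice on an established channel: the counter is left where it was.
    let cur = INITIAL_COMMITMENT_NUMBER - 42;
    assert_eq!(updated_commitment_number(cur, true), cur);
}
```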
+ // Update to next (unless splicing when it in fact stays the same) + self.context.cur_holder_commitment_transaction_number = holder_commitment_transaction_number - 1; + self.context.cur_counterparty_commitment_transaction_number = counterparty_commitment_transaction_number - 1; + + log_info!(logger, "Received initial commitment_signed from peer for channel {}", &self.context.channel_id()); + + let need_channel_ready = { let res = self.check_get_channel_ready(0, logger); res.0.is_some() || res.1.is_some() }; + self.context.channel_state = ChannelState::AwaitingChannelReady( + if is_splice_pending { AwaitingChannelReadyFlags::IS_SPLICE } else { AwaitingChannelReadyFlags::new() } + ); + self.monitor_updating_paused(false, false, need_channel_ready, Vec::new(), Vec::new(), Vec::new()); + + Ok(channel_monitor) + } + pub fn commitment_signed(&mut self, msg: &msgs::CommitmentSigned, logger: &L) -> Result, ChannelError> where L::Target: Logger { @@ -4953,10 +6493,126 @@ impl Channel where log_debug!(logger, "Received a valid revoke_and_ack for channel {} with no reply necessary. {} monitor update.", &self.context.channel_id(), release_state_str); - self.monitor_updating_paused(false, false, false, to_forward_infos, revoked_htlcs, finalized_claimed_htlcs); - return_with_htlcs_to_fail!(htlcs_to_fail); + self.monitor_updating_paused(false, false, false, to_forward_infos, revoked_htlcs, finalized_claimed_htlcs); + return_with_htlcs_to_fail!(htlcs_to_fail); + } + } + } + } + + #[cfg(any(dual_funding, splicing))] + pub fn verify_interactive_tx_signatures(&mut self, _witnesses: &Vec) { + if let Some(ref mut _signing_session) = self.interactive_tx_signing_session { + // Check that sighash_all was used: + // TODO(dual_funding): Check sig for sighash + } + } + + #[cfg(any(dual_funding, splicing))] + pub fn tx_signatures(&mut self, msg: &msgs::TxSignatures, logger: &L) -> Result<(Option, Option), ChannelError> + where L::Target: Logger { + if let Some(ref mut signing_session) = self.interactive_tx_signing_session { + let expected_witness_count = signing_session.remote_inputs_count(); + if msg.witnesses.len() != expected_witness_count { + return Err(ChannelError::Close(format!("Witness count does not match contributed input count, {} {}", + msg.witnesses.len(), expected_witness_count))); + } + let is_splice_pending = self.context.is_splice_pending(); + let expected_shared = if is_splice_pending { 1 } else { 0 }; + let funding_sig_count = if msg.shared_input_signature.is_some() { 1 } else { 0 }; + if funding_sig_count != expected_shared { + return Err(ChannelError::Close(format!("Shared signature count (shared_input_signature) presence does not match expected, {} {}", + funding_sig_count, expected_shared))); + } + + for witness in &msg.witnesses { + if witness.is_empty() { + return Err(ChannelError::Close("Unexpected empty witness in tx_signatures received".to_string())); + } + + // TODO(dual_funding): Check all sigs are SIGHASH_ALL. + + // TODO(dual_funding): I don't see how we're going to be able to ensure witness-standardness + // for spending. Doesn't seem to be anything in rust-bitcoin. + } + + if msg.tx_hash != signing_session.unsigned_tx.txid() { + return Err(ChannelError::Close("The txid for the transaction does not match".to_string())); + } + + if signing_session.counterparty_tx_signatures().is_some() { + log_warn!(logger, "Warning: counterparty_tx_signatures already set! 
{:?}", signing_session.counterparty_tx_signatures()); + // TODO check if to handle as error + } + let (tx_signatures_opt, mut funding_tx_opt) = signing_session.received_tx_signatures(msg.clone()); + #[cfg(splicing)] + if let Some(funding_tx) = &funding_tx_opt { + if is_splice_pending { + if let Some(cp_sig) = &msg.shared_input_signature { + // Update signature on previous funding input: + // - our signature is (re)generated (was overwritten by witness received in tx_signatures) + // - counterparty signature is set, taken from tx_signatures shared_input_signature field + let (updated_funding_tx, _) = self.context.prev_funding_tx_sign(funding_tx, Some(cp_sig.clone()), logger)?; + funding_tx_opt = Some(updated_funding_tx); + } + + // Note: no state update, state should be AwaitingChannelReady already + debug_assert!(matches!(self.context.channel_state, ChannelState::AwaitingChannelReady(f) if f.is_splice())); + } + } + if funding_tx_opt.is_some() { + self.context.funding_transaction = funding_tx_opt.clone(); + #[cfg(splicing)] + { + self.context.funding_transaction_saved = funding_tx_opt.clone(); + } + } + // Note: no state change (to AwaitingChannelReady) at this point yet + // Note: cannot mark interactive tx session is complete as of yet, funding_transaction_signed() can come later from the client + + Ok((tx_signatures_opt, funding_tx_opt)) + } else { + return Err(ChannelError::Close( + "Unexpected tx_signatures. No funding transaction awaiting signatures".to_string())); + } + } + + #[cfg(any(dual_funding, splicing))] + pub fn tx_init_rbf(&self, _msg: &msgs::TxInitRbf)-> Result { + todo!(); + } + + #[cfg(any(dual_funding, splicing))] + pub fn tx_ack_rbf(&self, _msg: &msgs::TxAckRbf)-> Result { + todo!(); + } + + /// Called when funding tx is signed (local part). Save the funding transaction if completely signed. + #[cfg(any(dual_funding, splicing))] + pub fn funding_transaction_signed(&mut self, channel_id: &ChannelId, witnesses: Vec) -> Result, ChannelError> { + self.verify_interactive_tx_signatures(&witnesses); + if let Some(ref mut signing_session) = self.interactive_tx_signing_session { + // Splicing + // Shared signature (used in splicing): holder signature on the prev funding tx input should have been saved. + // include it in tlvs field + let mut tlvs = None; + if self.context.is_splice_pending() { + if let Some(s) = signing_session.shared_signature { + tlvs = Some(s); + } // TODO error + debug_assert!(tlvs.is_some()); + } + let (tx_signatures_opt, funding_tx_opt) = signing_session.provide_holder_witnesses(*channel_id, witnesses, tlvs); + if funding_tx_opt.is_some() { + self.context.funding_transaction = funding_tx_opt.clone(); + #[cfg(splicing)] + { + self.context.funding_transaction_saved = funding_tx_opt.clone(); } } + Ok(tx_signatures_opt) + } else { + return Err(ChannelError::Warn(format!("Channel with id {} has no pending signing session, not expecting funding signatures", channel_id))); } } @@ -4995,12 +6651,12 @@ impl Channel where } // Before proposing a feerate update, check that we can actually afford the new fee. 
- let inbound_stats = self.context.get_inbound_pending_htlc_stats(Some(feerate_per_kw)); - let outbound_stats = self.context.get_outbound_pending_htlc_stats(Some(feerate_per_kw)); + let dust_exposure_limiting_feerate = self.context.get_dust_exposure_limiting_feerate(&fee_estimator); + let htlc_stats = self.context.get_pending_htlc_stats(Some(feerate_per_kw), dust_exposure_limiting_feerate); let keys = self.context.build_holder_transaction_keys(self.context.cur_holder_commitment_transaction_number); let commitment_stats = self.context.build_commitment_transaction(self.context.cur_holder_commitment_transaction_number, &keys, true, true, logger); - let buffer_fee_msat = commit_tx_fee_sat(feerate_per_kw, commitment_stats.num_nondust_htlcs + outbound_stats.on_holder_tx_holding_cell_htlcs_count as usize + CONCURRENT_INBOUND_HTLC_FEE_BUFFER as usize, self.context.get_channel_type()) * 1000; - let holder_balance_msat = commitment_stats.local_balance_msat - outbound_stats.holding_cell_msat; + let buffer_fee_msat = commit_tx_fee_sat(feerate_per_kw, commitment_stats.num_nondust_htlcs + htlc_stats.on_holder_tx_outbound_holding_cell_htlcs_count as usize + CONCURRENT_INBOUND_HTLC_FEE_BUFFER as usize, self.context.get_channel_type()) * 1000; + let holder_balance_msat = commitment_stats.local_balance_msat - htlc_stats.outbound_holding_cell_msat; if holder_balance_msat < buffer_fee_msat + self.context.counterparty_selected_channel_reserve_satoshis.unwrap() * 1000 { //TODO: auto-close after a number of failures? log_debug!(logger, "Cannot afford to send new feerate at {}", feerate_per_kw); @@ -5008,14 +6664,12 @@ impl Channel where } // Note, we evaluate pending htlc "preemptive" trimmed-to-dust threshold at the proposed `feerate_per_kw`. - let holder_tx_dust_exposure = inbound_stats.on_holder_tx_dust_exposure_msat + outbound_stats.on_holder_tx_dust_exposure_msat; - let counterparty_tx_dust_exposure = inbound_stats.on_counterparty_tx_dust_exposure_msat + outbound_stats.on_counterparty_tx_dust_exposure_msat; - let max_dust_htlc_exposure_msat = self.context.get_max_dust_htlc_exposure_msat(fee_estimator); - if holder_tx_dust_exposure > max_dust_htlc_exposure_msat { + let max_dust_htlc_exposure_msat = self.context.get_max_dust_htlc_exposure_msat(dust_exposure_limiting_feerate); + if htlc_stats.on_holder_tx_dust_exposure_msat > max_dust_htlc_exposure_msat { log_debug!(logger, "Cannot afford to send new feerate at {} without infringing max dust htlc exposure", feerate_per_kw); return None; } - if counterparty_tx_dust_exposure > max_dust_htlc_exposure_msat { + if htlc_stats.on_counterparty_tx_dust_exposure_msat > max_dust_htlc_exposure_msat { log_debug!(logger, "Cannot afford to send new feerate at {} without infringing max dust htlc exposure", feerate_per_kw); return None; } @@ -5248,20 +6902,16 @@ impl Channel where self.context.pending_update_fee = Some((msg.feerate_per_kw, FeeUpdateState::RemoteAnnounced)); self.context.update_time_counter += 1; // Check that we won't be pushed over our dust exposure limit by the feerate increase. 
- if !self.context.channel_type.supports_anchors_zero_fee_htlc_tx() { - let inbound_stats = self.context.get_inbound_pending_htlc_stats(None); - let outbound_stats = self.context.get_outbound_pending_htlc_stats(None); - let holder_tx_dust_exposure = inbound_stats.on_holder_tx_dust_exposure_msat + outbound_stats.on_holder_tx_dust_exposure_msat; - let counterparty_tx_dust_exposure = inbound_stats.on_counterparty_tx_dust_exposure_msat + outbound_stats.on_counterparty_tx_dust_exposure_msat; - let max_dust_htlc_exposure_msat = self.context.get_max_dust_htlc_exposure_msat(fee_estimator); - if holder_tx_dust_exposure > max_dust_htlc_exposure_msat { - return Err(ChannelError::Close(format!("Peer sent update_fee with a feerate ({}) which may over-expose us to dust-in-flight on our own transactions (totaling {} msat)", - msg.feerate_per_kw, holder_tx_dust_exposure))); - } - if counterparty_tx_dust_exposure > max_dust_htlc_exposure_msat { - return Err(ChannelError::Close(format!("Peer sent update_fee with a feerate ({}) which may over-expose us to dust-in-flight on our counterparty's transactions (totaling {} msat)", - msg.feerate_per_kw, counterparty_tx_dust_exposure))); - } + let dust_exposure_limiting_feerate = self.context.get_dust_exposure_limiting_feerate(&fee_estimator); + let htlc_stats = self.context.get_pending_htlc_stats(None, dust_exposure_limiting_feerate); + let max_dust_htlc_exposure_msat = self.context.get_max_dust_htlc_exposure_msat(dust_exposure_limiting_feerate); + if htlc_stats.on_holder_tx_dust_exposure_msat > max_dust_htlc_exposure_msat { + return Err(ChannelError::Close(format!("Peer sent update_fee with a feerate ({}) which may over-expose us to dust-in-flight on our own transactions (totaling {} msat)", + msg.feerate_per_kw, htlc_stats.on_holder_tx_dust_exposure_msat))); + } + if htlc_stats.on_counterparty_tx_dust_exposure_msat > max_dust_htlc_exposure_msat { + return Err(ChannelError::Close(format!("Peer sent update_fee with a feerate ({}) which may over-expose us to dust-in-flight on our counterparty's transactions (totaling {} msat)", + msg.feerate_per_kw, htlc_stats.on_counterparty_tx_dust_exposure_msat))); } Ok(()) } @@ -5276,14 +6926,16 @@ impl Channel where let funding_signed = if self.context.signer_pending_funding && !self.context.is_outbound() { self.context.get_funding_signed_msg(logger).1 } else { None }; - let channel_ready = if funding_signed.is_some() { - self.check_get_channel_ready(0) - } else { None }; + let (channel_ready, splice_locked) = if funding_signed.is_some() { + self.check_get_channel_ready(0, logger) + } else { (None, None) }; - log_trace!(logger, "Signer unblocked with {} commitment_update, {} funding_signed and {} channel_ready", + log_trace!(logger, "Signer unblocked with {} commitment_update, {} funding_signed, {} channel_ready, and {} splice_locked", if commitment_update.is_some() { "a" } else { "no" }, if funding_signed.is_some() { "a" } else { "no" }, - if channel_ready.is_some() { "a" } else { "no" }); + if channel_ready.is_some() { "a" } else { "no" }, + if splice_locked.is_some() { "a" } else { "no" }, + ); SignerResumeUpdates { commitment_update, @@ -6105,9 +7757,9 @@ impl Channel where return Err(("Shutdown was already sent", 0x4000|8)) } - let inbound_stats = self.context.get_inbound_pending_htlc_stats(None); - let outbound_stats = self.context.get_outbound_pending_htlc_stats(None); - let max_dust_htlc_exposure_msat = self.context.get_max_dust_htlc_exposure_msat(fee_estimator); + let dust_exposure_limiting_feerate = 
self.context.get_dust_exposure_limiting_feerate(&fee_estimator); + let htlc_stats = self.context.get_pending_htlc_stats(None, dust_exposure_limiting_feerate); + let max_dust_htlc_exposure_msat = self.context.get_max_dust_htlc_exposure_msat(dust_exposure_limiting_feerate); let (htlc_timeout_dust_limit, htlc_success_dust_limit) = if self.context.get_channel_type().supports_anchors_zero_fee_htlc_tx() { (0, 0) } else { @@ -6117,17 +7769,27 @@ impl Channel where }; let exposure_dust_limit_timeout_sats = htlc_timeout_dust_limit + self.context.counterparty_dust_limit_satoshis; if msg.amount_msat / 1000 < exposure_dust_limit_timeout_sats { - let on_counterparty_tx_dust_htlc_exposure_msat = inbound_stats.on_counterparty_tx_dust_exposure_msat + outbound_stats.on_counterparty_tx_dust_exposure_msat + msg.amount_msat; + let on_counterparty_tx_dust_htlc_exposure_msat = htlc_stats.on_counterparty_tx_dust_exposure_msat + msg.amount_msat; if on_counterparty_tx_dust_htlc_exposure_msat > max_dust_htlc_exposure_msat { log_info!(logger, "Cannot accept value that would put our exposure to dust HTLCs at {} over the limit {} on counterparty commitment tx", on_counterparty_tx_dust_htlc_exposure_msat, max_dust_htlc_exposure_msat); return Err(("Exceeded our dust exposure limit on counterparty commitment tx", 0x1000|7)) } + } else { + let htlc_dust_exposure_msat = + per_outbound_htlc_counterparty_commit_tx_fee_msat(self.context.feerate_per_kw, &self.context.channel_type); + let counterparty_tx_dust_exposure = + htlc_stats.on_counterparty_tx_dust_exposure_msat.saturating_add(htlc_dust_exposure_msat); + if counterparty_tx_dust_exposure > max_dust_htlc_exposure_msat { + log_info!(logger, "Cannot accept value that would put our exposure to tx fee dust at {} over the limit {} on counterparty commitment tx", + counterparty_tx_dust_exposure, max_dust_htlc_exposure_msat); + return Err(("Exceeded our tx fee dust exposure limit on counterparty commitment tx", 0x1000|7)) + } } let exposure_dust_limit_success_sats = htlc_success_dust_limit + self.context.holder_dust_limit_satoshis; if msg.amount_msat / 1000 < exposure_dust_limit_success_sats { - let on_holder_tx_dust_htlc_exposure_msat = inbound_stats.on_holder_tx_dust_exposure_msat + outbound_stats.on_holder_tx_dust_exposure_msat + msg.amount_msat; + let on_holder_tx_dust_htlc_exposure_msat = htlc_stats.on_holder_tx_dust_exposure_msat + msg.amount_msat; if on_holder_tx_dust_htlc_exposure_msat > max_dust_htlc_exposure_msat { log_info!(logger, "Cannot accept value that would put our exposure to dust HTLCs at {} over the limit {} on holder commitment tx", on_holder_tx_dust_htlc_exposure_msat, max_dust_htlc_exposure_msat); @@ -6151,7 +7813,7 @@ impl Channel where } let pending_value_to_self_msat = - self.context.value_to_self_msat + inbound_stats.pending_htlcs_value_msat - removed_outbound_total_msat; + self.context.value_to_self_msat + htlc_stats.pending_inbound_htlcs_value_msat - removed_outbound_total_msat; let pending_remote_value_msat = self.context.channel_value_satoshis * 1000 - pending_value_to_self_msat; @@ -6349,12 +8011,12 @@ impl Channel where self.context.channel_update_status = status; } - fn check_get_channel_ready(&mut self, height: u32) -> Option { + fn check_get_channel_ready(&mut self, height: u32, logger: &L) -> (Option, Option) where L::Target: Logger { // Called: // * always when a new block/transactions are confirmed with the new height // * when funding is signed with a height of 0 if self.context.funding_tx_confirmation_height == 0 && 
self.context.minimum_depth != Some(0) { - return None; + return (None, None); } let funding_tx_confirmations = height as i64 - self.context.funding_tx_confirmation_height as i64 + 1; @@ -6363,25 +8025,32 @@ impl Channel where } if funding_tx_confirmations < self.context.minimum_depth.unwrap_or(0) as i64 { - return None; + return (None, None); } // If we're still pending the signature on a funding transaction, then we're not ready to send a // channel_ready yet. if self.context.signer_pending_funding { - return None; + return (None, None); } + let was_splice = matches!(self.context.channel_state, ChannelState::AwaitingChannelReady(f) if f.is_splice()); // Note that we don't include ChannelState::WaitingForBatch as we don't want to send // channel_ready until the entire batch is ready. - let need_commitment_update = if matches!(self.context.channel_state, ChannelState::AwaitingChannelReady(f) if f.clone().clear(FundedStateFlags::ALL.into()).is_empty()) { + let await_flags = if let ChannelState::AwaitingChannelReady(f) = self.context.channel_state { + Some(f.clone() + .clear(FundedStateFlags::ALL.into()) + .clear(AwaitingChannelReadyFlags::IS_SPLICE.into())) + } else { + None + }; + let need_commitment_update = if await_flags.is_some() && await_flags.unwrap().is_empty() { self.context.channel_state.set_our_channel_ready(); true - } else if matches!(self.context.channel_state, ChannelState::AwaitingChannelReady(f) if f.clone().clear(FundedStateFlags::ALL.into()) == AwaitingChannelReadyFlags::THEIR_CHANNEL_READY) { - self.context.channel_state = ChannelState::ChannelReady(self.context.channel_state.with_funded_state_flags_mask().into()); - self.context.update_time_counter += 1; + } else if await_flags.is_some() && await_flags.unwrap() == AwaitingChannelReadyFlags::THEIR_CHANNEL_READY { + let _res = self.set_channel_ready(logger); true - } else if matches!(self.context.channel_state, ChannelState::AwaitingChannelReady(f) if f.clone().clear(FundedStateFlags::ALL.into()) == AwaitingChannelReadyFlags::OUR_CHANNEL_READY) { + } else if await_flags.is_some() && await_flags.unwrap() == AwaitingChannelReadyFlags::OUR_CHANNEL_READY { // We got a reorg but not enough to trigger a force close, just ignore. 
false } else { @@ -6404,19 +8073,27 @@ impl Channel where if need_commitment_update { if !self.context.channel_state.is_monitor_update_in_progress() { if !self.context.channel_state.is_peer_disconnected() { - let next_per_commitment_point = - self.context.holder_signer.as_ref().get_per_commitment_point(INITIAL_COMMITMENT_NUMBER - 1, &self.context.secp_ctx); - return Some(msgs::ChannelReady { - channel_id: self.context.channel_id, - next_per_commitment_point, - short_channel_id_alias: Some(self.context.outbound_scid_alias), - }); + if !was_splice { + let next_per_commitment_point = + self.context.holder_signer.as_ref().get_per_commitment_point(INITIAL_COMMITMENT_NUMBER - 1, &self.context.secp_ctx); + return (Some(msgs::ChannelReady { + channel_id: self.context.channel_id, + next_per_commitment_point, + short_channel_id_alias: Some(self.context.outbound_scid_alias), + }), None); + } else { + // #SPLICING + return (None, Some(msgs::SpliceLocked { + channel_id: self.context.channel_id, + splice_txid: self.context.channel_transaction_parameters.funding_outpoint.unwrap().txid, + })); + } } } else { self.context.monitor_pending_channel_ready = true; } } - None + (None, None) } /// When a transaction is confirmed, we check whether it is or spends the funding transaction @@ -6425,12 +8102,12 @@ impl Channel where pub fn transactions_confirmed( &mut self, block_hash: &BlockHash, height: u32, txdata: &TransactionData, chain_hash: ChainHash, node_signer: &NS, user_config: &UserConfig, logger: &L - ) -> Result<(Option, Option), ClosureReason> + ) -> Result<(Option, Option, Option), ClosureReason> where NS::Target: NodeSigner, L::Target: Logger { - let mut msgs = (None, None); + let mut msgs = (None, None, None); if let Some(funding_txo) = self.context.get_funding_txo() { for &(index_in_block, tx) in txdata.iter() { // Check if the transaction is the expected funding transaction, and if it is, @@ -6470,6 +8147,12 @@ impl Channel where self.context.short_channel_id = match scid_from_parts(height as u64, index_in_block as u64, txo_idx as u64) { Ok(scid) => Some(scid), Err(_) => panic!("Block was bogus - either height was > 16 million, had > 16 million transactions, or had > 65k outputs"), + }; + #[cfg(any(dual_funding, splicing))] + if self.interactive_tx_signing_session.is_some() { + // Mark that the interactive tx session is complete + self.interactive_tx_signing_session = None; + log_info!(logger, "Interactive transaction signing session closed for channel {}", &self.context.channel_id); } } // If this is a coinbase transaction and not a 0-conf channel @@ -6483,10 +8166,16 @@ impl Channel where // If we allow 1-conf funding, we may need to check for channel_ready here and // send it immediately instead of waiting for a best_block_updated call (which // may have already happened for this block). 
- if let Some(channel_ready) = self.check_get_channel_ready(height) { + let (channel_ready_opt, splice_locked_opt) = self.check_get_channel_ready(height, logger); + if let Some(channel_ready) = channel_ready_opt { log_info!(logger, "Sending a channel_ready to our peer for channel {}", &self.context.channel_id); let announcement_sigs = self.get_announcement_sigs(node_signer, chain_hash, user_config, height, logger); - msgs = (Some(channel_ready), announcement_sigs); + msgs = (Some(channel_ready), None, announcement_sigs); + } + if let Some(splice_locked) = splice_locked_opt { + log_info!(logger, "Sending a splice_locked to our peer for channel {}", &self.context.channel_id); + let announcement_sigs = self.get_announcement_sigs(node_signer, chain_hash, user_config, height, logger); + msgs = (None, Some(splice_locked), announcement_sigs); } } for inp in tx.input.iter() { @@ -6514,7 +8203,7 @@ impl Channel where pub fn best_block_updated( &mut self, height: u32, highest_header_time: u32, chain_hash: ChainHash, node_signer: &NS, user_config: &UserConfig, logger: &L - ) -> Result<(Option, Vec<(HTLCSource, PaymentHash)>, Option), ClosureReason> + ) -> Result<(Option, Option, Vec<(HTLCSource, PaymentHash)>, Option), ClosureReason> where NS::Target: NodeSigner, L::Target: Logger @@ -6525,7 +8214,7 @@ impl Channel where fn do_best_block_updated( &mut self, height: u32, highest_header_time: u32, chain_node_signer: Option<(ChainHash, &NS, &UserConfig)>, logger: &L - ) -> Result<(Option, Vec<(HTLCSource, PaymentHash)>, Option), ClosureReason> + ) -> Result<(Option, Option, Vec<(HTLCSource, PaymentHash)>, Option), ClosureReason> where NS::Target: NodeSigner, L::Target: Logger @@ -6549,12 +8238,20 @@ impl Channel where self.context.update_time_counter = cmp::max(self.context.update_time_counter, highest_header_time); - if let Some(channel_ready) = self.check_get_channel_ready(height) { + let (channel_ready_opt, splice_locked_opt) = self.check_get_channel_ready(height, logger); + if let Some(channel_ready) = channel_ready_opt { let announcement_sigs = if let Some((chain_hash, node_signer, user_config)) = chain_node_signer { self.get_announcement_sigs(node_signer, chain_hash, user_config, height, logger) } else { None }; log_info!(logger, "Sending a channel_ready to our peer for channel {}", &self.context.channel_id); - return Ok((Some(channel_ready), timed_out_htlcs, announcement_sigs)); + return Ok((Some(channel_ready), None, timed_out_htlcs, announcement_sigs)); + } + if let Some(splice_locked) = splice_locked_opt { + let announcement_sigs = if let Some((chain_hash, node_signer, user_config)) = chain_node_signer { + self.get_announcement_sigs(node_signer, chain_hash, user_config, height, logger) + } else { None }; + log_info!(logger, "Sending a splice_locked to our peer for channel {}", &self.context.channel_id); + return Ok((None, Some(splice_locked), timed_out_htlcs, announcement_sigs)); } if matches!(self.context.channel_state, ChannelState::ChannelReady(_)) || @@ -6593,7 +8290,7 @@ impl Channel where let announcement_sigs = if let Some((chain_hash, node_signer, user_config)) = chain_node_signer { self.get_announcement_sigs(node_signer, chain_hash, user_config, height, logger) } else { None }; - Ok((None, timed_out_htlcs, announcement_sigs)) + Ok((None, None, timed_out_htlcs, announcement_sigs)) } /// Indicates the funding transaction is no longer confirmed in the main chain. This may @@ -6609,8 +8306,9 @@ impl Channel where // time we saw and it will be ignored. 
let best_time = self.context.update_time_counter; match self.do_best_block_updated(reorg_height, best_time, None::<(ChainHash, &&dyn NodeSigner, &UserConfig)>, logger) { - Ok((channel_ready, timed_out_htlcs, announcement_sigs)) => { + Ok((channel_ready, splice_locked, timed_out_htlcs, announcement_sigs)) => { assert!(channel_ready.is_none(), "We can't generate a funding with 0 confirmations?"); + assert!(splice_locked.is_none(), "Cannot occur during splicing ?"); assert!(timed_out_htlcs.is_empty(), "We can't have accepted HTLCs with a timeout before our funding confirmation?"); assert!(announcement_sigs.is_none(), "We can't generate an announcement_sigs with 0 confirmations?"); Ok(()) @@ -6863,13 +8561,58 @@ impl Channel where next_remote_commitment_number: INITIAL_COMMITMENT_NUMBER - self.context.cur_counterparty_commitment_transaction_number - 1, your_last_per_commitment_secret: remote_last_secret, my_current_per_commitment_point: dummy_pubkey, - // TODO(dual_funding): If we've sent `commtiment_signed` for an interactive transaction - // construction but have not received `tx_signatures` we MUST set `next_funding_txid` to the - // txid of that interactive transaction, else we MUST NOT set it. - next_funding_txid: None, + next_funding_txid: self.context.next_funding_txid, } } + /// #SPLICING STEP2 I + /// Inspired by get_open_channel() + /// Get the splice message that can be sent during splice initiation + #[cfg(splicing)] + pub fn get_splice_init(&mut self, our_funding_contribution_satoshis: i64, signer_provider: &SP, + funding_feerate_perkw: u32, locktime: u32 + ) -> msgs::SpliceInit { + if !self.context.is_outbound() { + panic!("Tried to initiate a splice on an inbound channel?"); + } + + // TODO impl, checks + /* + if self.context.have_received_message() { + panic!("Cannot generate an open_channel after we've moved forward"); + } + + if self.cur_holder_commitment_transaction_number != INITIAL_COMMITMENT_NUMBER { + panic!("Tried to send an open_channel for a channel that has already advanced"); + } + + let first_per_commitment_point = self.holder_signer.get_per_commitment_point(self.cur_holder_commitment_transaction_number, &self.secp_ctx); + */ + + // At this point we are not committed to the new channel value yet, but the funding pubkey + // depends on the channel value, so we create here a new funding pubkey with the new + // channel value. + // Note that channel_keys_id is supposed NOT to change + let funding_pubkey = { + // TODO: Funding pubkey generation requires the post channel value, but that is not known yet, + // the acceptor contribution is missing. 
There is a need for a way to generate a new funding pubkey, + // not based on the channel value + let pre_channel_value = self.context.channel_value_satoshis; + let incomplete_post_splice_channel_value = SplicingChannelValues::compute_post_value(pre_channel_value, our_funding_contribution_satoshis, 0); + let holder_signer = signer_provider.derive_channel_signer(incomplete_post_splice_channel_value, self.context.channel_keys_id); + holder_signer.pubkeys().funding_pubkey + }; + + // TODO how to handle channel capacity, orig is stored in Channel, has to be updated, in the interim there are two + msgs::SpliceInit { + channel_id: self.context.channel_id, + funding_contribution_satoshis: our_funding_contribution_satoshis, + funding_feerate_perkw, + locktime, + funding_pubkey, + require_confirmed_inputs: None, + } + } // Send stuff to our remote peers: @@ -7146,6 +8889,7 @@ impl Channel where channel_id: self.context.channel_id, signature, htlc_signatures, + batch: None, #[cfg(taproot)] partial_signature_with_nonce: None, }, (counterparty_commitment_txid, commitment_stats.htlcs_included))) @@ -7314,7 +9058,7 @@ impl OutboundV1Channel where SP::Target: SignerProvider { outbound_scid_alias: u64, temporary_channel_id: Option ) -> Result, APIError> where ES::Target: EntropySource, - F::Target: FeeEstimator + F::Target: FeeEstimator { let holder_selected_channel_reserve_satoshis = get_holder_selected_channel_reserve_satoshis(channel_value_satoshis, config); if holder_selected_channel_reserve_satoshis < MIN_CHAN_DUST_LIMIT_SATOSHIS { @@ -7347,7 +9091,7 @@ impl OutboundV1Channel where SP::Target: SignerProvider { holder_signer, pubkeys, )?, - unfunded_context: UnfundedChannelContext { unfunded_channel_age_ticks: 0 } + unfunded_context: UnfundedChannelContext::default(), }; Ok(chan) } @@ -7424,7 +9168,11 @@ impl OutboundV1Channel where SP::Target: SignerProvider { self.context.minimum_depth = Some(COINBASE_MATURITY); } - self.context.funding_transaction = Some(funding_transaction); + self.context.funding_transaction = Some(funding_transaction.clone()); + #[cfg(splicing)] + { + self.context.funding_transaction_saved = Some(funding_transaction); + } self.context.is_batch_funding = Some(()).filter(|_| is_batch_funding); let funding_created = self.get_funding_created_msg(logger); @@ -7456,6 +9204,12 @@ impl OutboundV1Channel where SP::Target: SignerProvider { Ok(self.get_open_channel(chain_hash)) } + /// Returns true if we can resume the channel by sending the [`msgs::OpenChannel`] again. 
+ pub fn is_resumable(&self) -> bool { + !self.context.have_received_message() && + self.context.cur_holder_commitment_transaction_number == INITIAL_COMMITMENT_NUMBER + } + pub fn get_open_channel(&self, chain_hash: ChainHash) -> msgs::OpenChannel { if !self.context.is_outbound() { panic!("Tried to open a channel for an inbound channel?"); @@ -7502,135 +9256,8 @@ impl OutboundV1Channel where SP::Target: SignerProvider { // Message handlers pub fn accept_channel(&mut self, msg: &msgs::AcceptChannel, default_limits: &ChannelHandshakeLimits, their_features: &InitFeatures) -> Result<(), ChannelError> { - let peer_limits = if let Some(ref limits) = self.context.inbound_handshake_limits_override { limits } else { default_limits }; - - // Check sanity of message fields: - if !self.context.is_outbound() { - return Err(ChannelError::Close("Got an accept_channel message from an inbound peer".to_owned())); - } - if !matches!(self.context.channel_state, ChannelState::NegotiatingFunding(flags) if flags == NegotiatingFundingFlags::OUR_INIT_SENT) { - return Err(ChannelError::Close("Got an accept_channel message at a strange time".to_owned())); - } - if msg.common_fields.dust_limit_satoshis > 21000000 * 100000000 { - return Err(ChannelError::Close(format!("Peer never wants payout outputs? dust_limit_satoshis was {}", msg.common_fields.dust_limit_satoshis))); - } - if msg.channel_reserve_satoshis > self.context.channel_value_satoshis { - return Err(ChannelError::Close(format!("Bogus channel_reserve_satoshis ({}). Must not be greater than ({})", msg.channel_reserve_satoshis, self.context.channel_value_satoshis))); - } - if msg.common_fields.dust_limit_satoshis > self.context.holder_selected_channel_reserve_satoshis { - return Err(ChannelError::Close(format!("Dust limit ({}) is bigger than our channel reserve ({})", msg.common_fields.dust_limit_satoshis, self.context.holder_selected_channel_reserve_satoshis))); - } - if msg.channel_reserve_satoshis > self.context.channel_value_satoshis - self.context.holder_selected_channel_reserve_satoshis { - return Err(ChannelError::Close(format!("Bogus channel_reserve_satoshis ({}). Must not be greater than channel value minus our reserve ({})", - msg.channel_reserve_satoshis, self.context.channel_value_satoshis - self.context.holder_selected_channel_reserve_satoshis))); - } - let full_channel_value_msat = (self.context.channel_value_satoshis - msg.channel_reserve_satoshis) * 1000; - if msg.common_fields.htlc_minimum_msat >= full_channel_value_msat { - return Err(ChannelError::Close(format!("Minimum htlc value ({}) is full channel value ({})", msg.common_fields.htlc_minimum_msat, full_channel_value_msat))); - } - let max_delay_acceptable = u16::min(peer_limits.their_to_self_delay, MAX_LOCAL_BREAKDOWN_TIMEOUT); - if msg.common_fields.to_self_delay > max_delay_acceptable { - return Err(ChannelError::Close(format!("They wanted our payments to be delayed by a needlessly long period. Upper limit: {}. Actual: {}", max_delay_acceptable, msg.common_fields.to_self_delay))); - } - if msg.common_fields.max_accepted_htlcs < 1 { - return Err(ChannelError::Close("0 max_accepted_htlcs makes for a useless channel".to_owned())); - } - if msg.common_fields.max_accepted_htlcs > MAX_HTLCS { - return Err(ChannelError::Close(format!("max_accepted_htlcs was {}. It must not be larger than {}", msg.common_fields.max_accepted_htlcs, MAX_HTLCS))); - } - - // Now check against optional parameters as set by config... 
- if msg.common_fields.htlc_minimum_msat > peer_limits.max_htlc_minimum_msat { - return Err(ChannelError::Close(format!("htlc_minimum_msat ({}) is higher than the user specified limit ({})", msg.common_fields.htlc_minimum_msat, peer_limits.max_htlc_minimum_msat))); - } - if msg.common_fields.max_htlc_value_in_flight_msat < peer_limits.min_max_htlc_value_in_flight_msat { - return Err(ChannelError::Close(format!("max_htlc_value_in_flight_msat ({}) is less than the user specified limit ({})", msg.common_fields.max_htlc_value_in_flight_msat, peer_limits.min_max_htlc_value_in_flight_msat))); - } - if msg.channel_reserve_satoshis > peer_limits.max_channel_reserve_satoshis { - return Err(ChannelError::Close(format!("channel_reserve_satoshis ({}) is higher than the user specified limit ({})", msg.channel_reserve_satoshis, peer_limits.max_channel_reserve_satoshis))); - } - if msg.common_fields.max_accepted_htlcs < peer_limits.min_max_accepted_htlcs { - return Err(ChannelError::Close(format!("max_accepted_htlcs ({}) is less than the user specified limit ({})", msg.common_fields.max_accepted_htlcs, peer_limits.min_max_accepted_htlcs))); - } - if msg.common_fields.dust_limit_satoshis < MIN_CHAN_DUST_LIMIT_SATOSHIS { - return Err(ChannelError::Close(format!("dust_limit_satoshis ({}) is less than the implementation limit ({})", msg.common_fields.dust_limit_satoshis, MIN_CHAN_DUST_LIMIT_SATOSHIS))); - } - if msg.common_fields.dust_limit_satoshis > MAX_CHAN_DUST_LIMIT_SATOSHIS { - return Err(ChannelError::Close(format!("dust_limit_satoshis ({}) is greater than the implementation limit ({})", msg.common_fields.dust_limit_satoshis, MAX_CHAN_DUST_LIMIT_SATOSHIS))); - } - if msg.common_fields.minimum_depth > peer_limits.max_minimum_depth { - return Err(ChannelError::Close(format!("We consider the minimum depth to be unreasonably large. Expected minimum: ({}). Actual: ({})", peer_limits.max_minimum_depth, msg.common_fields.minimum_depth))); - } - - if let Some(ty) = &msg.common_fields.channel_type { - if *ty != self.context.channel_type { - return Err(ChannelError::Close("Channel Type in accept_channel didn't match the one sent in open_channel.".to_owned())); - } - } else if their_features.supports_channel_type() { - // Assume they've accepted the channel type as they said they understand it. - } else { - let channel_type = ChannelTypeFeatures::from_init(&their_features); - if channel_type != ChannelTypeFeatures::only_static_remote_key() { - return Err(ChannelError::Close("Only static_remote_key is supported for non-negotiated channel types".to_owned())); - } - self.context.channel_type = channel_type.clone(); - self.context.channel_transaction_parameters.channel_type_features = channel_type; - } - - let counterparty_shutdown_scriptpubkey = if their_features.supports_upfront_shutdown_script() { - match &msg.common_fields.shutdown_scriptpubkey { - &Some(ref script) => { - // Peer is signaling upfront_shutdown and has opt-out with a 0-length script. We don't enforce anything - if script.len() == 0 { - None - } else { - if !script::is_bolt2_compliant(&script, their_features) { - return Err(ChannelError::Close(format!("Peer is signaling upfront_shutdown but has provided an unacceptable scriptpubkey format: {}", script))); - } - Some(script.clone()) - } - }, - // Peer is signaling upfront shutdown but don't opt-out with correct mechanism (a.k.a 0-length script). Peer looks buggy, we fail the channel - &None => { - return Err(ChannelError::Close("Peer is signaling upfront_shutdown but we don't get any script. 
Use 0-length script to opt-out".to_owned())); - } - } - } else { None }; - - self.context.counterparty_dust_limit_satoshis = msg.common_fields.dust_limit_satoshis; - self.context.counterparty_max_htlc_value_in_flight_msat = cmp::min(msg.common_fields.max_htlc_value_in_flight_msat, self.context.channel_value_satoshis * 1000); - self.context.counterparty_selected_channel_reserve_satoshis = Some(msg.channel_reserve_satoshis); - self.context.counterparty_htlc_minimum_msat = msg.common_fields.htlc_minimum_msat; - self.context.counterparty_max_accepted_htlcs = msg.common_fields.max_accepted_htlcs; - - if peer_limits.trust_own_funding_0conf { - self.context.minimum_depth = Some(msg.common_fields.minimum_depth); - } else { - self.context.minimum_depth = Some(cmp::max(1, msg.common_fields.minimum_depth)); - } - - let counterparty_pubkeys = ChannelPublicKeys { - funding_pubkey: msg.common_fields.funding_pubkey, - revocation_basepoint: RevocationBasepoint::from(msg.common_fields.revocation_basepoint), - payment_point: msg.common_fields.payment_basepoint, - delayed_payment_basepoint: DelayedPaymentBasepoint::from(msg.common_fields.delayed_payment_basepoint), - htlc_basepoint: HtlcBasepoint::from(msg.common_fields.htlc_basepoint) - }; - - self.context.channel_transaction_parameters.counterparty_parameters = Some(CounterpartyChannelTransactionParameters { - selected_contest_delay: msg.common_fields.to_self_delay, - pubkeys: counterparty_pubkeys, - }); - - self.context.counterparty_cur_commitment_point = Some(msg.common_fields.first_per_commitment_point); - self.context.counterparty_shutdown_scriptpubkey = counterparty_shutdown_scriptpubkey; - - self.context.channel_state = ChannelState::NegotiatingFunding( - NegotiatingFundingFlags::OUR_INIT_SENT | NegotiatingFundingFlags::THEIR_INIT_SENT - ); - self.context.inbound_handshake_limits_override = None; // We're done enforcing limits on our peer's handshake now. - - Ok(()) + self.context.do_accept_channel_checks( + default_limits, their_features, &msg.common_fields, msg.channel_reserve_satoshis) } /// Handles a funding_signed message from the remote end. 
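// Editor's note: the long inline validation removed above now lives in
// `ChannelContext::do_accept_channel_checks`, which `accept_channel` (V1) calls with the
// counterparty's explicit `channel_reserve_satoshis` and which `accept_channel_v2` (further
// below in this patch) calls with a reserve derived via `get_v2_channel_reserve_satoshis`.
// The following is a minimal, self-contained sketch of the reserve sanity checks being
// centralized; the function name and error type are illustrative only, not the crate's API.
fn check_counterparty_reserve_sketch(
	channel_value_satoshis: u64, holder_selected_channel_reserve_satoshis: u64,
	counterparty_channel_reserve_satoshis: u64, counterparty_dust_limit_satoshis: u64,
) -> Result<(), String> {
	if counterparty_channel_reserve_satoshis > channel_value_satoshis {
		return Err("channel_reserve_satoshis must not be greater than the channel value".to_owned());
	}
	if counterparty_dust_limit_satoshis > holder_selected_channel_reserve_satoshis {
		return Err("Dust limit is bigger than our channel reserve".to_owned());
	}
	// Saturating here only to keep the sketch panic-free; the real checks bound the holder
	// reserve against the channel value earlier in the handshake.
	if counterparty_channel_reserve_satoshis
		> channel_value_satoshis.saturating_sub(holder_selected_channel_reserve_satoshis)
	{
		return Err("channel_reserve_satoshis must not exceed channel value minus our reserve".to_owned());
	}
	Ok(())
}

fn main() {
	// A 100k-sat channel where we selected a 1k-sat reserve accepts a 1k-sat counterparty reserve.
	assert!(check_counterparty_reserve_sketch(100_000, 1_000, 1_000, 546).is_ok());
	// A counterparty reserve larger than the channel value is rejected.
	assert!(check_counterparty_reserve_sketch(100_000, 1_000, 200_000, 546).is_err());
}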
@@ -7726,9 +9353,11 @@ impl OutboundV1Channel where SP::Target: SignerProvider { context: self.context, #[cfg(any(dual_funding, splicing))] dual_funding_channel_context: None, + #[cfg(any(dual_funding, splicing))] + interactive_tx_signing_session: None, }; - let need_channel_ready = channel.check_get_channel_ready(0).is_some(); + let need_channel_ready = channel.check_get_channel_ready(0, logger).0.is_some(); channel.monitor_updating_paused(false, false, need_channel_ready, Vec::new(), Vec::new(), Vec::new()); Ok((channel, channel_monitor)) } @@ -7834,7 +9463,7 @@ impl InboundV1Channel where SP::Target: SignerProvider { msg.push_msat, msg.common_fields.clone(), )?, - unfunded_context: UnfundedChannelContext { unfunded_channel_age_ticks: 0 } + unfunded_context: UnfundedChannelContext::default(), }; Ok(chan) } @@ -8016,8 +9645,10 @@ impl InboundV1Channel where SP::Target: SignerProvider { context: self.context, #[cfg(any(dual_funding, splicing))] dual_funding_channel_context: None, + #[cfg(any(dual_funding, splicing))] + interactive_tx_signing_session: None, }; - let need_channel_ready = channel.check_get_channel_ready(0).is_some(); + let need_channel_ready = channel.check_get_channel_ready(0, logger).0.is_some(); channel.monitor_updating_paused(false, false, need_channel_ready, Vec::new(), Vec::new(), Vec::new()); Ok((channel, funding_signed, channel_monitor)) @@ -8029,17 +9660,42 @@ impl InboundV1Channel where SP::Target: SignerProvider { pub(super) struct OutboundV2Channel where SP::Target: SignerProvider { pub context: ChannelContext, pub unfunded_context: UnfundedChannelContext, - #[cfg(any(dual_funding, splicing))] pub dual_funding_context: DualFundingChannelContext, } +/// Calculate funding values for interactive tx for splicing, based on channel value changes +#[cfg(splicing)] +fn calculate_funding_values( + pre_channel_value: u64, + our_funding_contribution: i64, + their_funding_contribution: i64, + is_initiator: bool, +) -> Result<(u64, u64), ChannelError> { + // Initiator also adds the previous funding as input + let mut our_contribution_with_prev = our_funding_contribution; + let mut their_contribution_with_prev = their_funding_contribution; + if is_initiator { + our_contribution_with_prev = our_contribution_with_prev.saturating_add(pre_channel_value as i64); + } else { + their_contribution_with_prev = their_contribution_with_prev.saturating_add(pre_channel_value as i64); + } + if our_contribution_with_prev < 0 || their_contribution_with_prev < 0 { + return Err(ChannelError::Close(format!( + "Funding contribution cannot be negative! 
ours {} theirs {} pre {} initiator {} acceptor {}", + our_contribution_with_prev, their_contribution_with_prev, pre_channel_value, + our_funding_contribution, their_funding_contribution + ))); + } + Ok((our_contribution_with_prev.abs() as u64, their_contribution_with_prev.abs() as u64)) +} + #[cfg(any(dual_funding, splicing))] impl OutboundV2Channel where SP::Target: SignerProvider { pub fn new( fee_estimator: &LowerBoundedFeeEstimator, entropy_source: &ES, signer_provider: &SP, counterparty_node_id: PublicKey, their_features: &InitFeatures, funding_satoshis: u64, - user_id: u128, config: &UserConfig, current_chain_height: u32, outbound_scid_alias: u64, - funding_confirmation_target: ConfirmationTarget, + funding_inputs: Vec<(TxIn, TransactionU16LenLimited)>, user_id: u128, config: &UserConfig, + current_chain_height: u32, outbound_scid_alias: u64, funding_confirmation_target: ConfirmationTarget, ) -> Result, APIError> where ES::Target: EntropySource, F::Target: FeeEstimator, @@ -8054,7 +9710,11 @@ impl OutboundV2Channel where SP::Target: SignerProvider { funding_satoshis, MIN_CHAN_DUST_LIMIT_SATOSHIS); let funding_feerate_sat_per_1000_weight = fee_estimator.bounded_sat_per_1000_weight(funding_confirmation_target); - let funding_tx_locktime = current_chain_height; + let funding_tx_locktime = LockTime::from_height(current_chain_height) + .map_err(|_| APIError::APIMisuseError { + err: format!( + "Provided current chain height of {} doesn't make sense for a height-based timelock for the funding transaction", + current_chain_height) })?; let chan = Self { context: ChannelContext::new_for_outbound_channel( @@ -8075,12 +9735,13 @@ impl OutboundV2Channel where SP::Target: SignerProvider { holder_signer, pubkeys, )?, - unfunded_context: UnfundedChannelContext { unfunded_channel_age_ticks: 0 }, + unfunded_context: UnfundedChannelContext::default(), dual_funding_context: DualFundingChannelContext { our_funding_satoshis: funding_satoshis, their_funding_satoshis: 0, funding_tx_locktime, funding_feerate_sat_per_1000_weight, + our_funding_inputs: funding_inputs, } }; Ok(chan) @@ -8142,10 +9803,56 @@ impl OutboundV2Channel where SP::Target: SignerProvider { }, funding_feerate_sat_per_1000_weight: self.context.feerate_per_kw, second_per_commitment_point, - locktime: self.dual_funding_context.funding_tx_locktime, + locktime: self.dual_funding_context.funding_tx_locktime.to_consensus_u32(), require_confirmed_inputs: None, } } + + pub fn begin_interactive_funding_tx_construction(&mut self, signer_provider: &SP, + entropy_source: &ES, holder_node_id: PublicKey, + ) -> Result, APIError> + where ES::Target: EntropySource + { + self.context.begin_interactive_funding_tx_construction(&self.dual_funding_context, + signer_provider, entropy_source, holder_node_id, true /* is_initiator */) + } + + pub fn funding_tx_constructed( + mut self, counterparty_node_id: &PublicKey, mut signing_session: InteractiveTxSigningSession, logger: &L + ) -> Result<(Channel, msgs::CommitmentSigned, Option), (Self, ChannelError)> + where + L::Target: Logger + { + let (commitment_signed, funding_ready_for_sig_event) = match self.internal_funding_tx_constructed( + counterparty_node_id, &mut signing_session, logger) { + Ok(res) => res, + Err(err) => return Err((self, err)), + }; + + let channel = Channel { + context: self.context, + dual_funding_channel_context: Some(self.dual_funding_context), + interactive_tx_signing_session: Some(signing_session), + }; + + Ok((channel, commitment_signed, funding_ready_for_sig_event)) + } + + pub fn 
accept_channel_v2(&mut self, msg: &msgs::AcceptChannelV2, default_limits: &ChannelHandshakeLimits, + their_features: &InitFeatures) -> Result<(), ChannelError> { + // According to the spec we MUST fail the negotiation if `require_confirmed_inputs` is set in + // `accept_channel2` but we cannot provide confirmed inputs. We're not going to check if the user + // upheld this requirement, so we just defer the failure to the counterparty's checks during + // interactive transaction construction and remain blissfully unaware here. + + // Now we can generate the `channel_id` since we have our counterparty's `revocation_basepoint`. + self.context.channel_id = ChannelId::v2_from_revocation_basepoints( + &self.context.get_holder_pubkeys().revocation_basepoint, &RevocationBasepoint::from(msg.common_fields.revocation_basepoint)); + self.dual_funding_context.their_funding_satoshis = msg.funding_satoshis; + self.context.do_accept_channel_checks( + default_limits, their_features, &msg.common_fields, get_v2_channel_reserve_satoshis( + msg.common_fields.dust_limit_satoshis, self.context.channel_value_satoshis)) + } } // A not-yet-funded inbound (from counterparty) channel using V2 channel establishment. @@ -8163,8 +9870,9 @@ impl InboundV2Channel where SP::Target: SignerProvider { pub fn new( fee_estimator: &LowerBoundedFeeEstimator, entropy_source: &ES, signer_provider: &SP, counterparty_node_id: PublicKey, our_supported_features: &ChannelTypeFeatures, - their_features: &InitFeatures, msg: &msgs::OpenChannelV2, funding_satoshis: u64, user_id: u128, - config: &UserConfig, current_chain_height: u32, logger: &L, + their_features: &InitFeatures, msg: &msgs::OpenChannelV2, funding_satoshis: u64, + funding_inputs: Vec<(TxIn, TransactionU16LenLimited)>, user_id: u128, config: &UserConfig, + current_chain_height: u32, logger: &L, ) -> Result, ChannelError> where ES::Target: EntropySource, F::Target: FeeEstimator, @@ -8220,12 +9928,13 @@ impl InboundV2Channel where SP::Target: SignerProvider { let chan = Self { context, - unfunded_context: UnfundedChannelContext { unfunded_channel_age_ticks: 0 }, + unfunded_context: UnfundedChannelContext::default(), dual_funding_context: DualFundingChannelContext { our_funding_satoshis: funding_satoshis, their_funding_satoshis: msg.common_fields.funding_satoshis, - funding_tx_locktime: msg.locktime, + funding_tx_locktime: LockTime::from_consensus(msg.locktime), funding_feerate_sat_per_1000_weight: msg.funding_feerate_sat_per_1000_weight, + our_funding_inputs: funding_inputs, } }; @@ -8296,9 +10005,208 @@ impl InboundV2Channel where SP::Target: SignerProvider { /// inbound channel without accepting it. 
/// /// [`msgs::AcceptChannelV2`]: crate::ln::msgs::AcceptChannelV2 - #[cfg(test)] - pub fn get_accept_channel_v2_message(&self) -> msgs::AcceptChannelV2 { - self.generate_accept_channel_v2_message() + // #[cfg(all(test, any(dual_funding, splicing)))] + // pub fn get_accept_channel_v2_message(&self) -> msgs::AcceptChannelV2 { + // self.generate_accept_channel_v2_message() + // } + + pub fn begin_interactive_funding_tx_construction(&mut self, signer_provider: &SP, + entropy_source: &ES, holder_node_id: PublicKey, + ) -> Result, APIError> + where ES::Target: EntropySource + { + self.context.begin_interactive_funding_tx_construction(&self.dual_funding_context, + signer_provider, entropy_source, holder_node_id, false /* is_initiator */) + } + + pub fn funding_tx_constructed( + mut self, counterparty_node_id: &PublicKey, mut signing_session: InteractiveTxSigningSession, logger: &L + ) -> Result<(Channel, msgs::CommitmentSigned, Option), (Self, ChannelError)> + where + L::Target: Logger + { + let (commitment_signed, funding_ready_for_sig_event) = match self.internal_funding_tx_constructed( + counterparty_node_id, &mut signing_session, logger) { + Ok(res) => res, + Err(err) => return Err((self, err)), + }; + + let channel = Channel { + context: self.context, + dual_funding_channel_context: Some(self.dual_funding_context), + interactive_tx_signing_session: Some(signing_session), + }; + + Ok((channel, commitment_signed, funding_ready_for_sig_event)) + } +} + +#[cfg(any(dual_funding, splicing))] +impl V2Channel where SP::Target: SignerProvider { + pub fn is_outbound(&self) -> bool { + match self { + Self::UnfundedOutboundV2(_ch) => true, + Self::UnfundedInboundV2(_ch) => false, + } + } + + pub fn begin_interactive_funding_tx_construction(&mut self, signer_provider: &SP, + entropy_source: &ES, holder_node_id: PublicKey, + ) -> Result, APIError> + where ES::Target: EntropySource + { + match self { + Self::UnfundedOutboundV2(ref mut ch) => { + ch.context.begin_interactive_funding_tx_construction(&ch.dual_funding_context, + signer_provider, entropy_source, holder_node_id, true) + }, + Self::UnfundedInboundV2(ref mut ch) => { + ch.context.begin_interactive_funding_tx_construction(&ch.dual_funding_context, + signer_provider, entropy_source, holder_node_id, false) + }, + } + } + + pub fn funding_tx_constructed( + self, counterparty_node_id: &PublicKey, mut signing_session: InteractiveTxSigningSession, logger: &L + ) -> Result<(Channel, msgs::CommitmentSigned, Option), (Self, ChannelError)> + where + L::Target: Logger + { + match self { + Self::UnfundedOutboundV2(mut ch) => { + let (commitment_signed, funding_ready_for_sig_event) = match ch.internal_funding_tx_constructed( + counterparty_node_id, &mut signing_session, logger) { + Ok(res) => res, + Err(err) => return Err((Self::UnfundedOutboundV2(ch), err)), + }; + + let channel = Channel { + context: ch.context, + dual_funding_channel_context: Some(ch.dual_funding_context), + interactive_tx_signing_session: Some(signing_session), + }; + + Ok((channel, commitment_signed, funding_ready_for_sig_event)) + }, + Self::UnfundedInboundV2(mut ch) => { + let (commitment_signed, funding_ready_for_sig_event) = match ch.internal_funding_tx_constructed( + counterparty_node_id, &mut signing_session, logger) { + Ok(res) => res, + Err(err) => return Err((Self::UnfundedInboundV2(ch), err)), + }; + + let channel = Channel { + context: ch.context, + dual_funding_channel_context: Some(ch.dual_funding_context), + interactive_tx_signing_session: Some(signing_session), + }; + + 
Ok((channel, commitment_signed, funding_ready_for_sig_event))
+			},
+		}
+	}
+
+	/// Create a new channel for a splice, based on the pre-splice channel
+	#[cfg(splicing)]
+	pub fn new_spliced(
+		is_outbound: bool,
+		pre_splice_context: &ChannelContext,
+		signer_provider: &SP,
+		counterparty_funding_pubkey: &PublicKey,
+		our_funding_contribution: i64,
+		their_funding_contribution: i64,
+		funding_inputs: Vec<(TxIn, TransactionU16LenLimited)>,
+		funding_tx_locktime: LockTime,
+		funding_feerate_sat_per_1000_weight: u32,
+		logger: &L,
+	) -> Result where L::Target: Logger
+	{
+		let pre_channel_value = pre_splice_context.get_value_satoshis();
+		let post_channel_value = SplicingChannelValues::compute_post_value(pre_channel_value, our_funding_contribution, their_funding_contribution);
+		// Create new signer, using the new channel value.
+		// Note: channel_keys_id is not changed
+		let holder_signer = signer_provider.derive_channel_signer(post_channel_value, pre_splice_context.channel_keys_id);
+
+		/*
+		let temporary_channel_id = Some(ChannelId::temporary_v2_from_revocation_basepoint(&pubkeys.revocation_basepoint));
+
+		let holder_selected_channel_reserve_satoshis = get_v2_channel_reserve_satoshis(
+			funding_satoshis, MIN_CHAN_DUST_LIMIT_SATOSHIS);
+
+		let funding_feerate_sat_per_1000_weight = fee_estimator.bounded_sat_per_1000_weight(funding_confirmation_target);
+		let funding_tx_locktime = LockTime::from_height(current_chain_height)
+			.map_err(|_| APIError::APIMisuseError {
+				err: format!(
+					"Provided current chain height of {} doesn't make sense for a height-based timelock for the funding transaction",
+					current_chain_height) })?;
+		*/
+
+		let context = ChannelContext::new_for_splice(
+			pre_splice_context,
+			is_outbound,
+			counterparty_funding_pubkey,
+			our_funding_contribution,
+			their_funding_contribution,
+			holder_signer,
+			logger,
+		)?;
+
+		let (our_funding_satoshis, their_funding_satoshis) = calculate_funding_values(
+			pre_channel_value,
+			our_funding_contribution,
+			their_funding_contribution,
+			is_outbound,
+		)?;
+
+		let dual_funding_context = DualFundingChannelContext {
+			our_funding_satoshis,
+			their_funding_satoshis,
+			funding_tx_locktime,
+			funding_feerate_sat_per_1000_weight,
+			our_funding_inputs: funding_inputs,
+		};
+		let unfunded_context = UnfundedChannelContext::default();
+		let post_chan = if is_outbound {
+			Self::UnfundedOutboundV2(OutboundV2Channel {
+				context,
+				dual_funding_context,
+				unfunded_context,
+			})
+		} else {
+			Self::UnfundedInboundV2(InboundV2Channel {
+				context,
+				dual_funding_context,
+				unfunded_context,
+			})
+		};
+
+		Ok(post_chan)
+	}
+
+	/// #SPLICING STEP4 A
+	/// Get the `splice_ack` message that can be sent in response to splice initiation
+	/// TODO move to ChannelContext
+	#[cfg(splicing)]
+	pub fn get_splice_ack(&mut self, our_funding_contribution_satoshis: i64) -> Result {
+		if self.is_outbound() {
+			panic!("Tried to accept a splice on an outbound channel?");
+		}
+
+		// TODO checks
+
+		// TODO check
+		// self.context.channel_state = ChannelState::NegotiatingFunding(NegotiatingFundingFlags::OUR_INIT_SENT | NegotiatingFundingFlags::THEIR_INIT_SENT);
+
+		// Note: at this point keys are already updated
+		let funding_pubkey = self.context().get_holder_pubkeys().funding_pubkey;
+		// TODO: decide how to handle the channel capacity: the original value is stored in the Channel and has to be updated, so in the interim there are two values
+		Ok(msgs::SpliceAck {
+			channel_id: self.context().channel_id, // pending_splice.pre_channel_id.unwrap(), // TODO
+			funding_contribution_satoshis: our_funding_contribution_satoshis,
+			funding_pubkey,
+
require_confirmed_inputs: None, + }) } } @@ -8327,6 +10235,97 @@ fn get_initial_channel_type(config: &UserConfig, their_features: &InitFeatures) ret } +#[cfg(any(dual_funding, splicing))] +fn get_initial_counterparty_commitment_signature( + context: &mut ChannelContext, logger: &L +) -> Result +where + SP::Target: SignerProvider, + L::Target: Logger +{ + let is_splice_pending = context.is_splice_pending(); + let counterparty_commitment_transaction_number = if !is_splice_pending { + context.cur_counterparty_commitment_transaction_number + } else { + // During splicing negotiation don't advance the commitment point + context.cur_counterparty_commitment_transaction_number + 1 + }; + let counterparty_keys = context.build_remote_transaction_keys(); + let counterparty_initial_commitment_tx = context.build_commitment_transaction( + counterparty_commitment_transaction_number, &counterparty_keys, false, false, logger).tx; + match context.holder_signer { + // TODO (taproot|arik): move match into calling method for Taproot + ChannelSignerType::Ecdsa(ref ecdsa) => { + Ok(ecdsa.sign_counterparty_commitment(&counterparty_initial_commitment_tx, Vec::new(), Vec::new(), &context.secp_ctx) + .map_err(|_| ChannelError::Close("Failed to get signatures for new commitment_signed".to_owned()))?.0) + }, + // TODO (taproot|arik) + #[cfg(taproot)] + _ => todo!(), + } +} + +#[cfg(any(dual_funding, splicing))] +fn get_initial_commitment_signed( + context: &mut ChannelContext, transaction: ConstructedTransaction, is_splice: bool, logger: &L +) -> Result +where + SP::Target: SignerProvider, + L::Target: Logger +{ + if !matches!( + context.channel_state, ChannelState::NegotiatingFunding(flags) + if flags == (NegotiatingFundingFlags::OUR_INIT_SENT | NegotiatingFundingFlags::THEIR_INIT_SENT)) { + panic!("Tried to get a funding_created messsage at a time other than immediately after initial handshake completion (or tried to get funding_created twice)"); + } + if !is_splice { + if context.commitment_secrets.get_min_seen_secret() != (1 << 48) || + context.cur_counterparty_commitment_transaction_number != INITIAL_COMMITMENT_NUMBER || + context.cur_holder_commitment_transaction_number != INITIAL_COMMITMENT_NUMBER { + panic!("Should not have advanced channel commitment tx numbers prior to initial commitment_signed"); + } + } + + // Find the funding output + let funding_redeemscript = context.get_funding_redeemscript().to_v0_p2wsh(); + let funding_outputs = transaction.find_output_by_script(&funding_redeemscript, None); + let funding_outpoint_index = if funding_outputs.len() == 1 { + funding_outputs[0].0 as u16 + } else if funding_outputs.len() == 0 { + return Err(ChannelError::Close("No output matched the script_pubkey (get_initial_commitment_signed)".to_owned())); + } else { // > 1 + return Err(ChannelError::Close("Multiple outputs matched the expected script".to_owned())); + }; + let funding_txo = OutPoint { txid: transaction.txid(), index: funding_outpoint_index }; + context.channel_transaction_parameters.funding_outpoint = Some(funding_txo); + context.holder_signer.as_mut().provide_channel_parameters(&context.channel_transaction_parameters); + + let signature = match get_initial_counterparty_commitment_signature(context, logger) { + Ok(res) => res, + Err(e) => { + log_error!(logger, "Got bad signatures: {:?}!", e); + context.channel_transaction_parameters.funding_outpoint = None; + return Err(e); + } + }; + + if context.signer_pending_funding { + log_trace!(logger, "Counterparty commitment signature ready for 
funding_created message: clearing signer_pending_funding"); + context.signer_pending_funding = false; + } + + log_info!(logger, "Generated commitment_signed for peer for channel {}", &context.channel_id()); + + Ok(msgs::CommitmentSigned { + channel_id: context.channel_id.clone(), + htlc_signatures: vec![], + signature, + batch: None, + #[cfg(taproot)] + partial_signature_with_nonce: None, + }) +} + const SERIALIZATION_VERSION: u8 = 4; const MIN_SERIALIZATION_VERSION: u8 = 3; @@ -8655,6 +10654,9 @@ impl Writeable for Channel where SP::Target: SignerProvider { self.context.channel_transaction_parameters.write(writer)?; self.context.funding_transaction.write(writer)?; + // TODO check BW compatibility; we may go with being non saved; if it is missing, splicing will not be possible + #[cfg(splicing)] + self.context.funding_transaction_saved.write(writer)?; self.context.counterparty_cur_commitment_point.write(writer)?; self.context.counterparty_prev_commitment_point.write(writer)?; @@ -8747,6 +10749,7 @@ impl Writeable for Channel where SP::Target: SignerProvider { (43, malformed_htlcs, optional_vec), // Added in 0.0.119 // 45 and 47 are reserved for async signing (49, self.context.local_initiated_shutdown, option), // Added in 0.0.122 + (51, self.context.next_funding_txid, option), // Added in 0.0.124 }); Ok(()) @@ -8986,6 +10989,8 @@ impl<'a, 'b, 'c, ES: Deref, SP: Deref> ReadableArgs<(&'a ES, &'b SP, u32, &'c Ch let mut channel_parameters: ChannelTransactionParameters = Readable::read(reader)?; let funding_transaction: Option = Readable::read(reader)?; + #[cfg(splicing)] + let funding_transaction_saved: Option = Readable::read(reader)?; let counterparty_cur_commitment_point = Readable::read(reader)?; @@ -9283,6 +11288,8 @@ impl<'a, 'b, 'c, ES: Deref, SP: Deref> ReadableArgs<(&'a ES, &'b SP, u32, &'c Ch channel_transaction_parameters: channel_parameters, funding_transaction, + #[cfg(splicing)] + funding_transaction_saved, is_batch_funding, counterparty_cur_commitment_point, @@ -9322,9 +11329,23 @@ impl<'a, 'b, 'c, ES: Deref, SP: Deref> ReadableArgs<(&'a ES, &'b SP, u32, &'c Ch local_initiated_shutdown, blocked_monitor_updates: blocked_monitor_updates.unwrap(), + + #[cfg(any(dual_funding, splicing))] + interactive_tx_constructor: None, + // If we've sent `commitment_signed` for an interactive transaction construction, + // but have not received `tx_signatures` we MUST set `next_funding_txid` to the + // txid of that interactive transaction, else we MUST NOT set it. + next_funding_txid: None, + + #[cfg(splicing)] + pending_splice_pre: None, + #[cfg(splicing)] + pending_splice_post: None, }, #[cfg(any(dual_funding, splicing))] dual_funding_channel_context: None, + #[cfg(any(dual_funding, splicing))] + interactive_tx_signing_session: None, }) } } @@ -11097,6 +13118,6 @@ mod tests { // Clear the ChannelState::WaitingForBatch only when called by ChannelManager. 
node_a_chan.set_batch_ready(); assert_eq!(node_a_chan.context.channel_state, ChannelState::AwaitingChannelReady(AwaitingChannelReadyFlags::THEIR_CHANNEL_READY)); - assert!(node_a_chan.check_get_channel_ready(0).is_some()); + assert!(node_a_chan.check_get_channel_ready(0, &&logger).0.is_some()); } } diff --git a/lightning/src/ln/channel_keys.rs b/lightning/src/ln/channel_keys.rs index 9e839b15e3c..c4d0a10ca42 100644 --- a/lightning/src/ln/channel_keys.rs +++ b/lightning/src/ln/channel_keys.rs @@ -192,7 +192,7 @@ pub fn add_public_key_tweak( /// Master key used in conjunction with per_commitment_point to generate [htlcpubkey](https://github.com/lightning/bolts/blob/master/03-transactions.md#key-derivation) for the latest state of a channel. /// A watcher can be given a [RevocationBasepoint] to generate per commitment [RevocationKey] to create justice transactions. -#[derive(PartialEq, Eq, Clone, Copy, Debug, Hash)] +#[derive(PartialEq, PartialOrd, Eq, Clone, Copy, Debug, Hash)] pub struct RevocationBasepoint(pub PublicKey); basepoint_impl!(RevocationBasepoint); key_read_write!(RevocationBasepoint); diff --git a/lightning/src/ln/channel_splice.rs b/lightning/src/ln/channel_splice.rs new file mode 100644 index 00000000000..77aab865f24 --- /dev/null +++ b/lightning/src/ln/channel_splice.rs @@ -0,0 +1,212 @@ +// This file is Copyright its original authors, visible in version control +// history. +// +// This file is licensed under the Apache License, Version 2.0 or the MIT license +// , at your option. +// You may not use this file except in accordance with one or both of these +// licenses. + +// Splicing related utilities + +use crate::chain::transaction::OutPoint; +use crate::ln::channel::ChannelError; +use crate::prelude::*; +use crate::util::ser::TransactionU16LenLimited; +use bitcoin::{ScriptBuf, Sequence, Transaction, TxIn, Witness}; + +/// Holds the pre-splice channel value, the contributions of the peers, and can compute the post-splice channel value. 
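// Editor's note: a self-contained sketch of the post-splice value arithmetic implemented by
// `SplicingChannelValues` below: signed peer contributions are applied to the unsigned
// pre-splice channel value with saturating arithmetic. The function names here are
// illustrative, not the crate's API; the figures match the unit tests at the end of this file.
fn post_splice_value_sketch(pre_channel_value: u64, our_contribution: i64, their_contribution: i64) -> u64 {
	fn add_checked(base: u64, delta: i64) -> u64 {
		if delta >= 0 { base.saturating_add(delta as u64) } else { base.saturating_sub(delta.unsigned_abs()) }
	}
	add_checked(add_checked(pre_channel_value, our_contribution), their_contribution)
}

fn main() {
	// Splice-in: both sides add funds to a 9k-sat channel.
	assert_eq!(post_splice_value_sketch(9_000, 4_000, 2_000), 15_000);
	// Splice-out: we withdraw 6k sat from a 15k-sat channel.
	assert_eq!(post_splice_value_sketch(15_000, -6_000, 0), 9_000);
}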
+#[derive(Clone)] +pub(crate) struct SplicingChannelValues { + /// The pre splice value + pub pre_channel_value: u64, + pub our_funding_contribution: i64, + pub their_funding_contribution: i64, +} + +impl SplicingChannelValues { + fn add_checked(base: u64, delta: i64) -> u64 { + if delta >= 0 { + base.saturating_add(delta as u64) + } else { + base.saturating_sub(delta.abs() as u64) + } + } + + /// Compute the post-splice channel value from the pre-splice values and the peer contributions + pub fn compute_post_value(pre_channel_value: u64, our_funding_contribution: i64, their_funding_contribution: i64) -> u64 { + Self::add_checked(Self::add_checked(pre_channel_value, our_funding_contribution), their_funding_contribution) + } + + /// The post-splice channel value, computed from the pre-splice values and the peer contributions + pub fn post_channel_value(&self) -> u64 { + Self::add_checked(self.pre_channel_value, self.delta_channel_value()) + } + + /// The computed change in the channel value + pub fn delta_channel_value(&self) -> i64 { + self.our_funding_contribution.saturating_add(self.their_funding_contribution) + } +} + +/// Info about a pending splice, used in the pre-splice channel +#[derive(Clone)] +pub(crate) struct PendingSpliceInfoPre { + /// Previous and next channel values + values: SplicingChannelValues, + // /// Reference to the post-splice channel (may be missing if channel_id is not yet known or the same) + // pub post_channel_id: Option, + pub funding_feerate_perkw: u32, + pub locktime: u32, + /// The funding inputs we will be contributing to the splice. + pub our_funding_inputs: Vec<(TxIn, TransactionU16LenLimited)>, +} + +impl PendingSpliceInfoPre { + pub(crate) fn new(pre_channel_value: u64, our_funding_contribution: i64, their_funding_contribution: i64, + funding_feerate_perkw: u32, locktime: u32, + our_funding_inputs: Vec<(TxIn, TransactionU16LenLimited)>, + ) -> Self { + Self { + values: SplicingChannelValues { pre_channel_value, our_funding_contribution, their_funding_contribution }, + funding_feerate_perkw, locktime, our_funding_inputs, + } + } + + /// Accessor + pub(crate) fn our_funding_contribution(&self) -> i64 { self.values.our_funding_contribution } +} + +/// Info about a pending splice, used in the post-splice channel +#[derive(Clone)] +pub(crate) struct PendingSpliceInfoPost { + /// Previous and next channel values + values: SplicingChannelValues, + // /// Reference to the pre-splice channel (may be missing if channel_id was the same) + // pub pre_channel_id: Option, + + /// Save here the previous funding transaction + pub pre_funding_transaction: Option, + /// Save here the previous funding TXO + pub pre_funding_txo: Option, +} + +impl PendingSpliceInfoPost { + pub(crate) fn new( + pre_channel_value: u64, our_funding_contribution: i64, their_funding_contribution: i64, + pre_funding_transaction: Option, pre_funding_txo: Option, + ) -> Self { + Self { + values: SplicingChannelValues { pre_channel_value, our_funding_contribution, their_funding_contribution }, + pre_funding_transaction, pre_funding_txo, + } + } + + /// Accessor + pub(crate) fn pre_channel_value(&self) -> u64 { self.values.pre_channel_value } + + /// The post-splice channel value, computed from the pre-splice values and the peer contributions + pub(crate) fn post_channel_value(&self) -> u64 { self.values.post_channel_value() } + + /// Get a transaction input that is the previous funding transaction + pub(super) fn get_input_of_previous_funding(&self) -> Result<(TxIn, TransactionU16LenLimited), 
ChannelError> { + if let Some(pre_funding_transaction) = &self.pre_funding_transaction { + if let Some(pre_funding_txo) = &self.pre_funding_txo { + Ok(( + TxIn { + previous_output: pre_funding_txo.into_bitcoin_outpoint(), + script_sig: ScriptBuf::new(), + sequence: Sequence::ZERO, + witness: Witness::new(), + }, + TransactionU16LenLimited(pre_funding_transaction.clone()), + )) + } else { + Err(ChannelError::Warn("Internal error: Missing previous funding transaction outpoint".to_string())) + } + } else { + Err(ChannelError::Warn("Internal error: Missing previous funding transaction".to_string())) + } + } + + /// Within the given transaction, find the input that corresponds to the previous funding transaction + pub(super) fn find_input_of_previous_funding(&self, tx: &Transaction) -> Result { + if let Some(pre_funding_txo) = &self.pre_funding_txo { + for idx in 0..tx.input.len() { + if tx.input[idx].previous_output == pre_funding_txo.into_bitcoin_outpoint() { + return Ok(idx as u16); + } + } + // Not found + Err(ChannelError::Warn("Internal error: Previous funding transaction not found in the inputs of the new funding transaction".to_string())) + } else { + Err(ChannelError::Warn("Internal error: Missing previous funding transaction outpoint".to_string())) + } + } +} + + +#[cfg(test)] +mod tests { + use crate::ln::channel_splice::PendingSpliceInfoPost; + + fn create_pending_splice_info(pre_channel_value: u64, our_funding_contribution: i64, their_funding_contribution: i64) -> PendingSpliceInfoPost { + PendingSpliceInfoPost::new(pre_channel_value, our_funding_contribution, their_funding_contribution, None, None) + } + + #[test] + fn test_pending_splice_info_new() { + { + // increase, small amounts + let ps = create_pending_splice_info(9_000, 6_000, 0); + assert_eq!(ps.pre_channel_value(), 9_000); + assert_eq!(ps.post_channel_value(), 15_000); + } + { + // increase, small amounts + let ps = create_pending_splice_info(9_000, 4_000, 2_000); + assert_eq!(ps.pre_channel_value(), 9_000); + assert_eq!(ps.post_channel_value(), 15_000); + } + { + // increase, small amounts + let ps = create_pending_splice_info(9_000, 0, 6_000); + assert_eq!(ps.pre_channel_value(), 9_000); + assert_eq!(ps.post_channel_value(), 15_000); + } + { + // decrease, small amounts + let ps = create_pending_splice_info(15_000, -6_000, 0); + assert_eq!(ps.pre_channel_value(), 15_000); + assert_eq!(ps.post_channel_value(), 9_000); + } + { + // decrease, small amounts + let ps = create_pending_splice_info(15_000, -4_000, -2_000); + assert_eq!(ps.pre_channel_value(), 15_000); + assert_eq!(ps.post_channel_value(), 9_000); + } + { + // increase and decrease + let ps = create_pending_splice_info(15_000, 4_000, -2_000); + assert_eq!(ps.pre_channel_value(), 15_000); + assert_eq!(ps.post_channel_value(), 17_000); + } + let base2: u64 = 2; + let huge63i3 = (base2.pow(63) - 3) as i64; + assert_eq!(huge63i3, 9223372036854775805); + assert_eq!(-huge63i3, -9223372036854775805); + { + // increase, large amount + let ps = create_pending_splice_info(9_000, huge63i3, 3); + assert_eq!(ps.pre_channel_value(), 9_000); + assert_eq!(ps.post_channel_value(), 9223372036854784807); + } + { + // increase, large amounts + let ps = create_pending_splice_info(9_000, huge63i3, huge63i3); + assert_eq!(ps.pre_channel_value(), 9_000); + assert_eq!(ps.post_channel_value(), 9223372036854784807); + } + } +} diff --git a/lightning/src/ln/channelmanager.rs b/lightning/src/ln/channelmanager.rs index 61f1f18166e..15a7731859a 100644 --- a/lightning/src/ln/channelmanager.rs 
+++ b/lightning/src/ln/channelmanager.rs @@ -21,6 +21,8 @@ use bitcoin::blockdata::block::Header; use bitcoin::blockdata::transaction::Transaction; use bitcoin::blockdata::constants::ChainHash; use bitcoin::key::constants::SECRET_KEY_SIZE; +#[cfg(splicing)] +use bitcoin::locktime::absolute::LockTime; use bitcoin::network::constants::Network; use bitcoin::hashes::Hash; @@ -30,6 +32,7 @@ use bitcoin::hash_types::{BlockHash, Txid}; use bitcoin::secp256k1::{SecretKey,PublicKey}; use bitcoin::secp256k1::Secp256k1; use bitcoin::{secp256k1, Sequence}; +use bitcoin::TxIn; use crate::blinded_path::{BlindedPath, NodeIdLookUp}; use crate::blinded_path::payment::{Bolt12OfferContext, Bolt12RefundContext, PaymentConstraints, PaymentContext, ReceiveTlvs}; @@ -44,17 +47,25 @@ use crate::events::{Event, EventHandler, EventsProvider, MessageSendEvent, Messa // construct one themselves. use crate::ln::inbound_payment; use crate::ln::types::{ChannelId, PaymentHash, PaymentPreimage, PaymentSecret}; -use crate::ln::channel::{self, Channel, ChannelPhase, ChannelContext, ChannelError, ChannelUpdateStatus, ShutdownResult, UnfundedChannelContext, UpdateFulfillCommitFetch, OutboundV1Channel, InboundV1Channel, WithChannelContext}; +use crate::ln::channel::{Channel, ChannelPhase, ChannelContext, ChannelError, ChannelUpdateStatus, ShutdownResult, UnfundedChannelContext, UpdateFulfillCommitFetch, OutboundV1Channel, InboundV1Channel, WithChannelContext}; pub use crate::ln::channel::{InboundHTLCDetails, InboundHTLCStateDetails, OutboundHTLCDetails, OutboundHTLCStateDetails}; +#[cfg(any(dual_funding, splicing))] +use crate::ln::channel::{ChannelVariants, HasChannelContext, InboundV2Channel, OutboundV2Channel, InteractivelyFunded as _, V2Channel}; +#[cfg(splicing)] +use crate::ln::channel_splice::{PendingSpliceInfoPre, SplicingChannelValues}; use crate::ln::features::{Bolt12InvoiceFeatures, ChannelFeatures, ChannelTypeFeatures, InitFeatures, NodeFeatures}; #[cfg(any(feature = "_test_utils", test))] use crate::ln::features::Bolt11InvoiceFeatures; +#[cfg(any(dual_funding, splicing))] +use crate::ln::interactivetxs::InteractiveTxMessageSend; use crate::routing::router::{BlindedTail, InFlightHtlcs, Path, Payee, PaymentParameters, Route, RouteParameters, Router}; use crate::ln::onion_payment::{check_incoming_htlc_cltv, create_recv_pending_htlc_info, create_fwd_pending_htlc_info, decode_incoming_update_add_htlc_onion, InboundHTLCErr, NextPacketDetails}; use crate::ln::msgs; use crate::ln::onion_utils; use crate::ln::onion_utils::{HTLCFailReason, INVALID_ONION_BLINDING}; use crate::ln::msgs::{ChannelMessageHandler, DecodeError, LightningError}; +#[cfg(any(dual_funding, splicing))] +use crate::ln::msgs::CommitmentUpdate; #[cfg(test)] use crate::ln::outbound_payment; use crate::ln::outbound_payment::{Bolt12PaymentError, OutboundPayments, PaymentAttempts, PendingOutboundPayment, SendAlongPathArgs, StaleExpiration}; @@ -74,6 +85,7 @@ use crate::util::wakers::{Future, Notifier}; use crate::util::scid_utils::fake_scid; use crate::util::string::UntrustedString; use crate::util::ser::{BigSize, FixedLengthReader, Readable, ReadableArgs, MaybeReadable, Writeable, Writer, VecWriter}; +use crate::util::ser::TransactionU16LenLimited; use crate::util::logger::{Level, Logger, WithContext}; use crate::util::errors::APIError; #[cfg(not(c_bindings))] @@ -106,6 +118,8 @@ use core::ops::Deref; pub use crate::ln::outbound_payment::{PaymentSendFailure, ProbeSendFailure, Retry, RetryableSendFailure, RecipientOnionFields}; use 
crate::ln::script::ShutdownScript; +use std::str; + // We hold various information about HTLC relay in the HTLC objects in Channel itself: // // Upon receipt of an HTLC from a peer, we'll give it a PendingHTLCStatus indicating if it should @@ -927,6 +941,15 @@ impl PeerState where SP::Target: SignerProvider { ChannelPhase::UnfundedOutboundV2(_) => true, #[cfg(any(dual_funding, splicing))] ChannelPhase::UnfundedInboundV2(_) => false, + #[cfg(splicing)] + ChannelPhase::RefundingV2((_pre_chan, post_chans)) => { + if let Some(pending) = post_chans.get_pending() { + pending.is_outbound() + } else { + // funded + true + } + }, } ) && self.monitor_update_blocked_actions.is_empty() @@ -945,11 +968,18 @@ impl PeerState where SP::Target: SignerProvider { } } +#[derive(Clone)] +pub(super) enum OpenChannelMessage { + V1(msgs::OpenChannel), + #[cfg(any(dual_funding, splicing))] + V2(msgs::OpenChannelV2), +} + /// A not-yet-accepted inbound (from counterparty) channel. Once /// accepted, the parameters will be used to construct a channel. pub(super) struct InboundChannelRequest { /// The original OpenChannel message. - pub open_channel_msg: msgs::OpenChannel, + pub open_channel_msg: OpenChannelMessage, /// The number of ticks remaining before the request expires. pub ticks_remaining: i32, } @@ -2796,6 +2826,22 @@ macro_rules! convert_chan_phase_err { ChannelPhase::UnfundedInboundV2(channel) => { convert_chan_phase_err!($self, $err, channel, $channel_id, UNFUNDED_CHANNEL) }, + #[cfg(splicing)] + ChannelPhase::RefundingV2((_, channels)) => { + if let Some(funded_channel) = channels.get_funded_channel_mut() { + convert_chan_phase_err!($self, $err, funded_channel, $channel_id, FUNDED_CHANNEL) + } else { + match channels.get_pending_mut() { + Some(V2Channel::UnfundedOutboundV2(ch)) => { + convert_chan_phase_err!($self, $err, ch, $channel_id, UNFUNDED_CHANNEL) + }, + Some(V2Channel::UnfundedInboundV2(ch)) => { + convert_chan_phase_err!($self, $err, ch, $channel_id, UNFUNDED_CHANNEL) + }, + None => panic!("No channel"), + } + } + }, } }; } @@ -2862,8 +2908,31 @@ macro_rules! send_channel_ready { }} } +/// Macro to send out `splice_locked` message, similar to `send_channel_ready`. +#[cfg(splicing)] +macro_rules! send_splice_locked { + ($self: ident, $pending_msg_events: expr, $channel: expr, $splice_locked_msg: expr) => {{ + $pending_msg_events.push(events::MessageSendEvent::SendSpliceLocked { + node_id: $channel.context.get_counterparty_node_id(), + msg: $splice_locked_msg, + }); + // Note that we may send a `splice_locked` multiple times for a channel if we reconnect, so + // we allow collisions, but we shouldn't ever be updating the channel ID pointed to. 
+ let mut short_to_chan_info = $self.short_to_chan_info.write().unwrap(); + let outbound_alias_insert = short_to_chan_info.insert($channel.context.outbound_scid_alias(), ($channel.context.get_counterparty_node_id(), $channel.context.channel_id())); + assert!(outbound_alias_insert.is_none() || outbound_alias_insert.unwrap() == ($channel.context.get_counterparty_node_id(), $channel.context.channel_id()), + "SCIDs should never collide - ensure you weren't behind the chain tip by a full month when creating channels"); + if let Some(real_scid) = $channel.context.get_short_channel_id() { + let scid_insert = short_to_chan_info.insert(real_scid, ($channel.context.get_counterparty_node_id(), $channel.context.channel_id())); + assert!(scid_insert.is_none() || scid_insert.unwrap() == ($channel.context.get_counterparty_node_id(), $channel.context.channel_id()), + "SCIDs should never collide - ensure you weren't behind the chain tip by a full month when creating channels"); + } + }} +} + macro_rules! emit_channel_pending_event { ($locked_events: expr, $channel: expr) => { + let is_splice = $channel.context.is_splice_pending(); if $channel.context.should_emit_channel_pending_event() { $locked_events.push_back((events::Event::ChannelPending { channel_id: $channel.context.channel_id(), @@ -2872,14 +2941,16 @@ macro_rules! emit_channel_pending_event { user_channel_id: $channel.context.get_user_id(), funding_txo: $channel.context.get_funding_txo().unwrap().into_bitcoin_outpoint(), channel_type: Some($channel.context.get_channel_type().clone()), + is_splice, }, None)); $channel.context.set_channel_pending_event_emitted(); } } } -macro_rules! emit_channel_ready_event { - ($locked_events: expr, $channel: expr) => { +/// The `is_splice` flag is passed explicitly. +macro_rules! emit_channel_ready_event_with_splice { + ($locked_events: expr, $channel: expr, $is_splice: expr) => { if $channel.context.should_emit_channel_ready_event() { debug_assert!($channel.context.channel_pending_event_emitted()); $locked_events.push_back((events::Event::ChannelReady { @@ -2887,12 +2958,20 @@ macro_rules! emit_channel_ready_event { user_channel_id: $channel.context.get_user_id(), counterparty_node_id: $channel.context.get_counterparty_node_id(), channel_type: $channel.context.get_channel_type().clone(), + is_splice: $is_splice, }, None)); $channel.context.set_channel_ready_event_emitted(); } } } +macro_rules! emit_channel_ready_event { + ($locked_events: expr, $channel: expr) => { + let is_splice = $channel.context.is_splice_pending(); + emit_channel_ready_event_with_splice!($locked_events, $channel, is_splice); + } +} + macro_rules! handle_monitor_update_completion { ($self: ident, $peer_state_lock: expr, $peer_state: expr, $per_peer_state_lock: expr, $chan: expr) => { { let logger = WithChannelContext::from(&$self.logger, &$chan.context); @@ -3254,8 +3333,61 @@ where /// [`Event::FundingGenerationReady::temporary_channel_id`]: events::Event::FundingGenerationReady::temporary_channel_id /// [`Event::ChannelClosed::channel_id`]: events::Event::ChannelClosed::channel_id pub fn create_channel(&self, their_network_key: PublicKey, channel_value_satoshis: u64, push_msat: u64, user_channel_id: u128, temporary_channel_id: Option, override_config: Option) -> Result { - if channel_value_satoshis < 1000 { - return Err(APIError::APIMisuseError { err: format!("Channel value must be at least 1000 satoshis. 
It was {}", channel_value_satoshis) }); + self.create_channel_internal(false, their_network_key, channel_value_satoshis, vec![], None, + push_msat, user_channel_id, temporary_channel_id, override_config) + } + + /// Creates a new outbound dual-funded channel to the given remote node and with the given value + /// contributed by us. + /// + /// `user_channel_id` will be provided back as in + /// [`Event::FundingGenerationReady::user_channel_id`] to allow tracking of which events + /// correspond with which `create_channel` call. Note that the `user_channel_id` defaults to a + /// randomized value for inbound channels. `user_channel_id` has no meaning inside of LDK, it + /// is simply copied to events and otherwise ignored. + /// + /// `funding_satoshis` is the amount we are contributing to the channel. + /// Raises [`APIError::APIMisuseError`] when `funding_satoshis` > 2**24. + /// + /// The `funding_conf_target` parameter sets the priority of the funding transaction for appropriate + /// fee estimation. If `None`, then [`ConfirmationTarget::Normal`] is used. + /// + /// Raises [`APIError::ChannelUnavailable`] if the channel cannot be opened due to failing to + /// generate a shutdown scriptpubkey or destination script set by + /// [`SignerProvider::get_shutdown_scriptpubkey`] or [`SignerProvider::get_destination_script`]. + /// + /// Note that we do not check if you are currently connected to the given peer. If no + /// connection is available, the outbound `open_channel` message may fail to send, resulting in + /// the channel eventually being silently forgotten (dropped on reload). + /// + /// Returns the new Channel's temporary `channel_id`. This ID will appear as + /// [`Event::FundingGenerationReady::temporary_channel_id`] and in + /// [`ChannelDetails::channel_id`] until after + /// [`ChannelManager::funding_transaction_generated`] is called, swapping the Channel's ID for + /// one derived from the funding transaction's TXID. If the counterparty rejects the channel + /// immediately, this temporary ID will appear in [`Event::ChannelClosed::channel_id`]. + /// + /// [`Event::FundingGenerationReady::user_channel_id`]: events::Event::FundingGenerationReady::user_channel_id + /// [`Event::FundingGenerationReady::temporary_channel_id`]: events::Event::FundingGenerationReady::temporary_channel_id + /// [`Event::ChannelClosed::channel_id`]: events::Event::ChannelClosed::channel_id + /// [`ConfirmationTarget::Normal`]: chain::chaininterface::ConfirmationTarget + #[cfg(any(dual_funding, splicing))] + pub fn create_dual_funded_channel(&self, their_network_key: PublicKey, funding_satoshis: u64, + funding_inputs: Vec<(TxIn, Transaction)>, funding_conf_target: Option, user_channel_id: u128, + override_config: Option) -> Result + { + let funding_inputs = Self::length_limit_holder_input_prev_txs(funding_inputs)?; + self.create_channel_internal(true, their_network_key, funding_satoshis, funding_inputs, + funding_conf_target, 0, user_channel_id, None, override_config) + } + + // TODO(dual_funding): Remove param _-prefix once #[cfg(dual_funding)] is dropped. + fn create_channel_internal(&self, _is_v2: bool, their_network_key: PublicKey, funding_satoshis: u64, + _funding_inputs: Vec<(TxIn,TransactionU16LenLimited)>, _funding_conf_target: Option, + push_msat: u64, user_channel_id: u128, temporary_channel_id: Option, override_config: Option, + ) -> Result { + if funding_satoshis < 1000 { + return Err(APIError::APIMisuseError { err: format!("Channel value must be at least 1000 satoshis. 
It was {}", funding_satoshis) }); } let _persistence_guard = PersistenceNotifierGuard::notify_on_drop(self); @@ -3275,24 +3407,76 @@ where } } - let channel = { - let outbound_scid_alias = self.create_and_insert_outbound_scid_alias(); - let their_features = &peer_state.latest_features; - let config = if override_config.is_some() { override_config.as_ref().unwrap() } else { &self.default_configuration }; - match OutboundV1Channel::new(&self.fee_estimator, &self.entropy_source, &self.signer_provider, their_network_key, - their_features, channel_value_satoshis, push_msat, user_channel_id, config, - self.best_block.read().unwrap().height, outbound_scid_alias, temporary_channel_id) - { - Ok(res) => res, - Err(e) => { - self.outbound_scid_aliases.lock().unwrap().remove(&outbound_scid_alias); - return Err(e); - }, - } + let outbound_scid_alias = self.create_and_insert_outbound_scid_alias(); + let their_features = &peer_state.latest_features; + let config = if override_config.is_some() { override_config.as_ref().unwrap() } else { &self.default_configuration }; + + // TODO(dual_funding): Merge this with below when cfg is removed. + #[cfg(not(any(dual_funding, splicing)))] + let (channel_phase, msg_send_event) = { + let channel = { + match OutboundV1Channel::new(&self.fee_estimator, &self.entropy_source, &self.signer_provider, their_network_key, + their_features, funding_satoshis, push_msat, user_channel_id, config, + self.best_block.read().unwrap().height, outbound_scid_alias, temporary_channel_id) + { + Ok(res) => res, + Err(e) => { + self.outbound_scid_aliases.lock().unwrap().remove(&outbound_scid_alias); + return Err(e); + }, + } + }; + let res = channel.get_open_channel(self.chain_hash); + let event = events::MessageSendEvent::SendOpenChannel { + node_id: their_network_key, + msg: res, + }; + (ChannelPhase::UnfundedOutboundV1(channel), event) + }; + + #[cfg(any(dual_funding, splicing))] + let (channel_phase, msg_send_event) = if _is_v2 { + let channel = { + match OutboundV2Channel::new(&self.fee_estimator, &self.entropy_source, &self.signer_provider, their_network_key, + their_features, funding_satoshis, _funding_inputs, user_channel_id, config, + self.best_block.read().unwrap().height, outbound_scid_alias, + _funding_conf_target.unwrap_or(ConfirmationTarget::NonAnchorChannelFee)) + { + Ok(res) => res, + Err(e) => { + self.outbound_scid_aliases.lock().unwrap().remove(&outbound_scid_alias); + return Err(e); + }, + } + }; + let res = channel.get_open_channel_v2(self.chain_hash); + let event = events::MessageSendEvent::SendOpenChannelV2 { + node_id: their_network_key, + msg: res, + }; + (ChannelPhase::UnfundedOutboundV2(channel), event) + } else { + let channel = { + match OutboundV1Channel::new(&self.fee_estimator, &self.entropy_source, &self.signer_provider, their_network_key, + their_features, funding_satoshis, push_msat, user_channel_id, config, + self.best_block.read().unwrap().height, outbound_scid_alias, temporary_channel_id) + { + Ok(res) => res, + Err(e) => { + self.outbound_scid_aliases.lock().unwrap().remove(&outbound_scid_alias); + return Err(e); + }, + } + }; + let res = channel.get_open_channel(self.chain_hash); + let event = events::MessageSendEvent::SendOpenChannel { + node_id: their_network_key, + msg: res, + }; + (ChannelPhase::UnfundedOutboundV1(channel), event) }; - let res = channel.get_open_channel(self.chain_hash); - let temporary_channel_id = channel.context.channel_id(); + let temporary_channel_id = channel_phase.context().channel_id(); match 
peer_state.channel_by_id.entry(temporary_channel_id) { hash_map::Entry::Occupied(_) => { if cfg!(fuzzing) { @@ -3301,16 +3485,101 @@ where panic!("RNG is bad???"); } }, - hash_map::Entry::Vacant(entry) => { entry.insert(ChannelPhase::UnfundedOutboundV1(channel)); } + hash_map::Entry::Vacant(entry) => { entry.insert(channel_phase); } } - peer_state.pending_msg_events.push(events::MessageSendEvent::SendOpenChannel { - node_id: their_network_key, - msg: res, - }); + peer_state.pending_msg_events.push(msg_send_event); Ok(temporary_channel_id) } + /// #SPLICING STEP1 I + /// Inspired by create_channel() and close_channel() + /// Initiate a splice, to change the channel capacity + /// TODO update docu flow + /// TODO funding_feerate_perkw + /// TODO locktime + /// + /// Doc on message flow: + /// splice_channel(): + /// --- splice ------------------> signals intent for splice, with change amount + /// <-------------- splice_ack --- accepts proposed splice + /// splice_funding_generated(): + /// --- splice_created ----------> sends new funding transaction parameters. + /// In future, this should be a set of tx_add_input, tx_add_output, etc. + /// <------------- tx_complete --- + /// --- splice_comm_sign --------> send commitment signature + /// <-------- splice_comm_ack --- send commitment signature + /// --- splice_signed -----------> send signature on funding tx. In future this should be tx_signatures + /// <------- splice_signed_ack --- send signature on funding tx. In future this should be tx_signatures + /// [new funding tx can be broadcast] + #[cfg(splicing)] + pub fn splice_channel( + &self, channel_id: &ChannelId, their_network_key: &PublicKey, our_funding_contribution_satoshis: i64, + funding_inputs: Vec<(TxIn, Transaction)>, funding_feerate_perkw: u32, locktime: u32 + ) -> Result<(), APIError> { + let funding_inputs = Self::length_limit_holder_input_prev_txs(funding_inputs)?; + + let _persistence_guard = PersistenceNotifierGuard::notify_on_drop(self); + // We want to make sure the lock is actually acquired by PersistenceNotifierGuard. + debug_assert!(&self.total_consistency_lock.try_write().is_err()); + + let per_peer_state = self.per_peer_state.read().unwrap(); + let peer_state_mutex = per_peer_state.get(their_network_key) + .ok_or_else(|| APIError::APIMisuseError{ err: format!("Not connected to node: {}", their_network_key) })?; + + let mut peer_state_lock = peer_state_mutex.lock().unwrap(); + let peer_state = &mut *peer_state_lock; + // Look for channel + match peer_state.channel_by_id.entry(channel_id.clone()) { + hash_map::Entry::Vacant(_) => return Err(APIError::ChannelUnavailable{err: format!("Channel with id {} not found for the passed counterparty node_id {}", channel_id, their_network_key) }), + hash_map::Entry::Occupied(mut chan_phase_entry) => { + if let ChannelPhase::Funded(chan) = chan_phase_entry.get_mut() { + let pre_channel_value = chan.context.get_value_satoshis(); + // TODO check for i64 overflow + if our_funding_contribution_satoshis < 0 && -our_funding_contribution_satoshis > (pre_channel_value as i64) { + return Err(APIError::APIMisuseError { err: format!("Post-splicing channel value cannot be negative. 
It was {} - {}", pre_channel_value, -our_funding_contribution_satoshis) }); + } + + if our_funding_contribution_satoshis < 0 { + return Err(APIError::APIMisuseError { err: format!("TODO: Splice-out not supported, only splice in, contribution {}, channel_id {}", -our_funding_contribution_satoshis, channel_id) }); + } + + // Note: post-splice channel value is not yet known at this point, acceptor contribution is not known + // (Cannot test for miminum required post-splice channel value) + + if chan.context.pending_splice_pre.is_some() { + return Err(APIError::ChannelUnavailable { err: format!("Channel has already a splice pending, channel id {}", channel_id) }); + } + + let their_funding_contribution = 0i64; // not yet known + chan.context.pending_splice_pre = Some(PendingSpliceInfoPre::new( + pre_channel_value, our_funding_contribution_satoshis, their_funding_contribution, + funding_feerate_perkw, locktime, funding_inputs + )); + + // Check channel id + let post_splice_v2_channel_id = chan.context.generate_v2_channel_id_from_revocation_basepoints(); + if post_splice_v2_channel_id != chan.context.channel_id() { + return Err(APIError::APIMisuseError { err: format!("Channel ID would change during splicing (e.g. splice on V1 channel), not yet supported, channel id {} {}", + chan.context.channel_id(), post_splice_v2_channel_id) }); + } + + let msg = chan.get_splice_init(our_funding_contribution_satoshis, &self.signer_provider, funding_feerate_perkw, locktime); + + peer_state.pending_msg_events.push(events::MessageSendEvent::SendSpliceInit { + node_id: *their_network_key, + msg, + }); + + Ok(()) + } else { + return Err(APIError::ChannelUnavailable { err: format!("Channel with id {} is not funded", channel_id) }); + } + + }, + } + } + fn list_funded_channels_with_filter)) -> bool + Copy>(&self, f: Fn) -> Vec { // Allocate our best estimate of the number of channels we have in the `res` // Vec. Sadly the `short_to_chan_info` map doesn't cover channels without @@ -3329,6 +3598,11 @@ where .filter_map(|(chan_id, phase)| match phase { // Only `Channels` in the `ChannelPhase::Funded` phase can be considered funded. 
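// --- Illustrative usage sketch, not part of the patch: initiating a splice-in via
// the `splice_channel` API defined above. Assumes the `splicing` cfg is enabled and
// that `cm` implements `AChannelManager`; the helper name and argument values are
// hypothetical. Per the checks above, only splice-in (a positive contribution) is
// currently accepted, and a channel may have at most one pending splice.
use bitcoin::{Transaction, TxIn};
use bitcoin::secp256k1::PublicKey;
use lightning::ln::channelmanager::AChannelManager;
use lightning::ln::types::ChannelId;
use lightning::util::errors::APIError;

fn splice_in<CM: AChannelManager>(
    cm: &CM, channel_id: &ChannelId, counterparty_node_id: &PublicKey,
    contribution_sats: i64, funding_inputs: Vec<(TxIn, Transaction)>,
    funding_feerate_perkw: u32, locktime: u32,
) -> Result<(), APIError> {
    cm.get_cm().splice_channel(
        channel_id, counterparty_node_id, contribution_sats,
        funding_inputs, funding_feerate_perkw, locktime,
    )
}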
ChannelPhase::Funded(chan) => Some((chan_id, chan)), + // Both pre and post exist + #[cfg(splicing)] + ChannelPhase::RefundingV2((pre_chan, _post_chans)) => { + Some((chan_id, pre_chan)) + }, _ => None, }) .filter(f) @@ -3674,6 +3948,8 @@ where // Unfunded channel has no update (None, chan_phase.context().get_counterparty_node_id()) }, + #[cfg(splicing)] + ChannelPhase::RefundingV2(_) => todo!("splicing"), } } else if peer_state.inbound_channel_request_by_id.remove(channel_id).is_some() { log_error!(logger, "Force-closing channel {}", &channel_id); @@ -4128,39 +4404,44 @@ where let mut peer_state_lock = peer_state_mutex.lock().unwrap(); let peer_state = &mut *peer_state_lock; if let hash_map::Entry::Occupied(mut chan_phase_entry) = peer_state.channel_by_id.entry(id) { - match chan_phase_entry.get_mut() { - ChannelPhase::Funded(chan) => { - if !chan.context.is_live() { - return Err(APIError::ChannelUnavailable{err: "Peer for first hop currently disconnected".to_owned()}); - } - let funding_txo = chan.context.get_funding_txo().unwrap(); - let logger = WithChannelContext::from(&self.logger, &chan.context); - let send_res = chan.send_htlc_and_commit(htlc_msat, payment_hash.clone(), - htlc_cltv, HTLCSource::OutboundRoute { - path: path.clone(), - session_priv: session_priv.clone(), - first_hop_htlc_msat: htlc_msat, - payment_id, - }, onion_packet, None, &self.fee_estimator, &&logger); - match break_chan_phase_entry!(self, send_res, chan_phase_entry) { - Some(monitor_update) => { - match handle_new_monitor_update!(self, funding_txo, monitor_update, peer_state_lock, peer_state, per_peer_state, chan) { - false => { - // Note that MonitorUpdateInProgress here indicates (per function - // docs) that we will resend the commitment update once monitor - // updating completes. Therefore, we must return an error - // indicating that it is unsafe to retry the payment wholesale, - // which we do in the send_payment check for - // MonitorUpdateInProgress, below. - return Err(APIError::MonitorUpdateInProgress); - }, - true => {}, - } + let chan = match chan_phase_entry.get_mut() { + ChannelPhase::Funded(chan) => { chan }, + // Both pre and post exist + // TODO(splicing): Handle on both + #[cfg(splicing)] + ChannelPhase::RefundingV2((ref mut pre_chan, ref mut _post_chans)) => { + pre_chan + }, + _ => return Err(APIError::ChannelUnavailable{err: "Channel to first hop is unfunded".to_owned()}), + }; + if !chan.context.is_live() { + return Err(APIError::ChannelUnavailable{err: "Peer for first hop currently disconnected".to_owned()}); + } + let funding_txo = chan.context.get_funding_txo().unwrap(); + let logger = WithChannelContext::from(&self.logger, &chan.context); + let send_res = chan.send_htlc_and_commit(htlc_msat, payment_hash.clone(), + htlc_cltv, HTLCSource::OutboundRoute { + path: path.clone(), + session_priv: session_priv.clone(), + first_hop_htlc_msat: htlc_msat, + payment_id, + }, onion_packet, None, &self.fee_estimator, &&logger); + match break_chan_phase_entry!(self, send_res, chan_phase_entry) { + Some(monitor_update) => { + match handle_new_monitor_update!(self, funding_txo, monitor_update, peer_state_lock, peer_state, per_peer_state, chan) { + false => { + // Note that MonitorUpdateInProgress here indicates (per function + // docs) that we will resend the commitment update once monitor + // updating completes. Therefore, we must return an error + // indicating that it is unsafe to retry the payment wholesale, + // which we do in the send_payment check for + // MonitorUpdateInProgress, below. 
+ return Err(APIError::MonitorUpdateInProgress); }, - None => {}, + true => {}, } }, - _ => return Err(APIError::ChannelUnavailable{err: "Channel to first hop is unfunded".to_owned()}), + None => {}, }; } else { // The channel was likely removed after we fetched the id from the @@ -4753,6 +5034,130 @@ where result } + /* Note: contribute_funding_inputs() is no longer used + /// Call this to contribute inputs to a funding transaction for dual-funding. + /// + /// Returns an [`APIError::APIMisuseError`] if the contributed inputs spent non-SegWit outputs + /// or if the input amounts will not sufficiently cover the holder `funding_satoshis` and fees. + /// Any amount left over and above dust will be returned as change. + /// + /// Returns [`APIError::ChannelUnavailable`] if a inputs have already been provided for the + /// funding transaction of the channel or if the channel has been closed as indicated by + /// [`Event::ChannelClosed`]. + /// + /// [`Event::FundingInputsContributionReady`]: crate::events::Event::FundingInputsContributionReady + /// [`Event::ChannelClosed`]: crate::events::Event::ChannelClosed + #[cfg(any(dual_funding, splicing))] + pub fn contribute_funding_inputs(&self, channel_id: &ChannelId, counterparty_node_id: &PublicKey, + funding_inputs: Vec<(TxIn, Transaction)>) -> Result<(), APIError> { + let funding_inputs = funding_inputs.into_iter().map(|(txin, tx)| { + match TransactionU16LenLimited::new(tx) { + Ok(tx) => Ok((txin, tx)), + Err(err) => Err(err) + } + }).collect::, ()>>() + .map_err(|_| APIError::APIMisuseError { err: "One or more transactions had a serialized length exceeding 65535 bytes".into() })?; + + let per_peer_state = self.per_peer_state.read().unwrap(); + let peer_state_mutex = per_peer_state.get(counterparty_node_id) + .ok_or_else(|| APIError::ChannelUnavailable { err: format!("Can't find a peer matching the passed counterparty node_id {}", counterparty_node_id) })?; + + let mut peer_state_lock = peer_state_mutex.lock().unwrap(); + let peer_state = &mut *peer_state_lock; + match peer_state.channel_by_id.entry(*channel_id) { + hash_map::Entry::Occupied(mut phase) => { + let tx_msg_opt = match phase.get_mut() { + ChannelPhase::UnfundedOutboundV2(chan) => { + chan.begin_interactive_funding_tx_construction(&self.signer_provider, + &self.entropy_source, self.get_our_node_id(), funding_inputs)? + }, + ChannelPhase::UnfundedInboundV2(chan) => { + chan.begin_interactive_funding_tx_construction(&self.signer_provider, + &self.entropy_source, self.get_our_node_id(), funding_inputs)? 
+ }, + _ => { + return Err(APIError::ChannelUnavailable { + err: format!("Channel with ID {} is not an unfunded V2 channel", counterparty_node_id) }); + }, + }; + if let Some(tx_msg) = tx_msg_opt { + let msg_send_event = match tx_msg { + InteractiveTxMessageSend::TxAddInput(msg) => events::MessageSendEvent::SendTxAddInput { + node_id: *counterparty_node_id, msg }, + InteractiveTxMessageSend::TxAddOutput(msg) => events::MessageSendEvent::SendTxAddOutput { + node_id: *counterparty_node_id, msg }, + InteractiveTxMessageSend::TxComplete(msg) => events::MessageSendEvent::SendTxComplete { + node_id: *counterparty_node_id, msg }, + }; + peer_state.pending_msg_events.push(msg_send_event); + } + }, + hash_map::Entry::Vacant(_) => return Err(APIError::ChannelUnavailable {err: format!( + "Channel with id {} not found for the passed counterparty node_id {}", + channel_id, counterparty_node_id) }), + } + Ok(()) + } + */ + + /// Handles a signed funding transaction generated by interactive transaction construction and + /// provided by the client. + /// + /// Do NOT broadcast the funding transaction yourself. When we have safely received our + /// counterparty's signature(s) the funding transaction will automatically be broadcast via the + /// [`BroadcasterInterface`] provided when this `ChannelManager` was constructed. + #[cfg(any(dual_funding, splicing))] + pub fn funding_transaction_signed(&self, channel_id: &ChannelId, counterparty_node_id: &PublicKey, + transaction: Transaction) -> Result<(), APIError> { + let witnesses: Vec<_> = transaction.input.into_iter().filter_map(|input| { + if input.witness.is_empty() { None } else { Some(input.witness) } + }).collect(); + + let per_peer_state = self.per_peer_state.read().unwrap(); + let peer_state_mutex = per_peer_state.get(counterparty_node_id) + .ok_or_else(|| APIError::ChannelUnavailable { + err: format!("Can't find a peer matching the passed counterparty node_id {}", + counterparty_node_id) })?; + + let mut peer_state_lock = peer_state_mutex.lock().unwrap(); + let peer_state = &mut *peer_state_lock; + + match peer_state.channel_by_id.get_mut(channel_id) { + Some(ChannelPhase::Funded(chan)) => { + let tx_signatures_opt = chan.funding_transaction_signed(channel_id, witnesses) + .map_err(|_err| APIError::APIMisuseError { + err: format!("Channel with id {} has no pending signing session, not expecting funding signatures", channel_id) + })?; + if let Some(tx_signatures) = tx_signatures_opt { + peer_state.pending_msg_events.push(events::MessageSendEvent::SendTxSignatures { + node_id: *counterparty_node_id, + msg: tx_signatures, + }); + } + }, + #[cfg(splicing)] + Some(ChannelPhase::RefundingV2((_, chans))) => { + let tx_signatures_opt = chans.funding_transaction_signed(channel_id, witnesses) + .map_err(|_err| APIError::APIMisuseError { + err: format!("Channel with id {} not expecting funding signatures", channel_id) + })?; + if let Some(tx_signatures) = tx_signatures_opt { + peer_state.pending_msg_events.push(events::MessageSendEvent::SendTxSignatures { + node_id: *counterparty_node_id, + msg: tx_signatures, + }); + } + }, + Some(_) => return Err(APIError::APIMisuseError { + err: format!("Channel with id {} not expecting funding signatures", channel_id)}), + None => return Err(APIError::ChannelUnavailable{ + err: format!("Channel with id {} not found for the passed counterparty node_id {}", channel_id, + counterparty_node_id) }), + } + + Ok(()) + } + /// Atomically applies partial updates to the [`ChannelConfig`] of the given channels. 
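// --- Illustrative usage sketch, not part of the patch: returning a fully signed,
// interactively constructed funding transaction to LDK via
// `funding_transaction_signed` (defined above). Assumes the `dual_funding`/`splicing`
// cfg and `AChannelManager` as elsewhere. As documented above, the caller must not
// broadcast the transaction itself; LDK broadcasts it once the counterparty's
// `tx_signatures` have been received.
use bitcoin::Transaction;
use bitcoin::secp256k1::PublicKey;
use lightning::ln::channelmanager::AChannelManager;
use lightning::ln::types::ChannelId;
use lightning::util::errors::APIError;

fn submit_signed_funding_tx<CM: AChannelManager>(
    cm: &CM, channel_id: &ChannelId, counterparty_node_id: &PublicKey,
    signed_tx: Transaction,
) -> Result<(), APIError> {
    cm.get_cm().funding_transaction_signed(channel_id, counterparty_node_id, signed_tx)
}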
/// /// Once the updates are applied, each eligible channel (advertised with a known short channel @@ -5923,6 +6328,8 @@ where process_unfunded_channel_tick(chan_id, &mut chan.context, &mut chan.unfunded_context, pending_msg_events, counterparty_node_id) }, + #[cfg(splicing)] + ChannelPhase::RefundingV2(_) => todo!("splicing"), } }); @@ -6157,12 +6564,22 @@ where let peer_state = &mut *peer_state_lock; match peer_state.channel_by_id.entry(channel_id) { hash_map::Entry::Occupied(chan_phase_entry) => { - if let ChannelPhase::Funded(chan) = chan_phase_entry.get() { - self.get_htlc_inbound_temp_fail_err_and_data(0x1000|7, &chan) - } else { - // We shouldn't be trying to fail holding cell HTLCs on an unfunded channel. - debug_assert!(false); - (0x4000|10, Vec::new()) + match chan_phase_entry.get() { + ChannelPhase::Funded(chan) => { + self.get_htlc_inbound_temp_fail_err_and_data(0x1000|7, &chan) + }, + // Both post and pre exist + // TODO(splicing): Handle on both + #[cfg(splicing)] + ChannelPhase::RefundingV2((pre_chan, _post_chans)) => { + let chan = pre_chan; + self.get_htlc_inbound_temp_fail_err_and_data(0x1000|7, &chan) + } + _ => { + // We shouldn't be trying to fail holding cell HTLCs on an unfunded channel. + debug_assert!(false); + (0x4000|10, Vec::new()) + } } }, hash_map::Entry::Vacant(_) => (0x4000|10, Vec::new()) @@ -6723,7 +7140,7 @@ where /// Gets the node_id held by this ChannelManager pub fn get_our_node_id(&self) -> PublicKey { - self.our_network_pubkey.clone() + self.our_network_pubkey } fn handle_monitor_update_completion_actions>(&self, actions: I) { @@ -6844,7 +7261,7 @@ where } if let Some(tx) = funding_broadcastable { - log_info!(logger, "Broadcasting funding transaction with txid {}", tx.txid()); + log_info!(logger, "Broadcasting funding transaction with txid {} len {}", tx.txid(), tx.encode().len()); self.tx_broadcaster.broadcast_transactions(&[&tx]); } @@ -6905,6 +7322,8 @@ where } /// Accepts a request to open a channel after a [`Event::OpenChannelRequest`]. + // TODO(dual_funding): Make these part of doc comments when #[cfg(dual_funding)] is dropped. + // Can also be called after a [`Event::OpenChannelV2Request`] when we are not contributing any funds. /// /// The `temporary_channel_id` parameter indicates which inbound channel should be accepted, /// and the `counterparty_node_id` parameter is the id of the peer which has requested to open @@ -6919,13 +7338,18 @@ where /// used to accept such channels. /// /// [`Event::OpenChannelRequest`]: events::Event::OpenChannelRequest + // TODO(dual_funding): Make these part of doc comments when #[cfg(dual_funding)] is dropped. + // [`Event::OpenChannelV2Request`]: events::Event::OpenChannelV2Request /// [`Event::ChannelClosed::user_channel_id`]: events::Event::ChannelClosed::user_channel_id pub fn accept_inbound_channel(&self, temporary_channel_id: &ChannelId, counterparty_node_id: &PublicKey, user_channel_id: u128) -> Result<(), APIError> { - self.do_accept_inbound_channel(temporary_channel_id, counterparty_node_id, false, user_channel_id) + self.do_accept_inbound_channel(temporary_channel_id, counterparty_node_id, false, user_channel_id, 0, vec![]) } - /// Accepts a request to open a channel after a [`events::Event::OpenChannelRequest`], treating - /// it as confirmed immediately. + /// Accepts a request to open a channel after a [`Event::OpenChannelRequest`], treating it as + /// confirmed immediately. + // TODO(dual_funding): Make these part of doc comments when #[cfg(dual_funding)] is dropped. 
+ // Can also be called after a [`Event::OpenChannelV2Request`] when we are not contributing any, + // funds, treating it as confirmed immediately. /// /// The `user_channel_id` parameter will be provided back in /// [`Event::ChannelClosed::user_channel_id`] to allow tracking of which events correspond @@ -6941,13 +7365,61 @@ where /// does not pay to the correct script the correct amount, *you will lose funds*. /// /// [`Event::OpenChannelRequest`]: events::Event::OpenChannelRequest + // TODO(dual_funding): Make these part of doc comments when #[cfg(dual_funding)] is dropped. + /// [`Event::OpenChannelV2Request`]: events::Event::OpenChannelV2Request /// [`Event::ChannelClosed::user_channel_id`]: events::Event::ChannelClosed::user_channel_id pub fn accept_inbound_channel_from_trusted_peer_0conf(&self, temporary_channel_id: &ChannelId, counterparty_node_id: &PublicKey, user_channel_id: u128) -> Result<(), APIError> { - self.do_accept_inbound_channel(temporary_channel_id, counterparty_node_id, true, user_channel_id) + self.do_accept_inbound_channel(temporary_channel_id, counterparty_node_id, true, user_channel_id, 0, vec![]) } - fn do_accept_inbound_channel(&self, temporary_channel_id: &ChannelId, counterparty_node_id: &PublicKey, accept_0conf: bool, user_channel_id: u128) -> Result<(), APIError> { - + /// Accepts a request to open a dual-funded channel with a contribution provided by us after an + /// [`Event::OpenChannelV2Request`]. + /// + /// The `temporary_channel_id` parameter indicates which inbound channel should be accepted, + /// and the `counterparty_node_id` parameter is the id of the peer which has requested to open + /// the channel. + /// + /// The `user_channel_id` parameter will be provided back in + /// [`Event::ChannelClosed::user_channel_id`] to allow tracking of which events correspond + /// with which `accept_inbound_channel_*` call. + /// + /// `funding_satoshis` is the amount we are contributing to the channel. + /// Raises [`APIError::APIMisuseError`] when `funding_satoshis` > 2**24. + /// + /// Note that this method will return an error and reject the channel, if it requires support + /// for zero confirmations. + /// TODO(dual_funding): Discussion on complications with 0conf dual-funded channels where "locking" + /// of UTXOs used for funding would be required and other issues. 
+ /// See: https://lists.linuxfoundation.org/pipermail/lightning-dev/2023-May/003920.html + /// + /// + /// [`Event::OpenChannelV2Request`]: events::Event::OpenChannelV2Request + /// [`Event::ChannelClosed::user_channel_id`]: events::Event::ChannelClosed::user_channel_id + #[cfg(any(dual_funding, splicing))] + pub fn accept_inbound_channel_with_contribution(&self, temporary_channel_id: &ChannelId, + counterparty_node_id: &PublicKey, user_channel_id: u128, funding_satoshis: u64, + funding_inputs: Vec<(TxIn, Transaction)>) -> Result<(), APIError> { + let funding_inputs = Self::length_limit_holder_input_prev_txs(funding_inputs)?; + self.do_accept_inbound_channel(temporary_channel_id, counterparty_node_id, false, user_channel_id, + funding_satoshis, funding_inputs) + } + + #[cfg(any(dual_funding, splicing))] + fn length_limit_holder_input_prev_txs(funding_inputs: Vec<(TxIn, Transaction)>) -> Result, APIError> { + funding_inputs.into_iter().map(|(txin, tx)| { + match TransactionU16LenLimited::new(tx) { + Ok(tx) => Ok((txin, tx)), + Err(err) => Err(err) + } + }).collect::, ()>>() + .map_err(|_| APIError::APIMisuseError { err: "One or more transactions had a serialized length exceeding 65535 bytes".into() }) + } + + // TODO(dual_funding): Remove param _-prefix once #[cfg(dual_funding)] is dropped. + fn do_accept_inbound_channel( + &self, temporary_channel_id: &ChannelId, counterparty_node_id: &PublicKey, accept_0conf: bool, + user_channel_id: u128, _funding_satoshis: u64, _funding_inputs: Vec<(TxIn, TransactionU16LenLimited)>, + ) -> Result<(), APIError> { let logger = WithContext::from(&self.logger, Some(*counterparty_node_id), Some(*temporary_channel_id)); let _persistence_guard = PersistenceNotifierGuard::notify_on_drop(self); @@ -6972,12 +7444,50 @@ where let res = match peer_state.inbound_channel_request_by_id.remove(temporary_channel_id) { Some(unaccepted_channel) => { let best_block_height = self.best_block.read().unwrap().height; - InboundV1Channel::new(&self.fee_estimator, &self.entropy_source, &self.signer_provider, - counterparty_node_id.clone(), &self.channel_type_features(), &peer_state.latest_features, - &unaccepted_channel.open_channel_msg, user_channel_id, &self.default_configuration, best_block_height, - &self.logger, accept_0conf).map_err(|err| MsgHandleErrInternal::from_chan_no_close(err, *temporary_channel_id)) + match unaccepted_channel.open_channel_msg { + OpenChannelMessage::V1(open_channel_msg) => { + InboundV1Channel::new(&self.fee_estimator, &self.entropy_source, &self.signer_provider, + *counterparty_node_id, &self.channel_type_features(), &peer_state.latest_features, + &open_channel_msg, user_channel_id, &self.default_configuration, best_block_height, + &self.logger, accept_0conf).map(|channel| ChannelPhase::UnfundedInboundV1(channel)) + .map_err(|err| MsgHandleErrInternal::from_chan_no_close(err, *temporary_channel_id)) + }, + #[cfg(any(dual_funding, splicing))] + OpenChannelMessage::V2(open_channel_msg) => { + let channel_res = InboundV2Channel::new(&self.fee_estimator, &self.entropy_source, &self.signer_provider, + counterparty_node_id.clone(), &self.channel_type_features(), &peer_state.latest_features, + &open_channel_msg, _funding_satoshis, _funding_inputs, user_channel_id, &self.default_configuration, best_block_height, + &self.logger); + match channel_res { + Ok(mut channel) => { + let tx_msg_opt_res = channel.begin_interactive_funding_tx_construction(&self.signer_provider, + &self.entropy_source, self.get_our_node_id()); + match tx_msg_opt_res { + 
Ok(tx_msg_opt) => { + if let Some(tx_msg) = tx_msg_opt { + let msg_send_event = match tx_msg { + InteractiveTxMessageSend::TxAddInput(msg) => events::MessageSendEvent::SendTxAddInput { + node_id: *counterparty_node_id, msg }, + InteractiveTxMessageSend::TxAddOutput(msg) => events::MessageSendEvent::SendTxAddOutput { + node_id: *counterparty_node_id, msg }, + InteractiveTxMessageSend::TxComplete(msg) => events::MessageSendEvent::SendTxComplete { + node_id: *counterparty_node_id, msg }, + }; + peer_state.pending_msg_events.push(msg_send_event); + } + Ok(ChannelPhase::UnfundedInboundV2(channel)) + }, + Err(_) => { + Err(MsgHandleErrInternal::from_chan_no_close(ChannelError::Close("V2 channel rejected due to sender error".into()), *temporary_channel_id)) + } + } + }, + Err(_) => Err(MsgHandleErrInternal::from_chan_no_close(ChannelError::Close("V2 channel rejected due to sender error".into()), *temporary_channel_id)), + } + }, + } }, - _ => { + None => { let err_str = "No such channel awaiting to be accepted.".to_owned(); log_error!(logger, "{}", err_str); @@ -6992,19 +7502,19 @@ where match handle_error!(self, Result::<(), MsgHandleErrInternal>::Err(err), *counterparty_node_id) { Ok(_) => unreachable!("`handle_error` only returns Err as we've passed in an Err"), Err(e) => { - return Err(APIError::ChannelUnavailable { err: e.err }); + Err(APIError::ChannelUnavailable { err: e.err }) }, } } - Ok(mut channel) => { + Ok(mut channel_phase) => { if accept_0conf { - // This should have been correctly configured by the call to InboundV1Channel::new. - debug_assert!(channel.context.minimum_depth().unwrap() == 0); - } else if channel.context.get_channel_type().requires_zero_conf() { + // This should have been correctly configured by the call to Inbound(V1/V2)Channel::new. + debug_assert!(channel_phase.context().minimum_depth().unwrap() == 0); + } else if channel_phase.context().get_channel_type().requires_zero_conf() { let send_msg_err_event = events::MessageSendEvent::HandleError { - node_id: channel.context.get_counterparty_node_id(), + node_id: channel_phase.context().get_counterparty_node_id(), action: msgs::ErrorAction::SendErrorMessage{ - msg: msgs::ErrorMessage { channel_id: temporary_channel_id.clone(), data: "No zero confirmation channels accepted".to_owned(), } + msg: msgs::ErrorMessage { channel_id: *temporary_channel_id, data: "No zero confirmation channels accepted".to_owned(), } } }; peer_state.pending_msg_events.push(send_msg_err_event); @@ -7018,9 +7528,9 @@ where // channels per-peer we can accept channels from a peer with existing ones. if is_only_peer_channel && peers_without_funded_channels >= MAX_UNFUNDED_CHANNEL_PEERS { let send_msg_err_event = events::MessageSendEvent::HandleError { - node_id: channel.context.get_counterparty_node_id(), + node_id: channel_phase.context().get_counterparty_node_id(), action: msgs::ErrorAction::SendErrorMessage{ - msg: msgs::ErrorMessage { channel_id: temporary_channel_id.clone(), data: "Have too many peers with unfunded channels, not accepting new ones".to_owned(), } + msg: msgs::ErrorMessage { channel_id: *temporary_channel_id, data: "Have too many peers with unfunded channels, not accepting new ones".to_owned(), } } }; peer_state.pending_msg_events.push(send_msg_err_event); @@ -7033,20 +7543,68 @@ where // Now that we know we have a channel, assign an outbound SCID alias. 
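// --- Illustrative usage sketch, not part of the patch: accepting a manually gated
// dual-funded channel with a local contribution via
// `accept_inbound_channel_with_contribution` (defined above). The
// `Event::OpenChannelV2Request` variant and its fields follow this branch; the
// `user_channel_id`, contribution amount, and `our_inputs` are hypothetical and
// would come from the application's wallet.
use bitcoin::{Transaction, TxIn};
use lightning::events::Event;
use lightning::ln::channelmanager::AChannelManager;

fn on_open_channel_v2_request<CM: AChannelManager>(
    cm: &CM, event: Event, our_inputs: Vec<(TxIn, Transaction)>,
) {
    if let Event::OpenChannelV2Request { temporary_channel_id, counterparty_node_id, .. } = event {
        // Contribute 50_000 sats of our own funds; pass 0 and an empty Vec to accept
        // without contributing (or use `accept_inbound_channel` instead).
        if let Err(e) = cm.get_cm().accept_inbound_channel_with_contribution(
            &temporary_channel_id, &counterparty_node_id, 42, 50_000, our_inputs,
        ) {
            // Fails e.g. if a provided previous transaction serializes to more than
            // 65535 bytes, per `length_limit_holder_input_prev_txs` above.
            eprintln!("failed to accept inbound dual-funded channel: {:?}", e);
        }
    }
}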
let outbound_scid_alias = self.create_and_insert_outbound_scid_alias(); - channel.context.set_outbound_scid_alias(outbound_scid_alias); - - peer_state.pending_msg_events.push(events::MessageSendEvent::SendAcceptChannel { - node_id: channel.context.get_counterparty_node_id(), - msg: channel.accept_inbound_channel(), - }); + channel_phase.context_mut().set_outbound_scid_alias(outbound_scid_alias); - peer_state.channel_by_id.insert(temporary_channel_id.clone(), ChannelPhase::UnfundedInboundV1(channel)); + match channel_phase { + ChannelPhase::UnfundedInboundV1(mut channel) => { + peer_state.pending_msg_events.push(events::MessageSendEvent::SendAcceptChannel { + node_id: channel.context.get_counterparty_node_id(), + msg: channel.accept_inbound_channel() }); + peer_state.channel_by_id.insert(*temporary_channel_id, + ChannelPhase::UnfundedInboundV1(channel)); + }, + #[cfg(any(dual_funding, splicing))] + ChannelPhase::UnfundedInboundV2(mut channel) => { + peer_state.pending_msg_events.push(events::MessageSendEvent::SendAcceptChannelV2 { + node_id: channel.context.get_counterparty_node_id(), + msg: channel.accept_inbound_dual_funded_channel() }); + peer_state.channel_by_id.insert(channel.context.channel_id(), + ChannelPhase::UnfundedInboundV2(channel)); + }, + _ => { + debug_assert!(false); + // This should be unreachable, but if it is then we would have dropped the inbound channel request + // and there'd be nothing to clean up as we haven't added anything to the channel_by_id map yet. + return Err(APIError::APIMisuseError { + err: "Channel somehow changed to a non-inbound channel before accepting".to_owned() }) + } + } Ok(()) }, } } + /// Checks related to inputs and their amounts related to establishing dual-funded channels. + #[cfg(any(dual_funding, splicing))] + fn dual_funding_amount_checks(funding_satoshis: u64, funding_inputs: &Vec<(TxIn, Transaction)>) + -> Result<(), APIError> { + if funding_satoshis < 1000 { + return Err(APIError::APIMisuseError { + err: format!("Funding amount must be at least 1000 satoshis. It was {} sats", funding_satoshis), + }); + } + + // Check that vouts exist for each TxIn in provided transactions. + for (idx, input) in funding_inputs.iter().enumerate() { + if input.1.output.get(input.0.previous_output.vout as usize).is_none() { + return Err(APIError::APIMisuseError { + err: format!("Transaction with txid {} does not have an output with vout of {} corresponding to TxIn at funding_inputs[{}]", + input.1.txid(), input.0.previous_output.vout, idx), + }); + } + } + + let total_input_satoshis: u64 = funding_inputs.iter().map(|input| input.1.output[input.0.previous_output.vout as usize].value).sum(); + if total_input_satoshis < funding_satoshis { + Err(APIError::APIMisuseError { + err: format!("Total value of funding inputs must be at least funding amount. It was {} sats", + total_input_satoshis) }) + } else { + Ok(()) + } + } + /// Gets the number of peers which match the given filter and do not have any funded, outbound, /// or 0-conf channels. /// @@ -7090,7 +7648,7 @@ where num_unfunded_channels += 1; } }, - // TODO(dual_funding): Combine this match arm with above once #[cfg(any(dual_funding, splicing))] is removed. + // TODO(dual_funding): Combine this match arm with above once #[cfg(dual_funding)] is removed. 
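// --- Illustrative usage sketch, not part of the patch: assembling the
// `Vec<(TxIn, Transaction)>` funding inputs expected by the dual-funding APIs above
// (each `TxIn` must reference an existing output of the accompanying previous
// transaction, and the inputs must cover `funding_satoshis`, per
// `dual_funding_amount_checks`), then opening a channel with
// `create_dual_funded_channel`, assumed to return the temporary `ChannelId` like
// `create_channel`. UTXO selection, `prev_tx`/`vout`, and the `user_channel_id` are
// hypothetical and left to the wallet.
use bitcoin::{OutPoint, ScriptBuf, Sequence, Transaction, TxIn, Witness};
use bitcoin::secp256k1::PublicKey;
use lightning::ln::channelmanager::AChannelManager;
use lightning::ln::types::ChannelId;
use lightning::util::errors::APIError;

fn open_dual_funded_channel<CM: AChannelManager>(
    cm: &CM, counterparty: PublicKey, prev_tx: Transaction, vout: u32, funding_sats: u64,
) -> Result<ChannelId, APIError> {
    let txin = TxIn {
        previous_output: OutPoint { txid: prev_tx.txid(), vout },
        script_sig: ScriptBuf::new(), // the spent output must be SegWit
        sequence: Sequence::ENABLE_RBF_NO_LOCKTIME,
        // Witnesses are supplied later, via `funding_transaction_signed`.
        witness: Witness::new(),
    };
    cm.get_cm().create_dual_funded_channel(
        counterparty, funding_sats, vec![(txin, prev_tx)],
        None, // funding confirmation target: fall back to the default
        7,    // user_channel_id
        None, // override_config
    )
}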
#[cfg(any(dual_funding, splicing))] ChannelPhase::UnfundedInboundV2(chan) => { // Only inbound V2 channels that are not 0conf and that we do not contribute to will be @@ -7109,23 +7667,33 @@ where ChannelPhase::UnfundedOutboundV2(_) => { // Outbound channels don't contribute to the unfunded count in the DoS context. continue; - } + }, + #[cfg(splicing)] + ChannelPhase::RefundingV2(_) => todo!("splicing"), } } num_unfunded_channels + peer.inbound_channel_request_by_id.len() } - fn internal_open_channel(&self, counterparty_node_id: &PublicKey, msg: &msgs::OpenChannel) -> Result<(), MsgHandleErrInternal> { - // Note that the ChannelManager is NOT re-persisted on disk after this, so any changes are - // likely to be lost on restart! - if msg.common_fields.chain_hash != self.chain_hash { - return Err(MsgHandleErrInternal::send_err_msg_no_close("Unknown genesis block hash".to_owned(), - msg.common_fields.temporary_channel_id.clone())); + fn internal_open_channel(&self, counterparty_node_id: &PublicKey, msg: OpenChannelMessage) -> Result<(), MsgHandleErrInternal> { + let (chain_hash, temporary_channel_id) = match msg.clone() { + OpenChannelMessage::V1(msg) => (msg.common_fields.chain_hash, msg.common_fields.temporary_channel_id), + #[cfg(any(dual_funding, splicing))] + OpenChannelMessage::V2(msg) => (msg.common_fields.chain_hash, msg.common_fields.temporary_channel_id), + }; + + // Do common open_channel(2) checks + + // Note that the ChannelManager is NOT re-persisted on disk after this, so any changes are + // likely to be lost on restart! + if chain_hash != self.chain_hash { + return Err(MsgHandleErrInternal::send_err_msg_no_close("Unknown genesis block hash".to_owned(), + temporary_channel_id.clone())); } if !self.default_configuration.accept_inbound_channels { return Err(MsgHandleErrInternal::send_err_msg_no_close("No inbound channels accepted".to_owned(), - msg.common_fields.temporary_channel_id.clone())); + temporary_channel_id.clone())); } // Get the number of peers with channels, but without funded ones. 
We don't care too much @@ -7140,7 +7708,7 @@ where debug_assert!(false); MsgHandleErrInternal::send_err_msg_no_close( format!("Can't find a peer matching the passed counterparty node_id {}", counterparty_node_id), - msg.common_fields.temporary_channel_id.clone()) + temporary_channel_id.clone()) })?; let mut peer_state_lock = peer_state_mutex.lock().unwrap(); let peer_state = &mut *peer_state_lock; @@ -7154,80 +7722,131 @@ where { return Err(MsgHandleErrInternal::send_err_msg_no_close( "Have too many peers with unfunded channels, not accepting new ones".to_owned(), - msg.common_fields.temporary_channel_id.clone())); + temporary_channel_id.clone())); } let best_block_height = self.best_block.read().unwrap().height; if Self::unfunded_channel_count(peer_state, best_block_height) >= MAX_UNFUNDED_CHANS_PER_PEER { return Err(MsgHandleErrInternal::send_err_msg_no_close( format!("Refusing more than {} unfunded channels.", MAX_UNFUNDED_CHANS_PER_PEER), - msg.common_fields.temporary_channel_id.clone())); + temporary_channel_id.clone())); } - let channel_id = msg.common_fields.temporary_channel_id; + let channel_id = temporary_channel_id; let channel_exists = peer_state.has_channel(&channel_id); if channel_exists { return Err(MsgHandleErrInternal::send_err_msg_no_close( "temporary_channel_id collision for the same peer!".to_owned(), - msg.common_fields.temporary_channel_id.clone())); - } + temporary_channel_id.clone())); + } + + // Version-specific checks and logic + match msg { + OpenChannelMessage::V1(ref msg) => { + // If we're doing manual acceptance checks on the channel, then defer creation until we're sure we want to accept. + if self.default_configuration.manually_accept_inbound_channels { + let mut pending_events = self.pending_events.lock().unwrap(); + pending_events.push_back((events::Event::OpenChannelRequest { + temporary_channel_id: msg.common_fields.temporary_channel_id.clone(), + counterparty_node_id: counterparty_node_id.clone(), + funding_satoshis: msg.common_fields.funding_satoshis, + push_msat: msg.push_msat, + channel_type: msg.common_fields.channel_type.clone().unwrap(), + }, None)); + peer_state.inbound_channel_request_by_id.insert(temporary_channel_id, InboundChannelRequest { + open_channel_msg: OpenChannelMessage::V1(msg.clone()), + ticks_remaining: UNACCEPTED_INBOUND_CHANNEL_AGE_LIMIT_TICKS, + }); + return Ok(()); + } - // If we're doing manual acceptance checks on the channel, then defer creation until we're sure we want to accept. - if self.default_configuration.manually_accept_inbound_channels { - let channel_type = channel::channel_type_from_open_channel( - &msg.common_fields, &peer_state.latest_features, &self.channel_type_features() - ).map_err(|e| - MsgHandleErrInternal::from_chan_no_close(e, msg.common_fields.temporary_channel_id) - )?; - let mut pending_events = self.pending_events.lock().unwrap(); - pending_events.push_back((events::Event::OpenChannelRequest { - temporary_channel_id: msg.common_fields.temporary_channel_id.clone(), - counterparty_node_id: counterparty_node_id.clone(), - funding_satoshis: msg.common_fields.funding_satoshis, - push_msat: msg.push_msat, - channel_type, - }, None)); - peer_state.inbound_channel_request_by_id.insert(channel_id, InboundChannelRequest { - open_channel_msg: msg.clone(), - ticks_remaining: UNACCEPTED_INBOUND_CHANNEL_AGE_LIMIT_TICKS, - }); - return Ok(()); - } + // Otherwise create the channel right now. 
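// --- Illustrative usage sketch, not part of the patch: with
// `UserConfig::manually_accept_inbound_channels` set, the deferred-creation path
// above surfaces `Event::OpenChannelRequest`, and the application decides whether
// to accept via `accept_inbound_channel`. The 100_000-sat policy threshold and the
// `user_channel_id` of 0 are hypothetical.
use lightning::events::Event;
use lightning::ln::channelmanager::AChannelManager;

fn on_open_channel_request<CM: AChannelManager>(cm: &CM, event: Event) {
    if let Event::OpenChannelRequest { temporary_channel_id, counterparty_node_id, funding_satoshis, .. } = event {
        if funding_satoshis >= 100_000 {
            let _ = cm.get_cm().accept_inbound_channel(&temporary_channel_id, &counterparty_node_id, 0);
        }
        // Otherwise the request can be rejected, e.g. with
        // `ChannelManager::force_close_without_broadcasting_txn`.
    }
}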
+ let mut random_bytes = [0u8; 16]; + random_bytes.copy_from_slice(&self.entropy_source.get_secure_random_bytes()[..16]); + let user_channel_id = u128::from_be_bytes(random_bytes); + let mut channel = match InboundV1Channel::new(&self.fee_estimator, &self.entropy_source, &self.signer_provider, + counterparty_node_id.clone(), &self.channel_type_features(), &peer_state.latest_features, &msg, user_channel_id, + &self.default_configuration, best_block_height, &self.logger, /*is_0conf=*/false) + { + Err(e) => { + return Err(MsgHandleErrInternal::from_chan_no_close(e, msg.common_fields.temporary_channel_id)); + }, + Ok(res) => res + }; - // Otherwise create the channel right now. - let mut random_bytes = [0u8; 16]; - random_bytes.copy_from_slice(&self.entropy_source.get_secure_random_bytes()[..16]); - let user_channel_id = u128::from_be_bytes(random_bytes); - let mut channel = match InboundV1Channel::new(&self.fee_estimator, &self.entropy_source, &self.signer_provider, - counterparty_node_id.clone(), &self.channel_type_features(), &peer_state.latest_features, msg, user_channel_id, - &self.default_configuration, best_block_height, &self.logger, /*is_0conf=*/false) - { - Err(e) => { - return Err(MsgHandleErrInternal::from_chan_no_close(e, msg.common_fields.temporary_channel_id)); + let channel_type = channel.context.get_channel_type(); + if channel_type.requires_zero_conf() { + return Err(MsgHandleErrInternal::send_err_msg_no_close("No zero confirmation channels accepted".to_owned(), msg.common_fields.temporary_channel_id.clone())); + } + if channel_type.requires_anchors_zero_fee_htlc_tx() { + return Err(MsgHandleErrInternal::send_err_msg_no_close("No channels with anchor outputs accepted".to_owned(), msg.common_fields.temporary_channel_id.clone())); + } + + let outbound_scid_alias = self.create_and_insert_outbound_scid_alias(); + channel.context.set_outbound_scid_alias(outbound_scid_alias); + + peer_state.pending_msg_events.push(events::MessageSendEvent::SendAcceptChannel { + node_id: counterparty_node_id.clone(), + msg: channel.accept_inbound_channel(), + }); + peer_state.channel_by_id.insert(temporary_channel_id, ChannelPhase::UnfundedInboundV1(channel)); }, - Ok(res) => res - }; + #[cfg(any(dual_funding, splicing))] + OpenChannelMessage::V2(ref msg) => { + // If we're doing manual acceptance checks on the channel, then defer creation until we're sure + // we want to accept and, optionally, contribute to the channel value. 
+ if self.default_configuration.manually_accept_inbound_channels { + let mut pending_events = self.pending_events.lock().unwrap(); + pending_events.push_back((events::Event::OpenChannelV2Request { + temporary_channel_id: msg.common_fields.temporary_channel_id.clone(), + counterparty_node_id: counterparty_node_id.clone(), + counterparty_funding_satoshis: msg.common_fields.funding_satoshis, + channel_type: msg.common_fields.channel_type.clone().unwrap(), + }, None)); + peer_state.inbound_channel_request_by_id.insert(temporary_channel_id, InboundChannelRequest { + open_channel_msg: OpenChannelMessage::V2(msg.clone()), + ticks_remaining: UNACCEPTED_INBOUND_CHANNEL_AGE_LIMIT_TICKS, + }); + return Ok(()); + } - let channel_type = channel.context.get_channel_type(); - if channel_type.requires_zero_conf() { - return Err(MsgHandleErrInternal::send_err_msg_no_close( - "No zero confirmation channels accepted".to_owned(), - msg.common_fields.temporary_channel_id.clone())); - } - if channel_type.requires_anchors_zero_fee_htlc_tx() { - return Err(MsgHandleErrInternal::send_err_msg_no_close( - "No channels with anchor outputs accepted".to_owned(), - msg.common_fields.temporary_channel_id.clone())); - } + // Otherwise create the channel right now. + let mut random_bytes = [0u8; 16]; + random_bytes.copy_from_slice(&self.entropy_source.get_secure_random_bytes()[..16]); + let user_channel_id = u128::from_be_bytes(random_bytes); + let mut channel = match InboundV2Channel::new(&self.fee_estimator, &self.entropy_source, + &self.signer_provider, counterparty_node_id.clone(), &self.channel_type_features(), + &peer_state.latest_features, &msg, 0, vec![], user_channel_id, &self.default_configuration, + best_block_height, &self.logger) + { + Err(e) => { + return Err(MsgHandleErrInternal::from_chan_no_close(e, msg.common_fields.temporary_channel_id)); + }, + Ok(res) => res + }; - let outbound_scid_alias = self.create_and_insert_outbound_scid_alias(); - channel.context.set_outbound_scid_alias(outbound_scid_alias); + let channel_type = channel.context.get_channel_type(); + if channel_type.requires_zero_conf() { + return Err(MsgHandleErrInternal::send_err_msg_no_close("No zero confirmation channels accepted".to_owned(), msg.common_fields.temporary_channel_id.clone())); + } + if channel_type.requires_anchors_zero_fee_htlc_tx() { + return Err(MsgHandleErrInternal::send_err_msg_no_close("No channels with anchor outputs accepted".to_owned(), msg.common_fields.temporary_channel_id.clone())); + } - peer_state.pending_msg_events.push(events::MessageSendEvent::SendAcceptChannel { - node_id: counterparty_node_id.clone(), - msg: channel.accept_inbound_channel(), - }); - peer_state.channel_by_id.insert(channel_id, ChannelPhase::UnfundedInboundV1(channel)); + let outbound_scid_alias = self.create_and_insert_outbound_scid_alias(); + channel.context.set_outbound_scid_alias(outbound_scid_alias); + + channel.begin_interactive_funding_tx_construction(&self.signer_provider, &self.entropy_source, self.get_our_node_id()) + .map_err(|_| MsgHandleErrInternal::send_err_msg_no_close( + "Failed to start interactive transaction construction".to_owned(), msg.common_fields.temporary_channel_id))?; + + peer_state.pending_msg_events.push(events::MessageSendEvent::SendAcceptChannelV2 { + node_id: counterparty_node_id.clone(), + msg: channel.accept_inbound_dual_funded_channel(), + }); + peer_state.channel_by_id.insert(channel.context.channel_id(), ChannelPhase::UnfundedInboundV2(channel)); + }, + } Ok(()) } @@ -7269,6 +7888,59 @@ where Ok(()) } + 
#[cfg(any(dual_funding, splicing))] + fn internal_accept_channel_v2(&self, counterparty_node_id: &PublicKey, msg: &msgs::AcceptChannelV2) -> Result<(), MsgHandleErrInternal> { + // Note that the ChannelManager is NOT re-persisted on disk after this, so any changes are + // likely to be lost on restart! + let per_peer_state = self.per_peer_state.read().unwrap(); + let peer_state_mutex = per_peer_state.get(counterparty_node_id) + .ok_or_else(|| { + debug_assert!(false); + MsgHandleErrInternal::send_err_msg_no_close(format!("Can't find a peer matching the passed counterparty node_id {}", counterparty_node_id), msg.common_fields.temporary_channel_id) + })?; + let mut peer_state_lock = peer_state_mutex.lock().unwrap(); + let peer_state = &mut *peer_state_lock; + + let (mut chan, channel_id) = { + match peer_state.channel_by_id.remove(&msg.common_fields.temporary_channel_id) { + Some(phase) => { + match phase { + ChannelPhase::UnfundedOutboundV2(mut chan) => { + if let Err(err) = chan.accept_channel_v2(&msg, &self.default_configuration.channel_handshake_limits, &peer_state.latest_features) { + let (_, res) = convert_chan_phase_err!(self, err, chan, &msg.common_fields.temporary_channel_id, UNFUNDED_CHANNEL); + let _: Result<(), _> = handle_error!(self, Err(res), *counterparty_node_id); + } + let channel_id = chan.context.channel_id(); + (chan, channel_id) + }, + _ => { + peer_state.channel_by_id.insert(msg.common_fields.temporary_channel_id, phase); + return Err(MsgHandleErrInternal::send_err_msg_no_close(format!("Got an unexpected accept_channel2 message from peer with counterparty_node_id {}", counterparty_node_id), msg.common_fields.temporary_channel_id)); + } + } + }, + None => return Err(MsgHandleErrInternal::send_err_msg_no_close(format!("Got a message for a channel from the wrong node! 
No such channel for the passed counterparty_node_id {}", counterparty_node_id), msg.common_fields.temporary_channel_id)) + } + }; + let tx_msg_opt = chan.begin_interactive_funding_tx_construction(&self.signer_provider, + &self.entropy_source, self.get_our_node_id()) + .map_err(|_| MsgHandleErrInternal::from_chan_no_close( + ChannelError::Close("V2 channel rejected due to sender error".into()), channel_id))?; + if let Some(tx_msg) = tx_msg_opt { + let msg_send_event = match tx_msg { + InteractiveTxMessageSend::TxAddInput(msg) => events::MessageSendEvent::SendTxAddInput { + node_id: *counterparty_node_id, msg }, + InteractiveTxMessageSend::TxAddOutput(msg) => events::MessageSendEvent::SendTxAddOutput { + node_id: *counterparty_node_id, msg }, + InteractiveTxMessageSend::TxComplete(msg) => events::MessageSendEvent::SendTxComplete { + node_id: *counterparty_node_id, msg }, + }; + peer_state.pending_msg_events.push(msg_send_event); + } + peer_state.channel_by_id.insert(chan.context.channel_id(), ChannelPhase::UnfundedOutboundV2(chan)); + Ok(()) + } + fn internal_funding_created(&self, counterparty_node_id: &PublicKey, msg: &msgs::FundingCreated) -> Result<(), MsgHandleErrInternal> { let best_block = *self.best_block.read().unwrap(); @@ -7426,6 +8098,352 @@ where } } + #[cfg(any(dual_funding, splicing))] + fn internal_tx_add_input(&self, counterparty_node_id: &PublicKey, msg: &msgs::TxAddInput) -> Result<(), MsgHandleErrInternal> { + let per_peer_state = self.per_peer_state.read().unwrap(); + let peer_state_mutex = per_peer_state.get(counterparty_node_id) + .ok_or_else(|| { + debug_assert!(false); + MsgHandleErrInternal::send_err_msg_no_close( + format!("Can't find a peer matching the passed counterparty node_id {}", counterparty_node_id), + msg.channel_id) + })?; + let mut peer_state_lock = peer_state_mutex.lock().unwrap(); + let peer_state = &mut *peer_state_lock; + match peer_state.channel_by_id.entry(msg.channel_id) { + hash_map::Entry::Occupied(mut chan_phase_entry) => { + let channel_phase = chan_phase_entry.get_mut(); + let msg_send_event = match channel_phase { + ChannelPhase::UnfundedInboundV2(ref mut channel) => { + channel.tx_add_input(msg).into_msg_send_event(counterparty_node_id) + }, + ChannelPhase::UnfundedOutboundV2(ref mut channel) => { + channel.tx_add_input(msg).into_msg_send_event(counterparty_node_id) + }, + #[cfg(splicing)] + ChannelPhase::RefundingV2((ref _pre_chan, ref mut post_chans)) => { + match post_chans.tx_add_input(msg) { + Ok(ia_res) => ia_res.into_msg_send_event(counterparty_node_id), + Err(err) => { + try_chan_phase_entry!(self, Err(ChannelError::Warn( + format!("Error while tx_add_input: {:?}", err) + )), chan_phase_entry) + } + } + }, + _ => try_chan_phase_entry!(self, Err(ChannelError::Warn( + "Got a tx_add_input message with no interactive transaction construction expected or in-progress" + .into())), chan_phase_entry) + }; + peer_state.pending_msg_events.push(msg_send_event); + Ok(()) + }, + hash_map::Entry::Vacant(_) => { + Err(MsgHandleErrInternal::send_err_msg_no_close(format!("Got a message for a channel from the wrong node! 
No such channel for the passed counterparty_node_id {}", counterparty_node_id), msg.channel_id)) + } + } + } + + #[cfg(any(dual_funding, splicing))] + fn internal_tx_add_output(&self, counterparty_node_id: &PublicKey, msg: &msgs::TxAddOutput) -> Result<(), MsgHandleErrInternal> { + let per_peer_state = self.per_peer_state.read().unwrap(); + let peer_state_mutex = per_peer_state.get(counterparty_node_id) + .ok_or_else(|| { + debug_assert!(false); + MsgHandleErrInternal::send_err_msg_no_close( + format!("Can't find a peer matching the passed counterparty node_id {}", counterparty_node_id), + msg.channel_id) + })?; + let mut peer_state_lock = peer_state_mutex.lock().unwrap(); + let peer_state = &mut *peer_state_lock; + match peer_state.channel_by_id.entry(msg.channel_id) { + hash_map::Entry::Occupied(mut chan_phase_entry) => { + let channel_phase = chan_phase_entry.get_mut(); + let msg_send_event = match channel_phase { + ChannelPhase::UnfundedInboundV2(ref mut channel) => { + channel.tx_add_output(msg).into_msg_send_event(counterparty_node_id) + }, + ChannelPhase::UnfundedOutboundV2(ref mut channel) => { + channel.tx_add_output(msg).into_msg_send_event(counterparty_node_id) + }, + #[cfg(splicing)] + ChannelPhase::RefundingV2((ref _pre_chan, ref mut post_chans)) => { + match post_chans.tx_add_output(msg) { + Ok(ia_res) => ia_res.into_msg_send_event(counterparty_node_id), + Err(err) => { + try_chan_phase_entry!(self, Err(ChannelError::Warn( + format!("Error while tx_add_output: {:?}", err) + )), chan_phase_entry) + } + } + }, + _ => try_chan_phase_entry!(self, Err(ChannelError::Warn( + "Got a tx_add_output message with no interactive transaction construction expected or in-progress" + .into())), chan_phase_entry) + }; + peer_state.pending_msg_events.push(msg_send_event); + Ok(()) + }, + hash_map::Entry::Vacant(_) => { + Err(MsgHandleErrInternal::send_err_msg_no_close(format!("Got a message for a channel from the wrong node! 
No such channel for the passed counterparty_node_id {}", counterparty_node_id), msg.channel_id)) + } + } + } + + #[cfg(any(dual_funding, splicing))] + fn internal_tx_remove_input(&self, counterparty_node_id: &PublicKey, msg: &msgs::TxRemoveInput) -> Result<(), MsgHandleErrInternal> { + let per_peer_state = self.per_peer_state.read().unwrap(); + let peer_state_mutex = per_peer_state.get(counterparty_node_id) + .ok_or_else(|| { + debug_assert!(false); + MsgHandleErrInternal::send_err_msg_no_close( + format!("Can't find a peer matching the passed counterparty node_id {}", counterparty_node_id), + msg.channel_id) + })?; + let mut peer_state_lock = peer_state_mutex.lock().unwrap(); + let peer_state = &mut *peer_state_lock; + match peer_state.channel_by_id.entry(msg.channel_id) { + hash_map::Entry::Occupied(mut chan_phase_entry) => { + let channel_phase = chan_phase_entry.get_mut(); + let msg_send_event = match channel_phase { + ChannelPhase::UnfundedInboundV2(ref mut channel) => { + channel.tx_remove_input(msg).into_msg_send_event(counterparty_node_id) + }, + ChannelPhase::UnfundedOutboundV2(ref mut channel) => { + channel.tx_remove_input(msg).into_msg_send_event(counterparty_node_id) + }, + _ => try_chan_phase_entry!(self, Err(ChannelError::Warn( + "Got a tx_remove_input message with no interactive transaction construction expected or in-progress" + .into())), chan_phase_entry) + }; + peer_state.pending_msg_events.push(msg_send_event); + Ok(()) + }, + hash_map::Entry::Vacant(_) => { + Err(MsgHandleErrInternal::send_err_msg_no_close(format!("Got a message for a channel from the wrong node! No such channel for the passed counterparty_node_id {}", counterparty_node_id), msg.channel_id)) + } + } + } + + #[cfg(any(dual_funding, splicing))] + fn internal_tx_remove_output(&self, counterparty_node_id: &PublicKey, msg: &msgs::TxRemoveOutput) -> Result<(), MsgHandleErrInternal> { + let per_peer_state = self.per_peer_state.read().unwrap(); + let peer_state_mutex = per_peer_state.get(counterparty_node_id) + .ok_or_else(|| { + debug_assert!(false); + MsgHandleErrInternal::send_err_msg_no_close( + format!("Can't find a peer matching the passed counterparty node_id {}", counterparty_node_id), + msg.channel_id) + })?; + let mut peer_state_lock = peer_state_mutex.lock().unwrap(); + let peer_state = &mut *peer_state_lock; + match peer_state.channel_by_id.entry(msg.channel_id) { + hash_map::Entry::Occupied(mut chan_phase_entry) => { + let channel_phase = chan_phase_entry.get_mut(); + let msg_send_event = match channel_phase { + ChannelPhase::UnfundedInboundV2(ref mut channel) => { + channel.tx_remove_output(msg).into_msg_send_event(counterparty_node_id) + }, + ChannelPhase::UnfundedOutboundV2(ref mut channel) => { + channel.tx_remove_output(msg).into_msg_send_event(counterparty_node_id) + }, + _ => try_chan_phase_entry!(self, Err(ChannelError::Warn( + "Got a tx_remove_output message with no interactive transaction construction expected or in-progress" + .into())), chan_phase_entry) + }; + peer_state.pending_msg_events.push(msg_send_event); + Ok(()) + }, + hash_map::Entry::Vacant(_) => { + Err(MsgHandleErrInternal::send_err_msg_no_close(format!("Got a message for a channel from the wrong node! 
No such channel for the passed counterparty_node_id {}", counterparty_node_id), msg.channel_id)) + } + } + } + + #[cfg(any(dual_funding, splicing))] + fn internal_tx_complete(&self, counterparty_node_id: &PublicKey, msg: &msgs::TxComplete) -> Result<(), MsgHandleErrInternal> { + let per_peer_state = self.per_peer_state.read().unwrap(); + let peer_state_mutex = per_peer_state.get(counterparty_node_id) + .ok_or_else(|| { + debug_assert!(false); + MsgHandleErrInternal::send_err_msg_no_close( + format!("Can't find a peer matching the passed counterparty node_id {}", counterparty_node_id), + msg.channel_id) + })?; + let mut peer_state_lock = peer_state_mutex.lock().unwrap(); + let peer_state = &mut *peer_state_lock; + match peer_state.channel_by_id.entry(msg.channel_id) { + hash_map::Entry::Occupied(mut chan_phase_entry) => { + let channel_phase = chan_phase_entry.get_mut(); + let (msg_send_event_opt, signing_session_opt) = match channel_phase { + ChannelPhase::UnfundedInboundV2(channel) => { + channel.tx_complete(msg).into_msg_send_event(counterparty_node_id) + }, + ChannelPhase::UnfundedOutboundV2(channel) => { + channel.tx_complete(msg).into_msg_send_event(counterparty_node_id) + }, + #[cfg(splicing)] + ChannelPhase::RefundingV2((_pre_chan, post_chans)) => { + match post_chans.tx_complete(msg) { + Ok(ia_res) => ia_res.into_msg_send_event(counterparty_node_id), + Err(err) => { + try_chan_phase_entry!(self, Err(ChannelError::Warn( + format!("Error while tx_complete message: {:?}", err) + )), chan_phase_entry) + } + } + }, + _ => try_chan_phase_entry!(self, Err(ChannelError::Close( + "Got a tx_complete message with no interactive transaction construction expected or in-progress" + .into())), chan_phase_entry) + }; + if let Some(msg_send_event) = msg_send_event_opt { + peer_state.pending_msg_events.push(msg_send_event); + } + if let Some(signing_session) = signing_session_opt { + let funding_txid = signing_session.unsigned_tx.txid(); + let (channel_id, channel_phase) = chan_phase_entry.remove_entry(); + let (res, pre_splice_chan) = match channel_phase { + ChannelPhase::UnfundedOutboundV2(chan) => { + (chan.funding_tx_constructed(counterparty_node_id, signing_session, &self.logger).map_err( + |(chan, err)| { + (ChannelPhase::UnfundedOutboundV2(chan), err) + } + ), None::>) + }, + ChannelPhase::UnfundedInboundV2(chan) => { + (chan.funding_tx_constructed(counterparty_node_id, signing_session, &self.logger).map_err( + |(chan, err)| { + (ChannelPhase::UnfundedInboundV2(chan), err) + } + ), None) + }, + #[cfg(splicing)] + ChannelPhase::RefundingV2((pre_chan, post_chans)) => { + match post_chans.funding_tx_constructed(counterparty_node_id, signing_session, &self.logger) { + Ok(r) => (Ok(r), Some(pre_chan)), + Err((chans, err)) => (Err((ChannelPhase::RefundingV2((pre_chan, chans)), err)), None), + } + }, + _ => (Err((channel_phase, ChannelError::Warn( + "Got a tx_complete message with no interactive transaction construction expected or in-progress" + .into()))), None), + }; + match res { + Ok((mut channel, commitment_signed, funding_ready_for_sig_event_opt)) => { + if let Some(funding_ready_for_sig_event) = funding_ready_for_sig_event_opt { + let mut pending_events = self.pending_events.lock().unwrap(); + pending_events.push_back((funding_ready_for_sig_event, None)); + } + peer_state.pending_msg_events.push(events::MessageSendEvent::UpdateHTLCs { + node_id: counterparty_node_id.clone(), + updates: CommitmentUpdate { + commitment_signed, + update_add_htlcs: vec![], + update_fulfill_htlcs: vec![], + 
update_fail_htlcs: vec![], + update_fail_malformed_htlcs: vec![], + update_fee: None, + }, + }); + channel.set_next_funding_txid(&funding_txid); + if let Some(pre_chan) = pre_splice_chan { + #[cfg(splicing)] + peer_state.channel_by_id.insert(channel_id.clone(), ChannelPhase::RefundingV2((pre_chan, ChannelVariants::new(channel)))); + } else { + peer_state.channel_by_id.insert(channel_id.clone(), ChannelPhase::Funded(channel)); + } + }, + Err((channel_phase, _err)) => { + peer_state.channel_by_id.insert(channel_id, channel_phase); + return Err(MsgHandleErrInternal::send_err_msg_no_close(format!("{}", counterparty_node_id), msg.channel_id)) + }, + } + } + Ok(()) + }, + hash_map::Entry::Vacant(_) => { + Err(MsgHandleErrInternal::send_err_msg_no_close(format!("Got a message for a channel from the wrong node! No such channel for the passed counterparty_node_id {}", counterparty_node_id), msg.channel_id)) + } + } + } + + #[cfg(any(dual_funding, splicing))] + fn internal_tx_signatures(&self, counterparty_node_id: &PublicKey, msg: &msgs::TxSignatures) + -> Result<(), MsgHandleErrInternal> { + let per_peer_state = self.per_peer_state.read().unwrap(); + let peer_state_mutex = per_peer_state.get(counterparty_node_id) + .ok_or_else(|| { + debug_assert!(false); + MsgHandleErrInternal::send_err_msg_no_close( + format!("Can't find a peer matching the passed counterparty node_id {}", counterparty_node_id), + msg.channel_id) + })?; + let mut peer_state_lock = peer_state_mutex.lock().unwrap(); + let peer_state = &mut *peer_state_lock; + match peer_state.channel_by_id.entry(msg.channel_id) { + hash_map::Entry::Occupied(mut chan_phase_entry) => { + let channel_phase = chan_phase_entry.get_mut(); + let chan = match channel_phase { + ChannelPhase::Funded(chan) => chan, + #[cfg(splicing)] + ChannelPhase::RefundingV2((_, chans)) => { + if let Some(funded) = chans.get_funded_channel_mut() { + funded + } else { + try_chan_phase_entry!(self, Err(ChannelError::Close( + "Got tx_signatures while renegotiating but no funded channel" + .into())), chan_phase_entry) + } + }, + _ => try_chan_phase_entry!(self, Err(ChannelError::Close( + "Got tx_signatures message with no funded channel" + .into())), chan_phase_entry) + }; + let (tx_signatures_opt, funding_tx_opt) = try_chan_phase_entry!(self, chan.tx_signatures(&msg, &self.logger), chan_phase_entry); + chan.clear_next_funding_txid(); + if let Some(tx_signatures) = tx_signatures_opt { + peer_state.pending_msg_events.push(events::MessageSendEvent::SendTxSignatures { + node_id: *counterparty_node_id, + msg: tx_signatures, + }); + } + if let Some(ref funding_tx) = funding_tx_opt { + self.tx_broadcaster.broadcast_transactions(&[funding_tx]); + { + let mut pending_events = self.pending_events.lock().unwrap(); + emit_channel_pending_event!(pending_events, chan); + } + } + Ok(()) + }, + hash_map::Entry::Vacant(_) => { + Err(MsgHandleErrInternal::send_err_msg_no_close(format!("Got a message for a channel from the wrong node! 
No such channel for the passed counterparty_node_id {}", counterparty_node_id), msg.channel_id)) + } + } + } + + #[cfg(any(dual_funding, splicing))] + fn internal_tx_init_rbf(&self, counterparty_node_id: &PublicKey, msg: &msgs::TxInitRbf) { + let _: Result<(), _> = handle_error!(self, Err(MsgHandleErrInternal::send_err_msg_no_close( + "Dual-funded channels not supported".to_owned(), + msg.channel_id.clone())), *counterparty_node_id); + } + + #[cfg(any(dual_funding, splicing))] + fn internal_tx_ack_rbf(&self, counterparty_node_id: &PublicKey, msg: &msgs::TxAckRbf) { + let _: Result<(), _> = handle_error!(self, Err(MsgHandleErrInternal::send_err_msg_no_close( + "Dual-funded channels not supported".to_owned(), + msg.channel_id.clone())), *counterparty_node_id); + } + + #[cfg(any(dual_funding, splicing))] + fn internal_tx_abort(&self, counterparty_node_id: &PublicKey, msg: &msgs::TxAbort) { + let _: Result<(), _> = handle_error!(self, Err(MsgHandleErrInternal::send_err_msg_no_close( + "Dual-funded channels not supported".to_owned(), + msg.channel_id.clone())), *counterparty_node_id); + } + fn internal_channel_ready(&self, counterparty_node_id: &PublicKey, msg: &msgs::ChannelReady) -> Result<(), MsgHandleErrInternal> { // Note that the ChannelManager is NOT re-persisted on disk after this (unless we error // closing a channel), so any changes are likely to be lost on restart! @@ -7481,6 +8499,108 @@ where } } + #[cfg(splicing)] + fn internal_splice_locked(&self, counterparty_node_id: &PublicKey, msg: &msgs::SpliceLocked) -> Result<(), MsgHandleErrInternal> { + // Note that the ChannelManager is NOT re-persisted on disk after this (unless we error + // closing a channel), so any changes are likely to be lost on restart! + let per_peer_state = self.per_peer_state.read().unwrap(); + let peer_state_mutex = per_peer_state.get(counterparty_node_id) + .ok_or_else(|| { + debug_assert!(false); + MsgHandleErrInternal::send_err_msg_no_close(format!("Can't find a peer matching the passed counterparty node_id {}", counterparty_node_id), msg.channel_id) + })?; + let mut peer_state_lock = peer_state_mutex.lock().unwrap(); + let peer_state = &mut *peer_state_lock; + match peer_state.channel_by_id.entry(msg.channel_id) { + hash_map::Entry::Occupied(mut chan_phase_entry) => { + match chan_phase_entry.get_mut() { + // Note: no need to cover Funded + ChannelPhase::RefundingV2((_, chans)) => { + if let Some(_funded) = chans.get_funded_channel() { + // OK, noop + } else { + return try_chan_phase_entry!(self, Err(ChannelError::Close( + "Got a splice_locked message while renegotiating, but there is no funded channel!".into())), chan_phase_entry); + } + }, + _ => { + return try_chan_phase_entry!(self, Err(ChannelError::Close( + "Got a splice_locked message for an unfunded channel!".into())), chan_phase_entry); + }, + } + }, + hash_map::Entry::Vacant(_) => { + return Err(MsgHandleErrInternal::send_err_msg_no_close(format!("Got a message for a channel from the wrong node! 
No such channel for the passed counterparty_node_id {}", counterparty_node_id), msg.channel_id)) + } + } + + // Remove + let phase = peer_state.channel_by_id.remove(&msg.channel_id); + let chan = match phase { + // Some(ChannelPhase::Funded(chan)) => chan, + Some(ChannelPhase::RefundingV2((_, mut chans))) => { + if let Some(funded) = chans.take_funded_channel() { + funded + } else { + return Err(MsgHandleErrInternal::send_err_msg_no_close("Internal error TODO".into(), msg.channel_id)); + } + }, + _ => { + return Err(MsgHandleErrInternal::send_err_msg_no_close("Internal error TODO".into(), msg.channel_id)); + }, + }; + // Re-add as Funded + peer_state.channel_by_id.insert(msg.channel_id, ChannelPhase::Funded(chan)); + + match peer_state.channel_by_id.entry(msg.channel_id) { + hash_map::Entry::Occupied(mut chan_phase_entry) => { + match chan_phase_entry.get_mut() { + // Note: no need to cover RefundingV2 + ChannelPhase::Funded(chan) => { + let logger = WithChannelContext::from(&self.logger, &chan.context); + let announcement_sigs_opt = try_chan_phase_entry!(self, chan.splice_locked(&msg, &self.node_signer, + self.chain_hash, &self.default_configuration, &self.best_block.read().unwrap(), &&logger), chan_phase_entry); + if let Some(announcement_sigs) = announcement_sigs_opt { + log_trace!(logger, "Sending announcement_signatures for channel {}", chan.context.channel_id()); + peer_state.pending_msg_events.push(events::MessageSendEvent::SendAnnouncementSignatures { + node_id: counterparty_node_id.clone(), + msg: announcement_sigs, + }); + } else if chan.context.is_usable() { + // If we're sending an announcement_signatures, we'll send the (public) + // channel_update after sending a channel_announcement when we receive our + // counterparty's announcement_signatures. Thus, we only bother to send a + // channel_update here if the channel is not public, i.e. we're not sending an + // announcement_signatures. + log_trace!(logger, "Sending private initial channel_update for our counterparty on channel {}", chan.context.channel_id()); + if let Ok(msg) = self.get_channel_update_for_unicast(chan) { + peer_state.pending_msg_events.push(events::MessageSendEvent::SendChannelUpdate { + node_id: counterparty_node_id.clone(), + msg, + }); + } + } + + { + let mut pending_events = self.pending_events.lock().unwrap(); + // Pass the splice flag explicitly, as splice has jsut been completed by this time point + emit_channel_ready_event_with_splice!(pending_events, chan, true); + } + + Ok(()) + }, + _ => { + try_chan_phase_entry!(self, Err(ChannelError::Close( + "Got a splice_locked message for an unfunded channel!".into())), chan_phase_entry) + }, + } + }, + hash_map::Entry::Vacant(_) => { + Err(MsgHandleErrInternal::send_err_msg_no_close(format!("Got a message for a channel from the wrong node! 
No such channel for the passed counterparty_node_id {}", counterparty_node_id), msg.channel_id)) + } + } + } + fn internal_shutdown(&self, counterparty_node_id: &PublicKey, msg: &msgs::Shutdown) -> Result<(), MsgHandleErrInternal> { let mut dropped_htlcs: Vec<(HTLCSource, PaymentHash)> = Vec::new(); let mut finish_shutdown = None; @@ -7535,10 +8655,13 @@ where #[cfg(any(dual_funding, splicing))] ChannelPhase::UnfundedInboundV2(_) | ChannelPhase::UnfundedOutboundV2(_) => { let context = phase.context_mut(); - log_error!(self.logger, "Immediately closing unfunded channel {} as peer asked to cooperatively shut it down (which is unnecessary)", &msg.channel_id); + let logger = WithChannelContext::from(&self.logger, context); + log_error!(logger, "Immediately closing unfunded channel {} as peer asked to cooperatively shut it down (which is unnecessary)", &msg.channel_id); let mut chan = remove_channel_phase!(self, chan_phase_entry); finish_shutdown = Some(chan.context_mut().force_shutdown(false, ClosureReason::CounterpartyCoopClosedUnfundedChannel)); }, + #[cfg(splicing)] + ChannelPhase::RefundingV2(_) => todo!("splicing"), } } else { return Err(MsgHandleErrInternal::send_err_msg_no_close(format!("Got a message for a channel from the wrong node! No such channel for the passed counterparty_node_id {}", counterparty_node_id), msg.channel_id)) @@ -7637,58 +8760,66 @@ where let peer_state = &mut *peer_state_lock; match peer_state.channel_by_id.entry(msg.channel_id) { hash_map::Entry::Occupied(mut chan_phase_entry) => { - if let ChannelPhase::Funded(chan) = chan_phase_entry.get_mut() { - let mut pending_forward_info = match decoded_hop_res { - Ok((next_hop, shared_secret, next_packet_pk_opt)) => - self.construct_pending_htlc_status( - msg, counterparty_node_id, shared_secret, next_hop, - chan.context.config().accept_underpaying_htlcs, next_packet_pk_opt, - ), - Err(e) => PendingHTLCStatus::Fail(e) - }; - let logger = WithChannelContext::from(&self.logger, &chan.context); - // If the update_add is completely bogus, the call will Err and we will close, - // but if we've sent a shutdown and they haven't acknowledged it yet, we just - // want to reject the new HTLC and fail it backwards instead of forwarding. - if let Err((_, error_code)) = chan.can_accept_incoming_htlc(&msg, &self.fee_estimator, &logger) { - if msg.blinding_point.is_some() { - pending_forward_info = PendingHTLCStatus::Fail(HTLCFailureMsg::Malformed( - msgs::UpdateFailMalformedHTLC { + let chan = match chan_phase_entry.get_mut() { + ChannelPhase::Funded(chan) => { chan }, + // Both pre and post exist + // TODO(splicing): handle on both + #[cfg(splicing)] + ChannelPhase::RefundingV2((pre_chan, _post_chans)) => { + pre_chan + }, + _ => { + return try_chan_phase_entry!(self, Err(ChannelError::Close( + "Got an update_add_htlc message for an unfunded channel!".into())), chan_phase_entry); + } + }; + let mut pending_forward_info = match decoded_hop_res { + Ok((next_hop, shared_secret, next_packet_pk_opt)) => + self.construct_pending_htlc_status( + msg, counterparty_node_id, shared_secret, next_hop, + chan.context.config().accept_underpaying_htlcs, next_packet_pk_opt, + ), + Err(e) => PendingHTLCStatus::Fail(e) + }; + let logger = WithChannelContext::from(&self.logger, &chan.context); + // If the update_add is completely bogus, the call will Err and we will close, + // but if we've sent a shutdown and they haven't acknowledged it yet, we just + // want to reject the new HTLC and fail it backwards instead of forwarding. 
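+ // For HTLCs received over a blinded path we must not reveal the concrete failure, so we
+ // fail back with an update_fail_malformed_htlc carrying INVALID_ONION_BLINDING instead.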
+ if let Err((_, error_code)) = chan.can_accept_incoming_htlc(&msg, &self.fee_estimator, &logger) { + if msg.blinding_point.is_some() { + pending_forward_info = PendingHTLCStatus::Fail(HTLCFailureMsg::Malformed( + msgs::UpdateFailMalformedHTLC { + channel_id: msg.channel_id, + htlc_id: msg.htlc_id, + sha256_of_onion: [0; 32], + failure_code: INVALID_ONION_BLINDING, + } + )) + } else { + match pending_forward_info { + PendingHTLCStatus::Forward(PendingHTLCInfo { + ref incoming_shared_secret, ref routing, .. + }) => { + let reason = if routing.blinded_failure().is_some() { + HTLCFailReason::reason(INVALID_ONION_BLINDING, vec![0; 32]) + } else if (error_code & 0x1000) != 0 { + let (real_code, error_data) = self.get_htlc_inbound_temp_fail_err_and_data(error_code, chan); + HTLCFailReason::reason(real_code, error_data) + } else { + HTLCFailReason::from_failure_code(error_code) + }.get_encrypted_failure_packet(incoming_shared_secret, &None); + let msg = msgs::UpdateFailHTLC { channel_id: msg.channel_id, htlc_id: msg.htlc_id, - sha256_of_onion: [0; 32], - failure_code: INVALID_ONION_BLINDING, - } - )) - } else { - match pending_forward_info { - PendingHTLCStatus::Forward(PendingHTLCInfo { - ref incoming_shared_secret, ref routing, .. - }) => { - let reason = if routing.blinded_failure().is_some() { - HTLCFailReason::reason(INVALID_ONION_BLINDING, vec![0; 32]) - } else if (error_code & 0x1000) != 0 { - let (real_code, error_data) = self.get_htlc_inbound_temp_fail_err_and_data(error_code, chan); - HTLCFailReason::reason(real_code, error_data) - } else { - HTLCFailReason::from_failure_code(error_code) - }.get_encrypted_failure_packet(incoming_shared_secret, &None); - let msg = msgs::UpdateFailHTLC { - channel_id: msg.channel_id, - htlc_id: msg.htlc_id, - reason - }; - pending_forward_info = PendingHTLCStatus::Fail(HTLCFailureMsg::Relay(msg)); - }, - _ => {}, - } + reason + }; + pending_forward_info = PendingHTLCStatus::Fail(HTLCFailureMsg::Relay(msg)); + }, + _ => {}, } } - try_chan_phase_entry!(self, chan.update_add_htlc(&msg, pending_forward_info), chan_phase_entry); - } else { - return try_chan_phase_entry!(self, Err(ChannelError::Close( - "Got an update_add_htlc message for an unfunded channel!".into())), chan_phase_entry); } + try_chan_phase_entry!(self, chan.update_add_htlc(&msg, pending_forward_info, &self.fee_estimator), chan_phase_entry); }, hash_map::Entry::Vacant(_) => return Err(MsgHandleErrInternal::send_err_msg_no_close(format!("Got a message for a channel from the wrong node! 
No such channel for the passed counterparty_node_id {}", counterparty_node_id), msg.channel_id)) } @@ -7800,6 +8931,8 @@ where } fn internal_commitment_signed(&self, counterparty_node_id: &PublicKey, msg: &msgs::CommitmentSigned) -> Result<(), MsgHandleErrInternal> { + #[cfg(any(dual_funding, splicing))] + let best_block = *self.best_block.read().unwrap(); let per_peer_state = self.per_peer_state.read().unwrap(); let peer_state_mutex = per_peer_state.get(counterparty_node_id) .ok_or_else(|| { @@ -7810,21 +8943,90 @@ where let peer_state = &mut *peer_state_lock; match peer_state.channel_by_id.entry(msg.channel_id) { hash_map::Entry::Occupied(mut chan_phase_entry) => { - if let ChannelPhase::Funded(chan) = chan_phase_entry.get_mut() { - let logger = WithChannelContext::from(&self.logger, &chan.context); - let funding_txo = chan.context.get_funding_txo(); - let monitor_update_opt = try_chan_phase_entry!(self, chan.commitment_signed(&msg, &&logger), chan_phase_entry); - if let Some(monitor_update) = monitor_update_opt { - handle_new_monitor_update!(self, funding_txo.unwrap(), monitor_update, peer_state_lock, - peer_state, per_peer_state, chan); - } - Ok(()) - } else { - return try_chan_phase_entry!(self, Err(ChannelError::Close( - "Got a commitment_signed message for an unfunded channel!".into())), chan_phase_entry); + match chan_phase_entry.get_mut() { + ChannelPhase::Funded(chan) => { + let logger = WithChannelContext::from(&self.logger, &chan.context); + let funding_txo = chan.context.get_funding_txo(); + + #[cfg(any(dual_funding, splicing))] + let interactive_tx_signing_in_progress = chan.interactive_tx_signing_session.is_some(); + #[cfg(not(any(dual_funding, splicing)))] + let interactive_tx_signing_in_progress = false; + + if interactive_tx_signing_in_progress { + #[cfg(any(dual_funding, splicing))] + { + let monitor = try_chan_phase_entry!(self, + chan.commitment_signed_initial_v2(&msg, best_block, &self.signer_provider, &&logger), chan_phase_entry); + if let Ok(persist_status) = self.chain_monitor.watch_channel(chan.context.get_funding_txo().unwrap(), monitor) { + if let Some(tx_signatures) = chan.interactive_tx_signing_session.as_mut().map(|session| session.received_commitment_signed(msg.clone())).flatten() { + // At this point we have received a commitment signed and we are watching the channel so + // we're good to send our tx_signatures if we're up first. 
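+ // (received_commitment_signed only hands us a tx_signatures message when the
+ // interactive signing session determines that we should send ours first.)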
+ peer_state.pending_msg_events.push(events::MessageSendEvent::SendTxSignatures { + node_id: *counterparty_node_id, + msg: tx_signatures, + }); + } + handle_new_monitor_update!(self, persist_status, peer_state_lock, peer_state, per_peer_state, chan, INITIAL_MONITOR); + } else { + try_chan_phase_entry!(self, Err(ChannelError::Close("Channel funding outpoint was a duplicate".to_owned())), chan_phase_entry) + } + } + } else { + let monitor_update_opt = try_chan_phase_entry!(self, chan.commitment_signed(&msg, &&logger), chan_phase_entry); + if let Some(monitor_update) = monitor_update_opt { + handle_new_monitor_update!(self, funding_txo.unwrap(), monitor_update, peer_state_lock, + peer_state, per_peer_state, chan); + } + } + Ok(()) + }, + // Both pre and post exist + #[cfg(splicing)] + ChannelPhase::RefundingV2((pre_chan, post_chans)) => { + // Try handling on post + match post_chans.commitment_signed_initial_v2(&msg, best_block, &self.signer_provider, &self.logger) { + Ok(res) => { + if let Some((post_chan, monitor)) = res { + let post_funding_txo = post_chan.context.get_funding_txo().unwrap(); + if let Ok(persist_status) = self.chain_monitor.watch_channel(post_funding_txo, monitor) { + if let Some(tx_signatures) = post_chan.interactive_tx_signing_session.as_mut().map(|session| session.received_commitment_signed(msg.clone())).flatten() { + // At this point we have received a commitment signed and we are watching the channel so + // we're good to send our tx_signatures if we're up first. + peer_state.pending_msg_events.push(events::MessageSendEvent::SendTxSignatures { + node_id: *counterparty_node_id, + msg: tx_signatures, + }); + } + handle_new_monitor_update!(self, persist_status, peer_state_lock, peer_state, per_peer_state, post_chan, INITIAL_MONITOR); + } else { + try_chan_phase_entry!(self, Err(ChannelError::Close("Channel funding outpoint was a duplicate".to_owned())), chan_phase_entry) + } + Ok(()) + } else { + // Handle on pre + let logger = WithContext::from(&self.logger, None, Some(msg.channel_id)); + log_trace!(logger, "Handling commitment_signed on pre-splice channel"); + + let monitor_update_opt = try_chan_phase_entry!(self, pre_chan.commitment_signed(&msg, &&logger), chan_phase_entry); + if let Some(monitor_update) = monitor_update_opt { + let pre_funding_txo = pre_chan.context.get_funding_txo().unwrap(); + handle_new_monitor_update!(self, pre_funding_txo, monitor_update, peer_state_lock, peer_state, per_peer_state, pre_chan); + } + + // TODO(splicing): Handle on post, as not initial + + Ok(()) + } + } + Err(err) => try_chan_phase_entry!(self, Err(err), chan_phase_entry), + } + }, + _ => try_chan_phase_entry!(self, Err(ChannelError::Close( + "Got a commitment_signed message for an unfunded channel!".into())), chan_phase_entry), } }, - hash_map::Entry::Vacant(_) => return Err(MsgHandleErrInternal::send_err_msg_no_close(format!("Got a message for a channel from the wrong node! No such channel for the passed counterparty_node_id {}", counterparty_node_id), msg.channel_id)) + hash_map::Entry::Vacant(_) => Err(MsgHandleErrInternal::send_err_msg_no_close(format!("Got a message for a channel from the wrong node! 
No such channel for the passed counterparty_node_id {}", counterparty_node_id), msg.channel_id)) } } @@ -7998,27 +9200,36 @@ where let peer_state = &mut *peer_state_lock; match peer_state.channel_by_id.entry(msg.channel_id) { hash_map::Entry::Occupied(mut chan_phase_entry) => { - if let ChannelPhase::Funded(chan) = chan_phase_entry.get_mut() { - let logger = WithChannelContext::from(&self.logger, &chan.context); - let funding_txo_opt = chan.context.get_funding_txo(); - let mon_update_blocked = if let Some(funding_txo) = funding_txo_opt { - self.raa_monitor_updates_held( - &peer_state.actions_blocking_raa_monitor_updates, funding_txo, msg.channel_id, - *counterparty_node_id) - } else { false }; - let (htlcs_to_fail, monitor_update_opt) = try_chan_phase_entry!(self, - chan.revoke_and_ack(&msg, &self.fee_estimator, &&logger, mon_update_blocked), chan_phase_entry); - if let Some(monitor_update) = monitor_update_opt { - let funding_txo = funding_txo_opt - .expect("Funding outpoint must have been set for RAA handling to succeed"); - handle_new_monitor_update!(self, funding_txo, monitor_update, - peer_state_lock, peer_state, per_peer_state, chan); + let chan = match chan_phase_entry.get_mut() { + ChannelPhase::Funded(chan) => { chan } + // Both post and pre exist + // TODO(splicing) handle on both pre and post + #[cfg(splicing)] + ChannelPhase::RefundingV2((pre_chan, _post_chans)) => { + pre_chan } - htlcs_to_fail - } else { - return try_chan_phase_entry!(self, Err(ChannelError::Close( - "Got a revoke_and_ack message for an unfunded channel!".into())), chan_phase_entry); + _ => { + return try_chan_phase_entry!(self, Err(ChannelError::Close( + "Got a revoke_and_ack message for an unfunded channel!".into())), chan_phase_entry); + + } + }; + let logger = WithChannelContext::from(&self.logger, &chan.context); + let funding_txo_opt = chan.context.get_funding_txo(); + let mon_update_blocked = if let Some(funding_txo) = funding_txo_opt { + self.raa_monitor_updates_held( + &peer_state.actions_blocking_raa_monitor_updates, funding_txo, msg.channel_id, + *counterparty_node_id) + } else { false }; + let (htlcs_to_fail, monitor_update_opt) = try_chan_phase_entry!(self, + chan.revoke_and_ack(&msg, &self.fee_estimator, &&logger, mon_update_blocked), chan_phase_entry); + if let Some(monitor_update) = monitor_update_opt { + let funding_txo = funding_txo_opt + .expect("Funding outpoint must have been set for RAA handling to succeed"); + handle_new_monitor_update!(self, funding_txo, monitor_update, + peer_state_lock, peer_state, per_peer_state, chan); } + htlcs_to_fail }, hash_map::Entry::Vacant(_) => return Err(MsgHandleErrInternal::send_err_msg_no_close(format!("Got a message for a channel from the wrong node! No such channel for the passed counterparty_node_id {}", counterparty_node_id), msg.channel_id)) } @@ -8189,52 +9400,265 @@ where if let Some(upd) = channel_update { peer_state.pending_msg_events.push(upd); } - need_lnd_workaround + need_lnd_workaround + } else { + return try_chan_phase_entry!(self, Err(ChannelError::Close( + "Got a channel_reestablish message for an unfunded channel!".into())), chan_phase_entry); + } + }, + hash_map::Entry::Vacant(_) => { + log_debug!(logger, "Sending bogus ChannelReestablish for unknown channel {} to force channel closure", + msg.channel_id); + // Unfortunately, lnd doesn't force close on errors + // (https://github.com/lightningnetwork/lnd/blob/abb1e3463f3a83bbb843d5c399869dbe930ad94f/htlcswitch/link.go#L2119). 
+ // One of the few ways to get an lnd counterparty to force close is by + // replicating what they do when restoring static channel backups (SCBs). They + // send an invalid `ChannelReestablish` with `0` commitment numbers and an + // invalid `your_last_per_commitment_secret`. + // + // Since we received a `ChannelReestablish` for a channel that doesn't exist, we + // can assume it's likely the channel closed from our point of view, but it + // remains open on the counterparty's side. By sending this bogus + // `ChannelReestablish` message now as a response to theirs, we trigger them to + // force close broadcasting their latest state. If the closing transaction from + // our point of view remains unconfirmed, it'll enter a race with the + // counterparty's to-be-broadcast latest commitment transaction. + peer_state.pending_msg_events.push(MessageSendEvent::SendChannelReestablish { + node_id: *counterparty_node_id, + msg: msgs::ChannelReestablish { + channel_id: msg.channel_id, + next_local_commitment_number: 0, + next_remote_commitment_number: 0, + your_last_per_commitment_secret: [1u8; 32], + my_current_per_commitment_point: PublicKey::from_slice(&[2u8; 33]).unwrap(), + next_funding_txid: None, + }, + }); + return Err(MsgHandleErrInternal::send_err_msg_no_close( + format!("Got a message for a channel from the wrong node! No such channel for the passed counterparty_node_id {}", + counterparty_node_id), msg.channel_id) + ) + } + } + }; + + if let Some(channel_ready_msg) = need_lnd_workaround { + self.internal_channel_ready(counterparty_node_id, &channel_ready_msg)?; + } + Ok(NotifyOption::SkipPersistHandleEvents) + } + + // #SPLICING STEP3 A + // Inspired by handle_open_channel() + // Logic for incoming splicing request + #[cfg(splicing)] + fn internal_splice_init(&self, counterparty_node_id: &PublicKey, msg: &msgs::SpliceInit) -> Result<(), MsgHandleErrInternal> { + // TODO checks + // TODO check if we accept splicing, quiscence + + let per_peer_state = self.per_peer_state.read().unwrap(); + let peer_state_mutex = per_peer_state.get(counterparty_node_id) + .ok_or_else(|| { + debug_assert!(false); + MsgHandleErrInternal::send_err_msg_no_close(format!("Can't find a peer matching the passed counterparty node_id {}", counterparty_node_id), msg.channel_id.clone()) + })?; + let mut peer_state_lock = peer_state_mutex.lock().unwrap(); + let peer_state = &mut *peer_state_lock; + + let our_funding_contribution = 0i64; + + // Look for channel + match peer_state.channel_by_id.entry(msg.channel_id) { + hash_map::Entry::Vacant(_) => return Err(MsgHandleErrInternal::send_err_msg_no_close(format!("Got a message for a channel from the wrong node! No such channel for the passed counterparty_node_id {}, channel_id {}", counterparty_node_id, msg.channel_id), msg.channel_id)), + hash_map::Entry::Occupied(chan_entry) => { + if let ChannelPhase::Funded(chan) = chan_entry.get() { + let pre_channel_value = chan.context.get_value_satoshis(); + // TODO check for i64 overflow + if msg.funding_contribution_satoshis < 0 && -msg.funding_contribution_satoshis > (pre_channel_value as i64) { + return Err(MsgHandleErrInternal::send_err_msg_no_close(format!("Post-splicing channel value cannot be negative. 
It was {} - {}", pre_channel_value, -msg.funding_contribution_satoshis), msg.channel_id)); + } + + if msg.funding_contribution_satoshis < 0 { + return Err(MsgHandleErrInternal::send_err_msg_no_close(format!("Splice-out not supported, only splice in, relative {}", -msg.funding_contribution_satoshis), msg.channel_id)); + } + + let post_channel_value = SplicingChannelValues::compute_post_value(pre_channel_value, msg.funding_contribution_satoshis, our_funding_contribution); + if post_channel_value < 1000 { + return Err(MsgHandleErrInternal::send_err_msg_no_close(format!("Post-splicing channel value must be at least 1000 satoshis. It was {}", post_channel_value), msg.channel_id)); + } + + // Check if a splice has been initiated already. Note: this could be handled more nicely, and support multiple outstanding splice's, the incoming splice_ack matters anyways + if chan.context.pending_splice_pre.is_some() { + return Err(MsgHandleErrInternal::send_err_msg_no_close(format!("Channel has already a splice pending, channel id {}", msg.channel_id), msg.channel_id)); + } + + // Check channel id + let post_splice_v2_channel_id = chan.context.generate_v2_channel_id_from_revocation_basepoints(); + if post_splice_v2_channel_id != chan.context.channel_id() { + return Err(MsgHandleErrInternal::send_err_msg_no_close(format!("Channel ID would change during splicing (e.g. splice on V1 channel), not yet supported, channel id {} {}", + chan.context.channel_id(), post_splice_v2_channel_id), msg.channel_id)); + } + } else { + return Err(MsgHandleErrInternal::send_err_msg_no_close("Channel in wrong state".to_owned(), msg.channel_id.clone())); + } + }, + }; + + // Change channel, phase changes, remove and add + // Remove the pre channel + // TODO should be removed only if channel id does not change? + let prev_chan = match peer_state.channel_by_id.remove(&msg.channel_id) { + None => return Err(MsgHandleErrInternal::send_err_msg_no_close(format!("Got a message for a channel from the wrong node! 
No such channel for the passed counterparty_node_id {}, channel_id {}", counterparty_node_id, msg.channel_id), msg.channel_id)), + Some(chan_phase) => { + if let ChannelPhase::Funded(chan) = chan_phase { + chan + } else { + return Err(MsgHandleErrInternal::send_err_msg_no_close("Channel in wrong state".to_owned(), msg.channel_id.clone())); + } + } + }; + + let post_chan = V2Channel::new_spliced( + false, + &prev_chan.context, + &self.signer_provider, + &msg.funding_pubkey, + our_funding_contribution, + msg.funding_contribution_satoshis, + Vec::new(), + LockTime::from_consensus(msg.locktime), + msg.funding_feerate_perkw, + &self.logger, + ).map_err(|e| MsgHandleErrInternal::from_chan_no_close(e, msg.channel_id))?; + + // Add the modified channel + let post_chan_id = post_chan.context().channel_id(); + peer_state.channel_by_id.insert(post_chan_id, ChannelPhase::RefundingV2( + (prev_chan, ChannelVariants::new_with_pending(post_chan)) + )); + + // Perform state changes + match peer_state.channel_by_id.entry(post_chan_id) { + hash_map::Entry::Vacant(_) => return Err(MsgHandleErrInternal::send_err_msg_no_close("Internal consistency error".to_string(), post_chan_id)), + hash_map::Entry::Occupied(mut chan_entry) => { + if let ChannelPhase::RefundingV2((_pre_chan, post_chans)) = chan_entry.get_mut() { + match post_chans.splice_init(our_funding_contribution, &self.signer_provider, &self.entropy_source, self.get_our_node_id(), &self.logger) { + Ok(splice_ack_msg) => { + peer_state.pending_msg_events.push(events::MessageSendEvent::SendSpliceAck { + node_id: *counterparty_node_id, + msg: splice_ack_msg, + }); + Ok(()) + }, + Err(err) => { + Err(MsgHandleErrInternal::from_chan_no_close(err, post_chan_id)) + } + } + } else { + Err(MsgHandleErrInternal::send_err_msg_no_close("Internal consistency error: splice_init while not renegotiating".to_string(), post_chan_id)) + } + } + } + } + + // #SPLICING STEP5 I + // Logic for incoming splicing_ack message + #[cfg(splicing)] + fn internal_splice_ack(&self, counterparty_node_id: &PublicKey, msg: &msgs::SpliceAck) -> Result<(), MsgHandleErrInternal> { + // TODO checks + // TODO check if we have initiated splicing + + // Note that the ChannelManager is NOT re-persisted on disk after this, so any changes are + // likely to be lost on restart! + let per_peer_state = self.per_peer_state.read().unwrap(); + let peer_state_mutex = per_peer_state.get(counterparty_node_id) + .ok_or_else(|| { + debug_assert!(false); + MsgHandleErrInternal::send_err_msg_no_close(format!("Can't find a peer matching the passed counterparty node_id {}", counterparty_node_id), msg.channel_id.clone()) + })?; + let mut peer_state_lock = peer_state_mutex.lock().unwrap(); + let peer_state = &mut *peer_state_lock; + + // Look for channel + let pending_splice = match peer_state.channel_by_id.entry(msg.channel_id) { + hash_map::Entry::Vacant(_) => return Err(MsgHandleErrInternal::send_err_msg_no_close(format!("Got a message for a channel from the wrong node! 
No such channel for the passed counterparty_node_id {}", counterparty_node_id), msg.channel_id)), + hash_map::Entry::Occupied(chan) => { + if let ChannelPhase::Funded(chan) = chan.get() { + // check if splice is pending + if let Some(pending_splice) = &chan.context.pending_splice_pre { + // Note: this is incomplete (their funding contribution is not set) + pending_splice.clone() } else { - return try_chan_phase_entry!(self, Err(ChannelError::Close( - "Got a channel_reestablish message for an unfunded channel!".into())), chan_phase_entry); + return Err(MsgHandleErrInternal::send_err_msg_no_close("Channel is not in pending splice".to_owned(), msg.channel_id.clone())); } - }, - hash_map::Entry::Vacant(_) => { - log_debug!(logger, "Sending bogus ChannelReestablish for unknown channel {} to force channel closure", - msg.channel_id); - // Unfortunately, lnd doesn't force close on errors - // (https://github.com/lightningnetwork/lnd/blob/abb1e3463f3a83bbb843d5c399869dbe930ad94f/htlcswitch/link.go#L2119). - // One of the few ways to get an lnd counterparty to force close is by - // replicating what they do when restoring static channel backups (SCBs). They - // send an invalid `ChannelReestablish` with `0` commitment numbers and an - // invalid `your_last_per_commitment_secret`. - // - // Since we received a `ChannelReestablish` for a channel that doesn't exist, we - // can assume it's likely the channel closed from our point of view, but it - // remains open on the counterparty's side. By sending this bogus - // `ChannelReestablish` message now as a response to theirs, we trigger them to - // force close broadcasting their latest state. If the closing transaction from - // our point of view remains unconfirmed, it'll enter a race with the - // counterparty's to-be-broadcast latest commitment transaction. - peer_state.pending_msg_events.push(MessageSendEvent::SendChannelReestablish { - node_id: *counterparty_node_id, - msg: msgs::ChannelReestablish { - channel_id: msg.channel_id, - next_local_commitment_number: 0, - next_remote_commitment_number: 0, - your_last_per_commitment_secret: [1u8; 32], - my_current_per_commitment_point: PublicKey::from_slice(&[2u8; 33]).unwrap(), - next_funding_txid: None, - }, - }); - return Err(MsgHandleErrInternal::send_err_msg_no_close( - format!("Got a message for a channel from the wrong node! No such channel for the passed counterparty_node_id {}", - counterparty_node_id), msg.channel_id) - ) + } else { + return Err(MsgHandleErrInternal::send_err_msg_no_close("Channel in wrong state".to_owned(), msg.channel_id.clone())); + } + }, + }; + + // Change channel, phase changes, remove and add + // Remove the pre channel + // TODO should be removed only if channel id does not change? + let prev_chan = match peer_state.channel_by_id.remove(&msg.channel_id) { + None => return Err(MsgHandleErrInternal::send_err_msg_no_close(format!("Got a message for a channel from the wrong node! 
No such channel for the passed counterparty_node_id {}, channel_id {}", counterparty_node_id, msg.channel_id), msg.channel_id)), + Some(chan_phase) => { + if let ChannelPhase::Funded(chan) = chan_phase { + chan + } else { + return Err(MsgHandleErrInternal::send_err_msg_no_close("Channel in wrong state".to_owned(), msg.channel_id.clone())); } } }; - if let Some(channel_ready_msg) = need_lnd_workaround { - self.internal_channel_ready(counterparty_node_id, &channel_ready_msg)?; + let post_chan = V2Channel::new_spliced( + true, + &prev_chan.context, + &self.signer_provider, + &msg.funding_pubkey, + pending_splice.our_funding_contribution(), + msg.funding_contribution_satoshis, + pending_splice.our_funding_inputs, + LockTime::from_consensus(pending_splice.locktime), + pending_splice.funding_feerate_perkw, + &self.logger, + ).map_err(|e| MsgHandleErrInternal::from_chan_no_close(e, msg.channel_id))?; + + // Add the modified channel + let post_chan_id = post_chan.context().channel_id(); + peer_state.channel_by_id.insert(post_chan_id, ChannelPhase::RefundingV2( + (prev_chan, ChannelVariants::new_with_pending(post_chan)) + )); + + // Perform state changes + match peer_state.channel_by_id.entry(post_chan_id) { + hash_map::Entry::Vacant(_) => return Err(MsgHandleErrInternal::send_err_msg_no_close("Internal consistency error".to_string(), post_chan_id)), + hash_map::Entry::Occupied(mut chan_entry) => { + if let ChannelPhase::RefundingV2((_pre_chan, post_chans)) = chan_entry.get_mut() { + match post_chans.splice_ack(&self.signer_provider, &self.entropy_source, self.get_our_node_id(), &self.logger) { + Ok(tx_msg_opt) => { + if let Some(tx_msg) = tx_msg_opt { + let msg_send_event = match tx_msg { + InteractiveTxMessageSend::TxAddInput(msg) => events::MessageSendEvent::SendTxAddInput { + node_id: *counterparty_node_id, msg }, + InteractiveTxMessageSend::TxAddOutput(msg) => events::MessageSendEvent::SendTxAddOutput { + node_id: *counterparty_node_id, msg }, + InteractiveTxMessageSend::TxComplete(msg) => events::MessageSendEvent::SendTxComplete { + node_id: *counterparty_node_id, msg }, + }; + peer_state.pending_msg_events.push(msg_send_event); + } + Ok(()) + }, + Err(err) => { + Err(MsgHandleErrInternal::from_chan_no_close(err, post_chan_id)) + }, + } + } else { + Err(MsgHandleErrInternal::send_err_msg_no_close("Internal consistency error: splice_ack while not renegotiating".to_string(), post_chan_id)) + } + } } - Ok(NotifyOption::SkipPersistHandleEvents) } /// Process pending events from the [`chain::Watch`], returning whether any events were processed. 
@@ -8415,6 +9839,10 @@ where } } ChannelPhase::UnfundedInboundV1(_) => {}, + #[cfg(any(dual_funding, splicing))] + ChannelPhase::UnfundedOutboundV2(_chan) => { + todo!("dual_funding"); + } } }; @@ -9374,7 +10802,7 @@ where PersistenceNotifierGuard::optionally_notify_skipping_background_events( self, || -> NotifyOption { NotifyOption::DoPersist }); self.do_chain_event(Some(height), |channel| channel.transactions_confirmed(&block_hash, height, txdata, self.chain_hash, &self.node_signer, &self.default_configuration, &&WithChannelContext::from(&self.logger, &channel.context)) - .map(|(a, b)| (a, Vec::new(), b))); + .map(|(a, b, c)| (a, b, Vec::new(), c))); let last_best_block_height = self.best_block.read().unwrap().height; if height < last_best_block_height { @@ -9445,9 +10873,9 @@ where self.do_chain_event(None, |channel| { if let Some(funding_txo) = channel.context.get_funding_txo() { if funding_txo.txid == *txid { - channel.funding_transaction_unconfirmed(&&WithChannelContext::from(&self.logger, &channel.context)).map(|()| (None, Vec::new(), None)) - } else { Ok((None, Vec::new(), None)) } - } else { Ok((None, Vec::new(), None)) } + channel.funding_transaction_unconfirmed(&&WithChannelContext::from(&self.logger, &channel.context)).map(|()| (None, None, Vec::new(), None)) + } else { Ok((None, None, Vec::new(), None)) } + } else { Ok((None, None, Vec::new(), None)) } }); } } @@ -9466,7 +10894,7 @@ where /// Calls a function which handles an on-chain event (blocks dis/connected, transactions /// un/confirmed, etc) on each channel, handling any resulting errors or messages generated by /// the function. - fn do_chain_event) -> Result<(Option, Vec<(HTLCSource, PaymentHash)>, Option), ClosureReason>> + fn do_chain_event) -> Result<(Option, Option, Vec<(HTLCSource, PaymentHash)>, Option), ClosureReason>> (&self, height_opt: Option, f: FN) { // Note that we MUST NOT end up calling methods on self.chain_monitor here - we're called // during initialization prior to the chain_monitor being fully configured in some cases. @@ -9482,99 +10910,127 @@ where let pending_msg_events = &mut peer_state.pending_msg_events; peer_state.channel_by_id.retain(|_, phase| { - match phase { + let channel = match phase { // Retain unfunded channels. - ChannelPhase::UnfundedOutboundV1(_) | ChannelPhase::UnfundedInboundV1(_) => true, + ChannelPhase::UnfundedOutboundV1(_) | ChannelPhase::UnfundedInboundV1(_) => { return true }, // TODO(dual_funding): Combine this match arm with above. #[cfg(any(dual_funding, splicing))] - ChannelPhase::UnfundedOutboundV2(_) | ChannelPhase::UnfundedInboundV2(_) => true, - ChannelPhase::Funded(channel) => { - let res = f(channel); - if let Ok((channel_ready_opt, mut timed_out_pending_htlcs, announcement_sigs)) = res { - for (source, payment_hash) in timed_out_pending_htlcs.drain(..) 
{ - let (failure_code, data) = self.get_htlc_inbound_temp_fail_err_and_data(0x1000|14 /* expiry_too_soon */, &channel); - timed_out_htlcs.push((source, payment_hash, HTLCFailReason::reason(failure_code, data), - HTLCDestination::NextHopChannel { node_id: Some(channel.context.get_counterparty_node_id()), channel_id: channel.context.channel_id() })); - } - let logger = WithChannelContext::from(&self.logger, &channel.context); - if let Some(channel_ready) = channel_ready_opt { - send_channel_ready!(self, pending_msg_events, channel, channel_ready); - if channel.context.is_usable() { - log_trace!(logger, "Sending channel_ready with private initial channel_update for our counterparty on channel {}", channel.context.channel_id()); - if let Ok(msg) = self.get_channel_update_for_unicast(channel) { - pending_msg_events.push(events::MessageSendEvent::SendChannelUpdate { - node_id: channel.context.get_counterparty_node_id(), - msg, - }); - } - } else { - log_trace!(logger, "Sending channel_ready WITHOUT channel_update for {}", channel.context.channel_id()); - } - } - - { - let mut pending_events = self.pending_events.lock().unwrap(); - emit_channel_ready_event!(pending_events, channel); - } - - if let Some(announcement_sigs) = announcement_sigs { - log_trace!(logger, "Sending announcement_signatures for channel {}", channel.context.channel_id()); - pending_msg_events.push(events::MessageSendEvent::SendAnnouncementSignatures { + ChannelPhase::UnfundedOutboundV2(_) | ChannelPhase::UnfundedInboundV2(_) => { return true }, + ChannelPhase::Funded(chan) => { chan } + // Both post and pre exist + #[cfg(splicing)] + ChannelPhase::RefundingV2((_, post_chans)) => { + if let Some(funded) = post_chans.get_funded_channel_mut() { + funded + } else { + // no funded + todo!("splicing"); + } + }, + }; + let res = f(channel); + if let Ok((channel_ready_opt, splice_locked_opt, mut timed_out_pending_htlcs, announcement_sigs)) = res { + for (source, payment_hash) in timed_out_pending_htlcs.drain(..) { + let (failure_code, data) = self.get_htlc_inbound_temp_fail_err_and_data(0x1000|14 /* expiry_too_soon */, &channel); + timed_out_htlcs.push((source, payment_hash, HTLCFailReason::reason(failure_code, data), + HTLCDestination::NextHopChannel { node_id: Some(channel.context.get_counterparty_node_id()), channel_id: channel.context.channel_id() })); + } + let logger = WithChannelContext::from(&self.logger, &channel.context); + if let Some(channel_ready) = channel_ready_opt { + send_channel_ready!(self, pending_msg_events, channel, channel_ready); + if channel.context.is_usable() { + log_trace!(logger, "Sending channel_ready with private initial channel_update for our counterparty on channel {}", channel.context.channel_id()); + if let Ok(msg) = self.get_channel_update_for_unicast(channel) { + pending_msg_events.push(events::MessageSendEvent::SendChannelUpdate { node_id: channel.context.get_counterparty_node_id(), - msg: announcement_sigs, + msg, }); - if let Some(height) = height_opt { - if let Some(announcement) = channel.get_signed_channel_announcement(&self.node_signer, self.chain_hash, height, &self.default_configuration) { - pending_msg_events.push(events::MessageSendEvent::BroadcastChannelAnnouncement { - msg: announcement, - // Note that announcement_signatures fails if the channel cannot be announced, - // so get_channel_update_for_broadcast will never fail by the time we get here. 
- update_msg: Some(self.get_channel_update_for_broadcast(channel).unwrap()), - }); - } - } } - if channel.is_our_channel_ready() { - if let Some(real_scid) = channel.context.get_short_channel_id() { - // If we sent a 0conf channel_ready, and now have an SCID, we add it - // to the short_to_chan_info map here. Note that we check whether we - // can relay using the real SCID at relay-time (i.e. - // enforce option_scid_alias then), and if the funding tx is ever - // un-confirmed we force-close the channel, ensuring short_to_chan_info - // is always consistent. - let mut short_to_chan_info = self.short_to_chan_info.write().unwrap(); - let scid_insert = short_to_chan_info.insert(real_scid, (channel.context.get_counterparty_node_id(), channel.context.channel_id())); - assert!(scid_insert.is_none() || scid_insert.unwrap() == (channel.context.get_counterparty_node_id(), channel.context.channel_id()), - "SCIDs should never collide - ensure you weren't behind by a full {} blocks when creating channels", - fake_scid::MAX_SCID_BLOCKS_FROM_NOW); + } else { + log_trace!(logger, "Sending channel_ready WITHOUT channel_update for {}", channel.context.channel_id()); + } + } + #[cfg(not(splicing))] + debug_assert!(splice_locked_opt.is_none()); + #[cfg(splicing)] + { + if let Some(splice_locked) = splice_locked_opt { + send_splice_locked!(self, pending_msg_events, channel, splice_locked); + if channel.context.is_usable() { + log_trace!(logger, "Sending splice_locked with private initial channel_update for our counterparty on channel {}", channel.context.channel_id()); + if let Ok(msg) = self.get_channel_update_for_unicast(channel) { + pending_msg_events.push(events::MessageSendEvent::SendChannelUpdate { + node_id: channel.context.get_counterparty_node_id(), + msg, + }); } + } else { + log_trace!(logger, "Sending splice_locked WITHOUT channel_update for {}", channel.context.channel_id()); } - } else if let Err(reason) = res { - update_maps_on_chan_removal!(self, &channel.context); - // It looks like our counterparty went on-chain or funding transaction was - // reorged out of the main chain. Close the channel. - let reason_message = format!("{}", reason); - failed_channels.push(channel.context.force_shutdown(true, reason)); - if let Ok(update) = self.get_channel_update_for_broadcast(&channel) { - let mut pending_broadcast_messages = self.pending_broadcast_messages.lock().unwrap(); - pending_broadcast_messages.push(events::MessageSendEvent::BroadcastChannelUpdate { - msg: update + } + } + + { + let mut pending_events = self.pending_events.lock().unwrap(); + emit_channel_ready_event!(pending_events, channel); + } + + if let Some(announcement_sigs) = announcement_sigs { + log_trace!(logger, "Sending announcement_signatures for channel {}", channel.context.channel_id()); + pending_msg_events.push(events::MessageSendEvent::SendAnnouncementSignatures { + node_id: channel.context.get_counterparty_node_id(), + msg: announcement_sigs, + }); + if let Some(height) = height_opt { + if let Some(announcement) = channel.get_signed_channel_announcement(&self.node_signer, self.chain_hash, height, &self.default_configuration) { + pending_msg_events.push(events::MessageSendEvent::BroadcastChannelAnnouncement { + msg: announcement, + // Note that announcement_signatures fails if the channel cannot be announced, + // so get_channel_update_for_broadcast will never fail by the time we get here. 
+ update_msg: Some(self.get_channel_update_for_broadcast(channel).unwrap()), }); } - pending_msg_events.push(events::MessageSendEvent::HandleError { - node_id: channel.context.get_counterparty_node_id(), - action: msgs::ErrorAction::DisconnectPeer { - msg: Some(msgs::ErrorMessage { - channel_id: channel.context.channel_id(), - data: reason_message, - }) - }, - }); - return false; } - true } + if channel.is_our_channel_ready() { + if let Some(real_scid) = channel.context.get_short_channel_id() { + // If we sent a 0conf channel_ready, and now have an SCID, we add it + // to the short_to_chan_info map here. Note that we check whether we + // can relay using the real SCID at relay-time (i.e. + // enforce option_scid_alias then), and if the funding tx is ever + // un-confirmed we force-close the channel, ensuring short_to_chan_info + // is always consistent. + let mut short_to_chan_info = self.short_to_chan_info.write().unwrap(); + let scid_insert = short_to_chan_info.insert(real_scid, (channel.context.get_counterparty_node_id(), channel.context.channel_id())); + assert!(scid_insert.is_none() || scid_insert.unwrap() == (channel.context.get_counterparty_node_id(), channel.context.channel_id()), + "SCIDs should never collide - ensure you weren't behind by a full {} blocks when creating channels", + fake_scid::MAX_SCID_BLOCKS_FROM_NOW); + } + } + } else if let Err(reason) = res { + update_maps_on_chan_removal!(self, &channel.context); + // It looks like our counterparty went on-chain or funding transaction was + // reorged out of the main chain. Close the channel. + let reason_message = format!("{}", reason); + failed_channels.push(channel.context.force_shutdown(true, reason)); + if let Ok(update) = self.get_channel_update_for_broadcast(&channel) { + let mut pending_broadcast_messages = self.pending_broadcast_messages.lock().unwrap(); + pending_broadcast_messages.push(events::MessageSendEvent::BroadcastChannelUpdate { + msg: update + }); + } + pending_msg_events.push(events::MessageSendEvent::HandleError { + node_id: channel.context.get_counterparty_node_id(), + action: msgs::ErrorAction::DisconnectPeer { + msg: Some(msgs::ErrorMessage { + channel_id: channel.context.channel_id(), + data: reason_message, + }) + }, + }); + return false; } + true }); } } @@ -9725,7 +11181,7 @@ where // open_channel message - pre-funded channels are never written so there should be no // change to the contents. let _persistence_guard = PersistenceNotifierGuard::optionally_notify(self, || { - let res = self.internal_open_channel(counterparty_node_id, msg); + let res = self.internal_open_channel(counterparty_node_id, OpenChannelMessage::V1(msg.clone())); let persist = match &res { Err(e) if e.closes_channel() => { debug_assert!(false, "We shouldn't close a new channel"); @@ -9738,10 +11194,23 @@ where }); } + #[cfg(any(dual_funding, splicing))] fn handle_open_channel_v2(&self, counterparty_node_id: &PublicKey, msg: &msgs::OpenChannelV2) { - let _: Result<(), _> = handle_error!(self, Err(MsgHandleErrInternal::send_err_msg_no_close( - "Dual-funded channels not supported".to_owned(), - msg.common_fields.temporary_channel_id.clone())), *counterparty_node_id); + // Note that we never need to persist the updated ChannelManager for an inbound + // open_channel message - pre-funded channels are never written so there should be no + // change to the contents. 
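+ // This mirrors handle_open_channel above, except that the message is wrapped as
+ // OpenChannelMessage::V2 before being handed to internal_open_channel.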
+ let _persistence_guard = PersistenceNotifierGuard::optionally_notify(self, || { + let res = self.internal_open_channel(counterparty_node_id, OpenChannelMessage::V2(msg.clone())); + let persist = match &res { + Err(e) if e.closes_channel() => { + debug_assert!(false, "We shouldn't close a new channel"); + NotifyOption::DoPersist + }, + _ => NotifyOption::SkipPersistHandleEvents, + }; + let _ = handle_error!(self, res, *counterparty_node_id); + persist + }); } fn handle_accept_channel(&self, counterparty_node_id: &PublicKey, msg: &msgs::AcceptChannel) { @@ -9754,10 +11223,13 @@ where }); } + #[cfg(any(dual_funding, splicing))] fn handle_accept_channel_v2(&self, counterparty_node_id: &PublicKey, msg: &msgs::AcceptChannelV2) { - let _: Result<(), _> = handle_error!(self, Err(MsgHandleErrInternal::send_err_msg_no_close( - "Dual-funded channels not supported".to_owned(), - msg.common_fields.temporary_channel_id.clone())), *counterparty_node_id); + // Note that we never need to persist the updated ChannelManager for an inbound + // accept_channel2 message - pre-funded channels are never written so there should be no + // change to the contents. + let _persistence_guard = PersistenceNotifierGuard::notify_on_drop(self); + let _ = handle_error!(self, self.internal_accept_channel_v2(counterparty_node_id, msg), *counterparty_node_id); } fn handle_funding_created(&self, counterparty_node_id: &PublicKey, msg: &msgs::FundingCreated) { @@ -9793,24 +11265,32 @@ where } #[cfg(splicing)] - fn handle_splice(&self, counterparty_node_id: &PublicKey, msg: &msgs::Splice) { - let _: Result<(), _> = handle_error!(self, Err(MsgHandleErrInternal::send_err_msg_no_close( - "Splicing not supported".to_owned(), - msg.channel_id.clone())), *counterparty_node_id); + fn handle_splice_init(&self, counterparty_node_id: &PublicKey, msg: &msgs::SpliceInit) { + let _persistence_guard = PersistenceNotifierGuard::notify_on_drop(self); + let _ = handle_error!(self, self.internal_splice_init(counterparty_node_id, msg), *counterparty_node_id); } #[cfg(splicing)] fn handle_splice_ack(&self, counterparty_node_id: &PublicKey, msg: &msgs::SpliceAck) { - let _: Result<(), _> = handle_error!(self, Err(MsgHandleErrInternal::send_err_msg_no_close( - "Splicing not supported (splice_ack)".to_owned(), - msg.channel_id.clone())), *counterparty_node_id); + let _persistence_guard = PersistenceNotifierGuard::notify_on_drop(self); + let _ = handle_error!(self, self.internal_splice_ack(counterparty_node_id, msg), *counterparty_node_id); } #[cfg(splicing)] fn handle_splice_locked(&self, counterparty_node_id: &PublicKey, msg: &msgs::SpliceLocked) { - let _: Result<(), _> = handle_error!(self, Err(MsgHandleErrInternal::send_err_msg_no_close( - "Splicing not supported (splice_locked)".to_owned(), - msg.channel_id.clone())), *counterparty_node_id); + // Note that we never need to persist the updated ChannelManager for an inbound + // splice_locked message - while the channel's state will change, any splice_locked message + // will ultimately be re-sent on startup and the `ChannelMonitor` won't be updated so we + // will not force-close the channel on startup. 
+ let _persistence_guard = PersistenceNotifierGuard::optionally_notify(self, || { + let res = self.internal_splice_locked(counterparty_node_id, msg); + let persist = match &res { + Err(e) if e.closes_channel() => NotifyOption::DoPersist, + _ => NotifyOption::SkipPersistHandleEvents, + }; + let _ = handle_error!(self, res, *counterparty_node_id); + persist + }); } fn handle_shutdown(&self, counterparty_node_id: &PublicKey, msg: &msgs::Shutdown) { @@ -9955,11 +11435,14 @@ where } &mut chan.context }, - // We retain UnfundedOutboundV1 channel for some time in case - // peer unexpectedly disconnects, and intends to reconnect again. - ChannelPhase::UnfundedOutboundV1(_) => { - return true; - }, + // If we get disconnected and haven't yet committed to a funding + // transaction, we can replay the `open_channel` on reconnection, so don't + // bother dropping the channel here. However, if we already committed to + // the funding transaction we don't yet support replaying the funding + // handshake (and bailing if the peer rejects it), so we force-close in + // that case. + ChannelPhase::UnfundedOutboundV1(chan) if chan.is_resumable() => return true, + ChannelPhase::UnfundedOutboundV1(chan) => &mut chan.context, // Unfunded inbound channels will always be removed. ChannelPhase::UnfundedInboundV1(chan) => { &mut chan.context @@ -9972,6 +11455,8 @@ where ChannelPhase::UnfundedInboundV2(chan) => { &mut chan.context }, + #[cfg(splicing)] + ChannelPhase::RefundingV2(_) => todo!("splicing"), }; // Clean up for removal. update_maps_on_chan_removal!(self, &context); @@ -9997,7 +11482,7 @@ where // Quiescence &events::MessageSendEvent::SendStfu { .. } => false, // Splicing - &events::MessageSendEvent::SendSplice { .. } => false, + &events::MessageSendEvent::SendSpliceInit { .. } => false, &events::MessageSendEvent::SendSpliceAck { .. } => false, &events::MessageSendEvent::SendSpliceLocked { .. } => false, // Interactive Transaction Construction @@ -10147,12 +11632,15 @@ where // TODO(dual_funding): Combine this match arm with above once #[cfg(any(dual_funding, splicing))] is removed. #[cfg(any(dual_funding, splicing))] - ChannelPhase::UnfundedInboundV2(channel) => { + ChannelPhase::UnfundedInboundV2(_) => { // Since unfunded inbound channel maps are cleared upon disconnecting a peer, // they are not persisted and won't be recovered after a crash. // Therefore, they shouldn't exist at this point. debug_assert!(false); }, + + #[cfg(splicing)] + ChannelPhase::RefundingV2(_) => todo!("splicing"), } } } @@ -10262,6 +11750,8 @@ where None | Some(ChannelPhase::UnfundedInboundV1(_) | ChannelPhase::Funded(_)) => (), #[cfg(any(dual_funding, splicing))] Some(ChannelPhase::UnfundedInboundV2(_)) => (), + #[cfg(splicing)] + Some(ChannelPhase::RefundingV2(_)) => todo!("splicing"), } } @@ -10282,55 +11772,88 @@ where Some(vec![self.chain_hash]) } + #[cfg(any(dual_funding, splicing))] fn handle_tx_add_input(&self, counterparty_node_id: &PublicKey, msg: &msgs::TxAddInput) { - let _: Result<(), _> = handle_error!(self, Err(MsgHandleErrInternal::send_err_msg_no_close( - "Dual-funded channels not supported".to_owned(), - msg.channel_id.clone())), *counterparty_node_id); + // Note that we never need to persist the updated ChannelManager for an inbound + // tx_add_input message - interactive transaction construction does not need to + // be persisted before any signatures are exchanged. 
+ let _persistence_guard = PersistenceNotifierGuard::optionally_notify(self, || { + let _ = handle_error!(self, self.internal_tx_add_input(counterparty_node_id, msg), *counterparty_node_id); + NotifyOption::SkipPersistHandleEvents + }); } + #[cfg(any(dual_funding, splicing))] fn handle_tx_add_output(&self, counterparty_node_id: &PublicKey, msg: &msgs::TxAddOutput) { - let _: Result<(), _> = handle_error!(self, Err(MsgHandleErrInternal::send_err_msg_no_close( - "Dual-funded channels not supported".to_owned(), - msg.channel_id.clone())), *counterparty_node_id); + // Note that we never need to persist the updated ChannelManager for an inbound + // tx_add_output message - interactive transaction construction does not need to + // be persisted before any signatures are exchanged. + let _persistence_guard = PersistenceNotifierGuard::optionally_notify(self, || { + let _ = handle_error!(self, self.internal_tx_add_output(counterparty_node_id, msg), *counterparty_node_id); + NotifyOption::SkipPersistHandleEvents + }); } + #[cfg(any(dual_funding, splicing))] fn handle_tx_remove_input(&self, counterparty_node_id: &PublicKey, msg: &msgs::TxRemoveInput) { - let _: Result<(), _> = handle_error!(self, Err(MsgHandleErrInternal::send_err_msg_no_close( - "Dual-funded channels not supported".to_owned(), - msg.channel_id.clone())), *counterparty_node_id); + // Note that we never need to persist the updated ChannelManager for an inbound + // tx_remove_input message - interactive transaction construction does not need to + // be persisted before any signatures are exchanged. + let _persistence_guard = PersistenceNotifierGuard::optionally_notify(self, || { + let _ = handle_error!(self, self.internal_tx_remove_input(counterparty_node_id, msg), *counterparty_node_id); + NotifyOption::SkipPersistHandleEvents + }); } + #[cfg(any(dual_funding, splicing))] fn handle_tx_remove_output(&self, counterparty_node_id: &PublicKey, msg: &msgs::TxRemoveOutput) { - let _: Result<(), _> = handle_error!(self, Err(MsgHandleErrInternal::send_err_msg_no_close( - "Dual-funded channels not supported".to_owned(), - msg.channel_id.clone())), *counterparty_node_id); + // Note that we never need to persist the updated ChannelManager for an inbound + // tx_remove_output message - interactive transaction construction does not need to + // be persisted before any signatures are exchanged. + let _persistence_guard = PersistenceNotifierGuard::optionally_notify(self, || { + let _ = handle_error!(self, self.internal_tx_remove_output(counterparty_node_id, msg), *counterparty_node_id); + NotifyOption::SkipPersistHandleEvents + }); } + #[cfg(any(dual_funding, splicing))] fn handle_tx_complete(&self, counterparty_node_id: &PublicKey, msg: &msgs::TxComplete) { - let _: Result<(), _> = handle_error!(self, Err(MsgHandleErrInternal::send_err_msg_no_close( - "Dual-funded channels not supported".to_owned(), - msg.channel_id.clone())), *counterparty_node_id); + // Note that we never need to persist the updated ChannelManager for an inbound + // tx_complete message - interactive transaction construction does not need to + // be persisted before any signatures are exchanged. 
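The handlers in this hunk route the BOLT-2 interactive transaction construction messages (`tx_add_input`, `tx_add_output`, `tx_remove_input`, `tx_remove_output`, `tx_complete`) into the `ChannelManager`. As a rough mental model of the protocol they implement, the sketch below captures only the termination rule: peers take turns adding and removing inputs and outputs, and negotiation finishes once the two sides exchange consecutive `tx_complete` messages. It is a deliberately simplified stand-in, not LDK's actual interactive-tx state machine.

```rust
// Simplified model of the interactive-tx termination rule: negotiation is done
// once two `Complete` messages are exchanged back to back, one from each peer,
// with no additions or removals in between.
#[derive(Clone, Copy, PartialEq, Debug)]
enum InteractiveTxMsg { AddInput, AddOutput, RemoveInput, RemoveOutput, Complete }

fn negotiation_complete(transcript: &[InteractiveTxMsg]) -> bool {
    matches!(transcript, [.., InteractiveTxMsg::Complete, InteractiveTxMsg::Complete])
}

fn main() {
    use InteractiveTxMsg::*;
    // Opener adds an input and the funding output; the acceptor contributes
    // nothing and simply replies with tx_complete each turn.
    let transcript = [AddInput, Complete, AddOutput, Complete, Complete, Complete];
    assert!(negotiation_complete(&transcript));
    assert!(!negotiation_complete(&[AddInput, Complete, AddOutput]));
}
```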
+ let _persistence_guard = PersistenceNotifierGuard::optionally_notify(self, || { + let _ = handle_error!(self, self.internal_tx_complete(counterparty_node_id, msg), *counterparty_node_id); + NotifyOption::SkipPersistHandleEvents + }); } + #[cfg(any(dual_funding, splicing))] fn handle_tx_signatures(&self, counterparty_node_id: &PublicKey, msg: &msgs::TxSignatures) { - let _: Result<(), _> = handle_error!(self, Err(MsgHandleErrInternal::send_err_msg_no_close( - "Dual-funded channels not supported".to_owned(), - msg.channel_id.clone())), *counterparty_node_id); + let _persistence_guard = PersistenceNotifierGuard::notify_on_drop(self); + let _ = handle_error!(self, self.internal_tx_signatures(counterparty_node_id, msg), *counterparty_node_id); } + #[cfg(any(dual_funding, splicing))] fn handle_tx_init_rbf(&self, counterparty_node_id: &PublicKey, msg: &msgs::TxInitRbf) { let _: Result<(), _> = handle_error!(self, Err(MsgHandleErrInternal::send_err_msg_no_close( "Dual-funded channels not supported".to_owned(), msg.channel_id.clone())), *counterparty_node_id); } + #[cfg(any(dual_funding, splicing))] fn handle_tx_ack_rbf(&self, counterparty_node_id: &PublicKey, msg: &msgs::TxAckRbf) { let _: Result<(), _> = handle_error!(self, Err(MsgHandleErrInternal::send_err_msg_no_close( "Dual-funded channels not supported".to_owned(), msg.channel_id.clone())), *counterparty_node_id); } + #[cfg(any(dual_funding, splicing))] fn handle_tx_abort(&self, counterparty_node_id: &PublicKey, msg: &msgs::TxAbort) { + // log TxAbort, data field + log_debug!(&self.logger, "Received a TxAbort message"); + if let Ok(s) = str::from_utf8(&msg.data) { + log_debug!(&self.logger, " data: {}", s); + }; let _: Result<(), _> = handle_error!(self, Err(MsgHandleErrInternal::send_err_msg_no_close( "Dual-funded channels not supported".to_owned(), msg.channel_id.clone())), *counterparty_node_id); @@ -10548,6 +12071,7 @@ pub fn provided_init_features(config: &UserConfig) -> InitFeatures { if config.channel_handshake_config.negotiate_anchors_zero_fee_htlc_tx { features.set_anchors_zero_fee_htlc_tx_optional(); } + features.set_dual_fund_optional(); features } @@ -12370,10 +13894,14 @@ where #[cfg(test)] mod tests { + #[cfg(any(dual_funding, splicing))] + use bitcoin::Witness; use bitcoin::hashes::Hash; use bitcoin::hashes::sha256::Hash as Sha256; use bitcoin::secp256k1::{PublicKey, Secp256k1, SecretKey}; use core::sync::atomic::Ordering; + #[cfg(any(dual_funding, splicing))] + use crate::chain::chaininterface::ConfirmationTarget; use crate::events::{Event, HTLCDestination, MessageSendEvent, MessageSendEventsProvider, ClosureReason}; use crate::ln::types::{ChannelId, PaymentPreimage, PaymentHash, PaymentSecret}; use crate::ln::channelmanager::{create_recv_pending_htlc_info, HTLCForwardInfo, inbound_payment, PaymentId, PaymentSendFailure, RecipientOnionFields, InterceptId}; @@ -13771,6 +15299,161 @@ mod tests { expect_pending_htlcs_forwardable!(nodes[0]); } + + // Dual-funding: V2 Channel Establishment Tests + #[test] + #[cfg(any(dual_funding, splicing))] + fn test_v2_channel_establishment_only_initiator_contributes() { + let chanmon_cfgs = create_chanmon_cfgs(2); + let node_cfgs = create_node_cfgs(2, &chanmon_cfgs); + let node_chanmgrs = create_node_chanmgrs(2, &node_cfgs, &[None, None]); + let nodes = create_network(2, &node_cfgs, &node_chanmgrs); + + // Create a funding input for the new channel along with its previous transaction. 
+ let funding_inputs = vec![create_dual_funding_utxo_with_prev_tx(&nodes[0], 100_000)]; + let funding_satoshis = 50_000; + + // nodes[0] creates a dual-funded channel as initiator. + nodes[0].node.create_dual_funded_channel( + nodes[1].node.get_our_node_id(), funding_satoshis, funding_inputs, + Some(ConfirmationTarget::NonAnchorChannelFee), 42, None, + ).unwrap(); + let open_channel_v2_msg = get_event_msg!(nodes[0], MessageSendEvent::SendOpenChannelV2, nodes[1].node.get_our_node_id()); + + assert_eq!(nodes[0].node.list_channels().len(), 1); + + // Since `manually_accept_inbound_channels` is false by default, nodes[1]'s node will accept the dual- + // funded channel immediately without allowing nodes[1] to contribute any inputs. + nodes[1].node.handle_open_channel_v2(&nodes[0].node.get_our_node_id(), &open_channel_v2_msg); + let accept_channel_v2_msg = get_event_msg!(nodes[1], MessageSendEvent::SendAcceptChannelV2, nodes[0].node.get_our_node_id()); + + nodes[0].node.handle_accept_channel_v2(&nodes[1].node.get_our_node_id(), &accept_channel_v2_msg); + + /* Note: FundingInputsContributionReady event no longer used + // nodes[0] should get an event notifying her that channel establishment is awaiting funding inputs + // and that she should provide them. + if let Event::FundingInputsContributionReady { + channel_id, + counterparty_node_id, + holder_funding_satoshis, + counterparty_funding_satoshis, + .. + } = get_event!(nodes[0], Event::FundingInputsContributionReady) { + assert_eq!(holder_funding_satoshis, funding_satoshis); + assert_eq!(counterparty_funding_satoshis, 0); + nodes[0].node.contribute_funding_inputs(&channel_id, &counterparty_node_id, funding_inputs).unwrap(); + } else { panic!(); } + */ + let tx_add_input_msg = get_event_msg!(&nodes[0], MessageSendEvent::SendTxAddInput, nodes[1].node.get_our_node_id()); + + nodes[1].node.handle_tx_add_input(&nodes[0].node.get_our_node_id(), &tx_add_input_msg); + let tx_complete_msg = get_event_msg!(nodes[1], MessageSendEvent::SendTxComplete, nodes[0].node.get_our_node_id()); + + let input_value = tx_add_input_msg.prevtx.as_transaction().output[tx_add_input_msg.prevtx_out as usize].value; + assert_eq!(input_value, 100_000); + + // We have two outputs being added: the P2WSH funding output, and the P2PKH change output + + nodes[0].node.handle_tx_complete(&nodes[1].node.get_our_node_id(), &tx_complete_msg); + let tx_add_output_msg = get_event_msg!(&nodes[0], MessageSendEvent::SendTxAddOutput, nodes[1].node.get_our_node_id()); + + nodes[1].node.handle_tx_add_output(&nodes[0].node.get_our_node_id(), &tx_add_output_msg); + let tx_complete_msg = get_event_msg!(&nodes[1], MessageSendEvent::SendTxComplete, nodes[0].node.get_our_node_id()); + + nodes[0].node.handle_tx_complete(&nodes[1].node.get_our_node_id(), &tx_complete_msg); + let tx_add_output_msg = get_event_msg!(&nodes[0], MessageSendEvent::SendTxAddOutput, nodes[1].node.get_our_node_id()); + + nodes[1].node.handle_tx_add_output(&nodes[0].node.get_our_node_id(), &tx_add_output_msg); + let tx_complete_msg = get_event_msg!(nodes[1], MessageSendEvent::SendTxComplete, nodes[0].node.get_our_node_id()); + + nodes[0].node.handle_tx_complete(&nodes[1].node.get_our_node_id(), &tx_complete_msg); + let msg_events = nodes[0].node.get_and_clear_pending_msg_events(); + assert_eq!(msg_events.len(), 2); + let tx_complete_msg = match msg_events[0] { + MessageSendEvent::SendTxComplete { ref node_id, ref msg } => { + assert_eq!(*node_id, nodes[1].node.get_our_node_id()); + (*msg).clone() + }, + _ => panic!("Unexpected 
event"), + }; + let msg_commitment_signed_from_0 = match msg_events[1] { + MessageSendEvent::UpdateHTLCs { ref node_id, ref updates } => { + assert_eq!(*node_id, nodes[1].node.get_our_node_id()); + updates.commitment_signed.clone() + }, + _ => panic!("Unexpected event"), + }; + if let Event::FundingTransactionReadyForSigning { + channel_id, + counterparty_node_id, + mut unsigned_transaction, + .. + } = get_event!(nodes[0], Event::FundingTransactionReadyForSigning) { + assert_eq!(counterparty_node_id, nodes[1].node.get_our_node_id()); + + // placeholder signature + let mut witness = Witness::new(); + witness.push([7; 72]); + unsigned_transaction.input[0].witness = witness; + + nodes[0].node.funding_transaction_signed(&channel_id, &counterparty_node_id, unsigned_transaction).unwrap(); + } else { panic!(); } + + nodes[1].node.handle_tx_complete(&nodes[0].node.get_our_node_id(), &tx_complete_msg); + let msg_events = nodes[1].node.get_and_clear_pending_msg_events(); + // First messsage is commitment_signed, second is tx_signatures (see below for more) + assert_eq!(msg_events.len(), 1); + let msg_commitment_signed_from_1 = match msg_events[0] { + MessageSendEvent::UpdateHTLCs { ref node_id, ref updates } => { + assert_eq!(*node_id, nodes[0].node.get_our_node_id()); + updates.commitment_signed.clone() + }, + _ => panic!("Unexpected event {:?}", msg_events[0]), + }; + + // Handle the initial commitment_signed exchange. Order is not important here. + nodes[1].node.handle_commitment_signed(&nodes[0].node.get_our_node_id(), &msg_commitment_signed_from_0); + nodes[0].node.handle_commitment_signed(&nodes[1].node.get_our_node_id(), &msg_commitment_signed_from_1); + check_added_monitors(&nodes[0], 1); + check_added_monitors(&nodes[1], 1); + + // The initiator is the only party that contributed any inputs so they should definitely be the one to send tx_signatures + // only after receiving tx_signatures from the non-initiator in this case. + let msg_events = nodes[0].node.get_and_clear_pending_msg_events(); + assert!(msg_events.is_empty()); + let tx_signatures_from_1 = get_event_msg!(nodes[1], MessageSendEvent::SendTxSignatures, nodes[0].node.get_our_node_id()); + + nodes[0].node.handle_tx_signatures(&nodes[1].node.get_our_node_id(), &tx_signatures_from_1); + let events_0 = nodes[0].node.get_and_clear_pending_events(); + assert_eq!(events_0.len(), 1); + match events_0[0] { + Event::ChannelPending{ ref counterparty_node_id, .. } => { + assert_eq!(*counterparty_node_id, nodes[1].node.get_our_node_id()); + }, + _ => panic!("Unexpected event"), + } + let tx_signatures_from_0 = get_event_msg!(nodes[0], MessageSendEvent::SendTxSignatures, nodes[1].node.get_our_node_id()); + nodes[1].node.handle_tx_signatures(&nodes[0].node.get_our_node_id(), &tx_signatures_from_0); + let events_1 = nodes[1].node.get_and_clear_pending_events(); + assert_eq!(events_1.len(), 1); + match events_1[0] { + Event::ChannelPending{ ref counterparty_node_id, .. 
} => { + assert_eq!(*counterparty_node_id, nodes[0].node.get_our_node_id()); + }, + _ => panic!("Unexpected event"), + } + + let tx = { + let tx_0 = &nodes[0].tx_broadcaster.txn_broadcasted.lock().unwrap()[0]; + let tx_1 = &nodes[1].tx_broadcaster.txn_broadcasted.lock().unwrap()[0]; + assert_eq!(tx_0, tx_1); + tx_0.clone() + }; + + let (channel_ready, _) = create_chan_between_nodes_with_value_confirm(&nodes[0], &nodes[1], &tx); + let (announcement, nodes_0_update, nodes_1_update) = create_chan_between_nodes_with_value_b(&nodes[0], &nodes[1], &channel_ready); + update_nodes_with_chan_announce(&nodes, 0, 1, &announcement, &nodes_0_update, &nodes_1_update); + } } #[cfg(ldk_bench)] diff --git a/lightning/src/ln/features.rs b/lightning/src/ln/features.rs index ff91654a3f7..d0dd5660b57 100644 --- a/lightning/src/ln/features.rs +++ b/lightning/src/ln/features.rs @@ -49,6 +49,9 @@ //! (see [BOLT-4](https://github.com/lightning/bolts/blob/master/04-onion-routing.md#route-blinding) for more information). //! - `ShutdownAnySegwit` - requires/supports that future segwit versions are allowed in `shutdown` //! (see [BOLT-2](https://github.com/lightning/bolts/blob/master/02-peer-protocol.md) for more information). +//! - `DualFund` - requires/supports V2 channel establishment +//! (see [BOLT-2](https://github.com/lightning/bolts/pull/851/files) for more information). +// TODO: update link //! - `OnionMessages` - requires/supports forwarding onion messages //! (see [BOLT-7](https://github.com/lightning/bolts/pull/759/files) for more information). // TODO: update link @@ -150,7 +153,7 @@ mod sealed { // Byte 2 BasicMPP | Wumbo | AnchorsNonzeroFeeHtlcTx | AnchorsZeroFeeHtlcTx, // Byte 3 - RouteBlinding | ShutdownAnySegwit | Taproot, + RouteBlinding | ShutdownAnySegwit | DualFund | Taproot, // Byte 4 OnionMessages, // Byte 5 @@ -168,7 +171,7 @@ mod sealed { // Byte 2 BasicMPP | Wumbo | AnchorsNonzeroFeeHtlcTx | AnchorsZeroFeeHtlcTx, // Byte 3 - RouteBlinding | ShutdownAnySegwit | Taproot, + RouteBlinding | ShutdownAnySegwit | DualFund | Taproot, // Byte 4 OnionMessages, // Byte 5 @@ -410,6 +413,9 @@ mod sealed { define_feature!(27, ShutdownAnySegwit, [InitContext, NodeContext], "Feature flags for `opt_shutdown_anysegwit`.", set_shutdown_any_segwit_optional, set_shutdown_any_segwit_required, supports_shutdown_anysegwit, requires_shutdown_anysegwit); + define_feature!(29, DualFund, [InitContext, NodeContext], + "Feature flags for `option_dual_fund`.", set_dual_fund_optional, set_dual_fund_required, + supports_dual_fund, requires_dual_fund); define_feature!(31, Taproot, [InitContext, NodeContext, ChannelTypeContext], "Feature flags for `option_taproot`.", set_taproot_optional, set_taproot_required, supports_taproot, requires_taproot); diff --git a/lightning/src/ln/functional_test_utils.rs b/lightning/src/ln/functional_test_utils.rs index c5361318fca..f8f9e3d0681 100644 --- a/lightning/src/ln/functional_test_utils.rs +++ b/lightning/src/ln/functional_test_utils.rs @@ -37,13 +37,15 @@ use crate::util::ser::{ReadableArgs, Writeable}; use bitcoin::blockdata::block::{Block, Header, Version}; use bitcoin::blockdata::locktime::absolute::LockTime; -use bitcoin::blockdata::transaction::{Transaction, TxIn, TxOut}; +use bitcoin::blockdata::script::ScriptBuf; +use bitcoin::blockdata::transaction::{Sequence, Transaction, TxIn, TxOut}; +use bitcoin::blockdata::witness::Witness; use bitcoin::hash_types::{BlockHash, TxMerkleNode}; use bitcoin::hashes::sha256::Hash as Sha256; use bitcoin::hashes::Hash as _; use 
bitcoin::network::constants::Network; use bitcoin::pow::CompactTarget; -use bitcoin::secp256k1::{PublicKey, SecretKey}; +use bitcoin::secp256k1::{PublicKey, Secp256k1, SecretKey}; use alloc::rc::Rc; use core::cell::RefCell; @@ -697,7 +699,7 @@ pub fn get_revoke_commit_msgs>(node: & assert_eq!(node_id, recipient); (*msg).clone() }, - _ => panic!("Unexpected event"), + _ => panic!("Unexpected event {:?}", events[0]), }, match events[1] { MessageSendEvent::UpdateHTLCs { ref node_id, ref updates } => { assert_eq!(node_id, recipient); @@ -734,7 +736,7 @@ macro_rules! get_event_msg { assert_eq!(*node_id, $node_id); (*msg).clone() }, - _ => panic!("Unexpected event"), + _ => panic!("Unexpected event {:?}", events[0]), } } } @@ -761,7 +763,20 @@ pub fn get_err_msg(node: &Node, recipient: &PublicKey) -> msgs::ErrorMessage { } } -/// Get a specific event from the pending events queue. +/// Assert that an event is of specific type. +#[macro_export] +macro_rules! assert_event_type { + ($ev: expr, $event_type: path) => { + { + match $ev { + $event_type { .. } => {}, + _ => panic!("Unexpected event {:?}", $ev), + } + } + } +} + +/// Get a single specific event from the pending events queue. #[macro_export] macro_rules! get_event { ($node: expr, $event_type: path) => { @@ -769,12 +784,8 @@ macro_rules! get_event { let mut events = $node.node.get_and_clear_pending_events(); assert_eq!(events.len(), 1); let ev = events.pop().unwrap(); - match ev { - $event_type { .. } => { - ev - }, - _ => panic!("Unexpected event"), - } + assert_event_type!(ev, $event_type); + ev } } } @@ -882,7 +893,7 @@ pub fn remove_first_msg_event_to_node(msg_node_id: &PublicKey, msg_events: &mut MessageSendEvent::SendStfu { node_id, .. } => { node_id == msg_node_id }, - MessageSendEvent::SendSplice { node_id, .. } => { + MessageSendEvent::SendSpliceInit { node_id, .. } => { node_id == msg_node_id }, MessageSendEvent::SendSpliceAck { node_id, .. 
} => { @@ -1133,6 +1144,37 @@ pub fn create_coinbase_funding_transaction<'a, 'b, 'c>(node: &Node<'a, 'b, 'c>, internal_create_funding_transaction(node, expected_counterparty_node_id, expected_chan_value, expected_user_chan_id, true) } +/// Create a to-be-contributed custom input for a dual-funded transaction +pub fn create_dual_funding_utxo_with_prev_tx<'a, 'b, 'c>( + node: &Node<'a, 'b, 'c>, value_satoshis: u64, +) -> (TxIn, Transaction) { + let dummy_secret_key = SecretKey::from_slice(&[2; 32]).unwrap(); + create_custom_dual_funding_input_with_pubkey(node, value_satoshis, &PublicKey::from_secret_key(&Secp256k1::new(), &dummy_secret_key)) +} + +/// Create a to-be-contributed custom input for a dual-funded transaction, from the specified pubkey +pub fn create_custom_dual_funding_input_with_pubkey<'a, 'b, 'c>( + node: &Node<'a, 'b, 'c>, value_satoshis: u64, custom_input_pubkey: &PublicKey, +) -> (TxIn, Transaction) { + let chan_id = *node.network_chan_count.borrow(); + + let input_pubkeyhash = bitcoin::key::PublicKey::new(*custom_input_pubkey).wpubkey_hash().unwrap(); + let tx = Transaction { version: chan_id as i32, lock_time: LockTime::ZERO, input: vec![], + output: vec![TxOut { + value: value_satoshis, script_pubkey: ScriptBuf::new_v0_p2wpkh(&input_pubkeyhash), + }]}; + let funding_input = TxIn { + previous_output: OutPoint { + txid: tx.txid(), + index: 0, + }.into_bitcoin_outpoint(), + script_sig: ScriptBuf::new(), + sequence: Sequence::ZERO, + witness: Witness::new(), + }; + (funding_input, tx) +} + fn internal_create_funding_transaction<'a, 'b, 'c>(node: &Node<'a, 'b, 'c>, expected_counterparty_node_id: &PublicKey, expected_chan_value: u64, expected_user_chan_id: u128, coinbase: bool) -> (ChannelId, Transaction, OutPoint) { @@ -1207,6 +1249,31 @@ pub fn sign_funding_transaction<'a, 'b, 'c>(node_a: &Node<'a, 'b, 'c>, node_b: & tx } +/// #SPLICING +pub fn create_splice_in_transaction(current_funding_outpoint: bitcoin::OutPoint, post_channel_value_satoshis: u64, output_script: ScriptBuf, version: i32) -> (Transaction, OutPoint) { + let tx = Transaction { + version, + lock_time: LockTime::ZERO, + // TODO: witness! must not be empty + input: vec![ + TxIn { + previous_output: current_funding_outpoint, + script_sig: ScriptBuf::new(), + sequence: Sequence::ENABLE_RBF_NO_LOCKTIME, + witness: Witness::new(), + } + ], + output: vec![ + TxOut { + value: post_channel_value_satoshis, + script_pubkey: output_script, + } + ], + }; + let funding_outpoint = OutPoint { txid: tx.txid(), index: 0 }; + (tx, funding_outpoint) +} + // Receiver must have been initialized with manually_accept_inbound_channels set to true. pub fn open_zero_conf_channel<'a, 'b, 'c, 'd>(initiator: &'a Node<'b, 'c, 'd>, receiver: &'a Node<'b, 'c, 'd>, initiator_config: Option) -> (bitcoin::Transaction, ChannelId) { let initiator_channels = initiator.node.list_usable_channels().len(); diff --git a/lightning/src/ln/functional_tests.rs b/lightning/src/ln/functional_tests.rs index e068e86b4c6..426f95f50c8 100644 --- a/lightning/src/ln/functional_tests.rs +++ b/lightning/src/ln/functional_tests.rs @@ -61,6 +61,38 @@ use crate::ln::chan_utils::CommitmentTransaction; use super::channel::UNFUNDED_CHANNEL_AGE_LIMIT_TICKS; +#[test] +fn test_channel_resumption_fail_post_funding() { + // If we fail to exchange funding with a peer prior to it disconnecting we'll resume the + // channel open on reconnect, however if we do exchange funding we do not currently support + // replaying it and here test that the channel closes. 
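The test below derives the funded channel's ID with `ChannelId::v1_from_funding_outpoint`. Per BOLT-2, the v1 derivation is simply the funding txid with its last two bytes (in big-endian/display order) XORed by the funding output index. A minimal, dependency-free sketch, assuming the caller already has the display-order txid bytes:

```rust
// BOLT-2 v1 channel_id: funding txid XOR funding output index, where the index
// only alters the last two (big-endian) bytes of the txid.
fn v1_channel_id(funding_txid_be: [u8; 32], funding_output_index: u16) -> [u8; 32] {
    let mut channel_id = funding_txid_be;
    channel_id[30] ^= (funding_output_index >> 8) as u8;
    channel_id[31] ^= (funding_output_index & 0xff) as u8;
    channel_id
}

fn main() {
    let txid = [0u8; 32];
    // With an all-zero txid the channel_id is just the index in the last bytes.
    assert_eq!(v1_channel_id(txid, 1)[31], 1);
}
```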
+ let chanmon_cfgs = create_chanmon_cfgs(2); + let node_cfgs = create_node_cfgs(2, &chanmon_cfgs); + let node_chanmgrs = create_node_chanmgrs(2, &node_cfgs, &[None, None]); + let nodes = create_network(2, &node_cfgs, &node_chanmgrs); + + nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 1_000_000, 0, 42, None, None).unwrap(); + let open_chan = get_event_msg!(nodes[0], MessageSendEvent::SendOpenChannel, nodes[1].node.get_our_node_id()); + nodes[1].node.handle_open_channel(&nodes[0].node.get_our_node_id(), &open_chan); + let accept_chan = get_event_msg!(nodes[1], MessageSendEvent::SendAcceptChannel, nodes[0].node.get_our_node_id()); + nodes[0].node.handle_accept_channel(&nodes[1].node.get_our_node_id(), &accept_chan); + + let (temp_chan_id, tx, funding_output) = + create_funding_transaction(&nodes[0], &nodes[1].node.get_our_node_id(), 1_000_000, 42); + let new_chan_id = ChannelId::v1_from_funding_outpoint(funding_output); + nodes[0].node.funding_transaction_generated(&temp_chan_id, &nodes[1].node.get_our_node_id(), tx).unwrap(); + + nodes[0].node.peer_disconnected(&nodes[1].node.get_our_node_id()); + check_closed_events(&nodes[0], &[ExpectedCloseEvent::from_id_reason(new_chan_id, true, ClosureReason::DisconnectedPeer)]); + + // After ddf75afd16 we'd panic on reconnection if we exchanged funding info, so test that + // explicitly here. + nodes[0].node.peer_connected(&nodes[1].node.get_our_node_id(), &msgs::Init { + features: nodes[1].node.init_features(), networks: None, remote_network_address: None + }, true).unwrap(); + assert_eq!(nodes[0].node.get_and_clear_pending_msg_events(), Vec::new()); +} + #[test] fn test_insane_channel_opens() { // Stand up a network of 2 nodes @@ -750,6 +782,7 @@ fn test_update_fee_that_funder_cannot_afford() { channel_id: chan.2, signature: res.0, htlc_signatures: res.1, + batch: None, #[cfg(taproot)] partial_signature_with_nonce: None, }; @@ -1498,6 +1531,7 @@ fn test_fee_spike_violation_fails_htlc() { channel_id: chan.2, signature: res.0, htlc_signatures: res.1, + batch: None, #[cfg(taproot)] partial_signature_with_nonce: None, }; @@ -2433,11 +2467,11 @@ fn channel_monitor_network_test() { #[test] fn test_justice_tx_htlc_timeout() { // Test justice txn built on revoked HTLC-Timeout tx, against both sides - let mut alice_config = UserConfig::default(); + let mut alice_config = test_default_channel_config(); alice_config.channel_handshake_config.announced_channel = true; alice_config.channel_handshake_limits.force_announced_channel_preference = false; alice_config.channel_handshake_config.our_to_self_delay = 6 * 24 * 5; - let mut bob_config = UserConfig::default(); + let mut bob_config = test_default_channel_config(); bob_config.channel_handshake_config.announced_channel = true; bob_config.channel_handshake_limits.force_announced_channel_preference = false; bob_config.channel_handshake_config.our_to_self_delay = 6 * 24 * 3; @@ -2496,11 +2530,11 @@ fn test_justice_tx_htlc_timeout() { #[test] fn test_justice_tx_htlc_success() { // Test justice txn built on revoked HTLC-Success tx, against both sides - let mut alice_config = UserConfig::default(); + let mut alice_config = test_default_channel_config(); alice_config.channel_handshake_config.announced_channel = true; alice_config.channel_handshake_limits.force_announced_channel_preference = false; alice_config.channel_handshake_config.our_to_self_delay = 6 * 24 * 5; - let mut bob_config = UserConfig::default(); + let mut bob_config = test_default_channel_config(); 
bob_config.channel_handshake_config.announced_channel = true; bob_config.channel_handshake_limits.force_announced_channel_preference = false; bob_config.channel_handshake_config.our_to_self_delay = 6 * 24 * 3; @@ -3734,10 +3768,10 @@ fn test_peer_disconnected_before_funding_broadcasted() { nodes[0].node.timer_tick_occurred(); } - // Ensure that the channel is closed with `ClosureReason::HolderForceClosed` - // when the peers are disconnected and do not reconnect before the funding - // transaction is broadcasted. - check_closed_event!(&nodes[0], 2, ClosureReason::HolderForceClosed, true + // Ensure that the channel is closed with `ClosureReason::DisconnectedPeer` and a + // `DiscardFunding` event when the peers are disconnected and do not reconnect before the + // funding transaction is broadcasted. + check_closed_event!(&nodes[0], 2, ClosureReason::DisconnectedPeer, true , [nodes[1].node.get_our_node_id()], 1000000); check_closed_event!(&nodes[1], 1, ClosureReason::DisconnectedPeer, false , [nodes[0].node.get_our_node_id()], 1000000); @@ -7222,7 +7256,7 @@ fn test_user_configurable_csv_delay() { } } else { assert!(false); } - // We test msg.to_self_delay <= config.their_to_self_delay is enforced in Chanel::accept_channel() + // We test msg.common_fields.to_self_delay <= config.their_to_self_delay is enforced in Channel::accept_channel() nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 1000000, 1000000, 42, None, None).unwrap(); nodes[1].node.handle_open_channel(&nodes[0].node.get_our_node_id(), &get_event_msg!(nodes[0], MessageSendEvent::SendOpenChannel, nodes[1].node.get_our_node_id())); let mut accept_channel = get_event_msg!(nodes[1], MessageSendEvent::SendAcceptChannel, nodes[0].node.get_our_node_id()); @@ -7240,7 +7274,7 @@ fn test_user_configurable_csv_delay() { } else { panic!(); } check_closed_event!(nodes[0], 1, ClosureReason::ProcessingError { err: reason_msg }, [nodes[1].node.get_our_node_id()], 1000000); - // We test msg.to_self_delay <= config.their_to_self_delay is enforced in InboundV1Channel::new() + // We test msg.common_fields.to_self_delay <= config.their_to_self_delay is enforced in InboundV1Channel::new() nodes[1].node.create_channel(nodes[0].node.get_our_node_id(), 1000000, 1000000, 42, None, None).unwrap(); let mut open_channel = get_event_msg!(nodes[1], MessageSendEvent::SendOpenChannel, nodes[0].node.get_our_node_id()); open_channel.common_fields.to_self_delay = 200; @@ -9872,7 +9906,7 @@ enum ExposureEvent { AtUpdateFeeOutbound, } -fn do_test_max_dust_htlc_exposure(dust_outbound_balance: bool, exposure_breach_event: ExposureEvent, on_holder_tx: bool, multiplier_dust_limit: bool) { +fn do_test_max_dust_htlc_exposure(dust_outbound_balance: bool, exposure_breach_event: ExposureEvent, on_holder_tx: bool, multiplier_dust_limit: bool, apply_excess_fee: bool) { // Test that we properly reject dust HTLC violating our `max_dust_htlc_exposure_msat` // policy. // @@ -9887,12 +9921,33 @@ fn do_test_max_dust_htlc_exposure(dust_outbound_balance: bool, exposure_breach_e let chanmon_cfgs = create_chanmon_cfgs(2); let mut config = test_default_channel_config(); + + // We hard-code the feerate values here but they're re-calculated further down and asserted. + // If the values ever change below, these constants should simply be updated. 
+ const AT_FEE_OUTBOUND_HTLCS: u64 = 20; + let nondust_htlc_count_in_limit = + if exposure_breach_event == ExposureEvent::AtUpdateFeeOutbound { + AT_FEE_OUTBOUND_HTLCS + } else { 0 }; + let initial_feerate = if apply_excess_fee { 253 * 2 } else { 253 }; + let expected_dust_buffer_feerate = initial_feerate + 2530; + let mut commitment_tx_cost = commit_tx_fee_msat(initial_feerate - 253, nondust_htlc_count_in_limit, &ChannelTypeFeatures::empty()); + commitment_tx_cost += + if on_holder_tx { + htlc_success_tx_weight(&ChannelTypeFeatures::empty()) + } else { + htlc_timeout_tx_weight(&ChannelTypeFeatures::empty()) + } * (initial_feerate as u64 - 253) / 1000 * nondust_htlc_count_in_limit; + { + let mut feerate_lock = chanmon_cfgs[0].fee_estimator.sat_per_kw.lock().unwrap(); + *feerate_lock = initial_feerate; + } config.channel_config.max_dust_htlc_exposure = if multiplier_dust_limit { // Default test fee estimator rate is 253 sat/kw, so we set the multiplier to 5_000_000 / 253 // to get roughly the same initial value as the default setting when this test was // originally written. - MaxDustHTLCExposure::FeeRateMultiplier(5_000_000 / 253) - } else { MaxDustHTLCExposure::FixedLimitMsat(5_000_000) }; // initial default setting value + MaxDustHTLCExposure::FeeRateMultiplier((5_000_000 + commitment_tx_cost) / 253) + } else { MaxDustHTLCExposure::FixedLimitMsat(5_000_000 + commitment_tx_cost) }; let node_cfgs = create_node_cfgs(2, &chanmon_cfgs); let node_chanmgrs = create_node_chanmgrs(2, &node_cfgs, &[Some(config), None]); let mut nodes = create_network(2, &node_cfgs, &node_chanmgrs); @@ -9936,6 +9991,11 @@ fn do_test_max_dust_htlc_exposure(dust_outbound_balance: bool, exposure_breach_e let (announcement, as_update, bs_update) = create_chan_between_nodes_with_value_b(&nodes[0], &nodes[1], &channel_ready); update_nodes_with_chan_announce(&nodes, 0, 1, &announcement, &as_update, &bs_update); + { + let mut feerate_lock = chanmon_cfgs[0].fee_estimator.sat_per_kw.lock().unwrap(); + *feerate_lock = 253; + } + // Fetch a route in advance as we will be unable to once we're unable to send. 
let (mut route, payment_hash, _, payment_secret) = get_route_and_payment_hash!(nodes[0], nodes[1], 1000); @@ -9945,8 +10005,9 @@ fn do_test_max_dust_htlc_exposure(dust_outbound_balance: bool, exposure_breach_e let chan_lock = per_peer_state.get(&nodes[1].node.get_our_node_id()).unwrap().lock().unwrap(); let chan = chan_lock.channel_by_id.get(&channel_id).unwrap(); (chan.context().get_dust_buffer_feerate(None) as u64, - chan.context().get_max_dust_htlc_exposure_msat(&LowerBoundedFeeEstimator(nodes[0].fee_estimator))) + chan.context().get_max_dust_htlc_exposure_msat(253)) }; + assert_eq!(dust_buffer_feerate, expected_dust_buffer_feerate as u64); let dust_outbound_htlc_on_holder_tx_msat: u64 = (dust_buffer_feerate * htlc_timeout_tx_weight(&channel_type_features) / 1000 + open_channel.common_fields.dust_limit_satoshis - 1) * 1000; let dust_outbound_htlc_on_holder_tx: u64 = max_dust_htlc_exposure_msat / dust_outbound_htlc_on_holder_tx_msat; @@ -9956,8 +10017,13 @@ fn do_test_max_dust_htlc_exposure(dust_outbound_balance: bool, exposure_breach_e let dust_inbound_htlc_on_holder_tx_msat: u64 = (dust_buffer_feerate * htlc_success_tx_weight(&channel_type_features) / 1000 + open_channel.common_fields.dust_limit_satoshis - if multiplier_dust_limit { 3 } else { 2 }) * 1000; let dust_inbound_htlc_on_holder_tx: u64 = max_dust_htlc_exposure_msat / dust_inbound_htlc_on_holder_tx_msat; + // This test was written with a fixed dust value here, which we retain, but assert that it is, + // indeed, dust on both transactions. let dust_htlc_on_counterparty_tx: u64 = 4; - let dust_htlc_on_counterparty_tx_msat: u64 = max_dust_htlc_exposure_msat / dust_htlc_on_counterparty_tx; + let dust_htlc_on_counterparty_tx_msat: u64 = 1_250_000; + let calcd_dust_htlc_on_counterparty_tx_msat: u64 = (dust_buffer_feerate * htlc_timeout_tx_weight(&channel_type_features) / 1000 + open_channel.common_fields.dust_limit_satoshis - if multiplier_dust_limit { 3 } else { 2 }) * 1000; + assert!(dust_htlc_on_counterparty_tx_msat < dust_inbound_htlc_on_holder_tx_msat); + assert!(dust_htlc_on_counterparty_tx_msat < calcd_dust_htlc_on_counterparty_tx_msat); if on_holder_tx { if dust_outbound_balance { @@ -10027,7 +10093,7 @@ fn do_test_max_dust_htlc_exposure(dust_outbound_balance: bool, exposure_breach_e // Outbound dust balance: 5200 sats nodes[0].logger.assert_log("lightning::ln::channel", format!("Cannot accept value that would put our exposure to dust HTLCs at {} over the limit {} on counterparty commitment tx", - dust_htlc_on_counterparty_tx_msat * (dust_htlc_on_counterparty_tx - 1) + dust_htlc_on_counterparty_tx_msat + 4, + dust_htlc_on_counterparty_tx_msat * dust_htlc_on_counterparty_tx + commitment_tx_cost + 4, max_dust_htlc_exposure_msat), 1); } } else if exposure_breach_event == ExposureEvent::AtUpdateFeeOutbound { @@ -10035,7 +10101,7 @@ fn do_test_max_dust_htlc_exposure(dust_outbound_balance: bool, exposure_breach_e // For the multiplier dust exposure limit, since it scales with feerate, // we need to add a lot of HTLCs that will become dust at the new feerate // to cross the threshold. 
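The dust bookkeeping exercised in this test is easier to see as plain arithmetic. The sketch below mirrors the formulas visible in the test itself: the `FeeRateMultiplier` cap is roughly feerate (sat/kw) times the multiplier in msat, the dust buffer feerate asserted above is the current feerate plus 2530 sat/kw, an offered HTLC is dust on the holder's commitment transaction when it cannot cover the HTLC-timeout fee at the buffer feerate plus the dust limit, and each non-dust HTLC contributes its second-stage transaction fee to the exposure. The weight constants are the BOLT-3 non-anchor values; everything else is a simplified illustration, not LDK's exact accounting.

```rust
// BOLT-3 second-stage transaction weights for non-anchor channels.
const HTLC_TIMEOUT_TX_WEIGHT: u64 = 663;
const HTLC_SUCCESS_TX_WEIGHT: u64 = 703;

// Cap used by MaxDustHTLCExposure::FeeRateMultiplier: roughly feerate * multiplier, in msat.
fn max_dust_exposure_msat(feerate_per_kw: u64, multiplier: u64) -> u64 {
    feerate_per_kw * multiplier
}

// The dust buffer feerate asserted above: the current feerate plus 2530 sat/kw.
fn dust_buffer_feerate(feerate_per_kw: u64) -> u64 {
    feerate_per_kw + 2530
}

// An offered (outbound) HTLC is dust on the holder commitment tx when its value
// cannot cover the HTLC-timeout fee at the buffer feerate plus the dust limit.
// Received HTLCs would use HTLC_SUCCESS_TX_WEIGHT instead.
fn is_dust_offered_htlc(value_msat: u64, feerate_per_kw: u64, dust_limit_sat: u64) -> bool {
    let threshold_msat =
        (dust_buffer_feerate(feerate_per_kw) * HTLC_TIMEOUT_TX_WEIGHT / 1000 + dust_limit_sat) * 1000;
    value_msat < threshold_msat
}

// Fee contribution of a single non-dust HTLC: its second-stage tx fee,
// expressed in msat (weight in wu times feerate in sat/kw).
fn nondust_htlc_fee_msat(feerate_per_kw: u64) -> u64 {
    HTLC_SUCCESS_TX_WEIGHT * feerate_per_kw
}

fn main() {
    let feerate = 253; // default test feerate, sat/kw
    println!("exposure cap at x10_000: {} msat", max_dust_exposure_msat(feerate, 10_000));
    println!("546 sat HTLC dust? {}", is_dust_offered_htlc(546_000, feerate, 354));
    println!("fee counted per non-dust HTLC: {} msat", nondust_htlc_fee_msat(feerate));
}
```

Because both the cap and the per-HTLC fee term scale linearly with the feerate when a multiplier limit is used, a fee bump alone moves both sides of the comparison, which is why the wrapper below skips most `AtUpdateFeeOutbound` cases as impractical.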
- for _ in 0..20 { + for _ in 0..AT_FEE_OUTBOUND_HTLCS { let (_, payment_hash, payment_secret) = get_payment_preimage_hash(&nodes[1], Some(1_000), None); nodes[0].node.send_payment_with_route(&route, payment_hash, RecipientOnionFields::secret_only(payment_secret), PaymentId(payment_hash.0)).unwrap(); @@ -10054,27 +10120,123 @@ fn do_test_max_dust_htlc_exposure(dust_outbound_balance: bool, exposure_breach_e added_monitors.clear(); } -fn do_test_max_dust_htlc_exposure_by_threshold_type(multiplier_dust_limit: bool) { - do_test_max_dust_htlc_exposure(true, ExposureEvent::AtHTLCForward, true, multiplier_dust_limit); - do_test_max_dust_htlc_exposure(false, ExposureEvent::AtHTLCForward, true, multiplier_dust_limit); - do_test_max_dust_htlc_exposure(false, ExposureEvent::AtHTLCReception, true, multiplier_dust_limit); - do_test_max_dust_htlc_exposure(false, ExposureEvent::AtHTLCReception, false, multiplier_dust_limit); - do_test_max_dust_htlc_exposure(true, ExposureEvent::AtHTLCForward, false, multiplier_dust_limit); - do_test_max_dust_htlc_exposure(true, ExposureEvent::AtHTLCReception, false, multiplier_dust_limit); - do_test_max_dust_htlc_exposure(true, ExposureEvent::AtHTLCReception, true, multiplier_dust_limit); - do_test_max_dust_htlc_exposure(false, ExposureEvent::AtHTLCForward, false, multiplier_dust_limit); - do_test_max_dust_htlc_exposure(true, ExposureEvent::AtUpdateFeeOutbound, true, multiplier_dust_limit); - do_test_max_dust_htlc_exposure(true, ExposureEvent::AtUpdateFeeOutbound, false, multiplier_dust_limit); - do_test_max_dust_htlc_exposure(false, ExposureEvent::AtUpdateFeeOutbound, false, multiplier_dust_limit); - do_test_max_dust_htlc_exposure(false, ExposureEvent::AtUpdateFeeOutbound, true, multiplier_dust_limit); +fn do_test_max_dust_htlc_exposure_by_threshold_type(multiplier_dust_limit: bool, apply_excess_fee: bool) { + do_test_max_dust_htlc_exposure(true, ExposureEvent::AtHTLCForward, true, multiplier_dust_limit, apply_excess_fee); + do_test_max_dust_htlc_exposure(false, ExposureEvent::AtHTLCForward, true, multiplier_dust_limit, apply_excess_fee); + do_test_max_dust_htlc_exposure(false, ExposureEvent::AtHTLCReception, true, multiplier_dust_limit, apply_excess_fee); + do_test_max_dust_htlc_exposure(false, ExposureEvent::AtHTLCReception, false, multiplier_dust_limit, apply_excess_fee); + do_test_max_dust_htlc_exposure(true, ExposureEvent::AtHTLCForward, false, multiplier_dust_limit, apply_excess_fee); + do_test_max_dust_htlc_exposure(true, ExposureEvent::AtHTLCReception, false, multiplier_dust_limit, apply_excess_fee); + do_test_max_dust_htlc_exposure(true, ExposureEvent::AtHTLCReception, true, multiplier_dust_limit, apply_excess_fee); + do_test_max_dust_htlc_exposure(false, ExposureEvent::AtHTLCForward, false, multiplier_dust_limit, apply_excess_fee); + if !multiplier_dust_limit && !apply_excess_fee { + // Because non-dust HTLC transaction fees are included in the dust exposure, trying to + // increase the fee to hit a higher dust exposure with a + // `MaxDustHTLCExposure::FeeRateMultiplier` is no longer super practical, so we skip these + // in the `multiplier_dust_limit` case. 
+ do_test_max_dust_htlc_exposure(true, ExposureEvent::AtUpdateFeeOutbound, true, multiplier_dust_limit, apply_excess_fee); + do_test_max_dust_htlc_exposure(true, ExposureEvent::AtUpdateFeeOutbound, false, multiplier_dust_limit, apply_excess_fee); + do_test_max_dust_htlc_exposure(false, ExposureEvent::AtUpdateFeeOutbound, false, multiplier_dust_limit, apply_excess_fee); + do_test_max_dust_htlc_exposure(false, ExposureEvent::AtUpdateFeeOutbound, true, multiplier_dust_limit, apply_excess_fee); + } } #[test] fn test_max_dust_htlc_exposure() { - do_test_max_dust_htlc_exposure_by_threshold_type(false); - do_test_max_dust_htlc_exposure_by_threshold_type(true); + do_test_max_dust_htlc_exposure_by_threshold_type(false, false); + do_test_max_dust_htlc_exposure_by_threshold_type(false, true); + do_test_max_dust_htlc_exposure_by_threshold_type(true, false); + do_test_max_dust_htlc_exposure_by_threshold_type(true, true); +} + +#[test] +fn test_nondust_htlc_fees_are_dust() { + // Test that the transaction fees paid in nondust HTLCs count towards our dust limit + let chanmon_cfgs = create_chanmon_cfgs(3); + let node_cfgs = create_node_cfgs(3, &chanmon_cfgs); + + let mut config = test_default_channel_config(); + // Set the dust limit to the default value + config.channel_config.max_dust_htlc_exposure = + MaxDustHTLCExposure::FeeRateMultiplier(10_000); + // Make sure the HTLC limits don't get in the way + config.channel_handshake_limits.min_max_accepted_htlcs = 400; + config.channel_handshake_config.our_max_accepted_htlcs = 400; + config.channel_handshake_config.our_htlc_minimum_msat = 1; + + let node_chanmgrs = create_node_chanmgrs(3, &node_cfgs, &[Some(config), Some(config), Some(config)]); + let nodes = create_network(3, &node_cfgs, &node_chanmgrs); + + // Create a channel from 1 -> 0 but immediately push all of the funds towards 0 + let chan_id_1 = create_announced_chan_between_nodes(&nodes, 1, 0).2; + while nodes[1].node.list_channels()[0].next_outbound_htlc_limit_msat > 0 { + send_payment(&nodes[1], &[&nodes[0]], nodes[1].node.list_channels()[0].next_outbound_htlc_limit_msat); + } + + // First get the channel one HTLC_VALUE HTLC away from the dust limit by sending dust HTLCs + // repeatedly until we run out of space. + const HTLC_VALUE: u64 = 1_000_000; // Doesn't matter, tune until the test passes + let payment_preimage = route_payment(&nodes[0], &[&nodes[1]], HTLC_VALUE).0; + + while nodes[0].node.list_channels()[0].next_outbound_htlc_minimum_msat == 0 { + route_payment(&nodes[0], &[&nodes[1]], HTLC_VALUE); + } + assert_ne!(nodes[0].node.list_channels()[0].next_outbound_htlc_limit_msat, 0, + "We don't want to run out of ability to send because of some non-dust limit"); + assert!(nodes[0].node.list_channels()[0].pending_outbound_htlcs.len() < 10, + "We should be able to fill our dust limit without too many HTLCs"); + + let dust_limit = nodes[0].node.list_channels()[0].next_outbound_htlc_minimum_msat; + claim_payment(&nodes[0], &[&nodes[1]], payment_preimage); + assert_ne!(nodes[0].node.list_channels()[0].next_outbound_htlc_minimum_msat, 0, + "Make sure we are able to send once we clear one HTLC"); + + // At this point we have somewhere between dust_limit and dust_limit * 2 left in our dust + // exposure limit, and we want to max that out using non-dust HTLCs. 
+ let commitment_tx_per_htlc_cost = + htlc_success_tx_weight(&ChannelTypeFeatures::empty()) * 253; + let max_htlcs_remaining = dust_limit * 2 / commitment_tx_per_htlc_cost; + assert!(max_htlcs_remaining < 30, + "We should be able to fill our dust limit without too many HTLCs"); + for i in 0..max_htlcs_remaining + 1 { + assert_ne!(i, max_htlcs_remaining); + if nodes[0].node.list_channels()[0].next_outbound_htlc_limit_msat < dust_limit { + // We found our limit, and it was less than max_htlcs_remaining! + // At this point we can only send dust HTLCs as any non-dust HTLCs will overuse our + // remaining dust exposure. + break; + } + route_payment(&nodes[0], &[&nodes[1]], dust_limit * 2); + } + + // At this point non-dust HTLCs are no longer accepted from node 0 -> 1, we also check that + // such HTLCs can't be routed over the same channel either. + create_announced_chan_between_nodes(&nodes, 2, 0); + let (route, payment_hash, _, payment_secret) = + get_route_and_payment_hash!(nodes[2], nodes[1], dust_limit * 2); + let onion = RecipientOnionFields::secret_only(payment_secret); + nodes[2].node.send_payment_with_route(&route, payment_hash, onion, PaymentId([0; 32])).unwrap(); + check_added_monitors(&nodes[2], 1); + let send = SendEvent::from_node(&nodes[2]); + + nodes[0].node.handle_update_add_htlc(&nodes[2].node.get_our_node_id(), &send.msgs[0]); + commitment_signed_dance!(nodes[0], nodes[2], send.commitment_msg, false, true); + + expect_pending_htlcs_forwardable!(nodes[0]); + check_added_monitors(&nodes[0], 1); + let node_id_1 = nodes[1].node.get_our_node_id(); + expect_htlc_handling_failed_destinations!( + nodes[0].node.get_and_clear_pending_events(), + &[HTLCDestination::NextHopChannel { node_id: Some(node_id_1), channel_id: chan_id_1 }] + ); + + let fail = get_htlc_update_msgs(&nodes[0], &nodes[2].node.get_our_node_id()); + nodes[2].node.handle_update_fail_htlc(&nodes[0].node.get_our_node_id(), &fail.update_fail_htlcs[0]); + commitment_signed_dance!(nodes[2], nodes[0], fail.commitment_signed, false); + expect_payment_failed_conditions(&nodes[2], payment_hash, false, PaymentFailedConditions::new()); } + #[test] fn test_non_final_funding_tx() { let chanmon_cfgs = create_chanmon_cfgs(2); diff --git a/lightning/src/ln/functional_tests_splice.rs b/lightning/src/ln/functional_tests_splice.rs new file mode 100644 index 00000000000..b04571e6146 --- /dev/null +++ b/lightning/src/ln/functional_tests_splice.rs @@ -0,0 +1,1382 @@ +// This file is Copyright its original authors, visible in version control +// history. +// +// This file is licensed under the Apache License, Version 2.0 or the MIT license +// , at your option. +// You may not use this file except in accordance with one or both of these +// licenses. + +//! Tests that test standing up a network of ChannelManagers, creating channels, sending +//! payments/messages between them, and often checking the resulting ChannelMonitors are able to +//! claim outputs on-chain. 
+ +use crate::events::{Event, MessageSendEvent, MessageSendEventsProvider}; +use crate::ln::ChannelId; +use crate::ln::channel::ChannelPhase; +use crate::ln::functional_test_utils::*; +use crate::ln::msgs::ChannelMessageHandler; +use crate::util::ser::Writeable; +use crate::util::config::{ChannelHandshakeConfig, UserConfig}; +use crate::prelude::*; +use crate::chain::chaininterface::{ConfirmationTarget, FeeEstimator}; + +use bitcoin::{Transaction, TxOut, Witness}; +use bitcoin::blockdata::opcodes; +use bitcoin::blockdata::script::{Builder, ScriptBuf}; +use bitcoin::hash_types::Txid; +use bitcoin::secp256k1::{Message, PublicKey, Secp256k1, SecretKey}; +use bitcoin::secp256k1::ecdsa::Signature; +use bitcoin::sighash::{EcdsaSighashType, SighashCache}; + +use hex::DisplayHex; +use core::default::Default; + + +// Create a 2-of-2 multisig redeem script. Return the script, and the two keys in the order they appear in the script. +fn create_multisig_redeem_script(key1: &PublicKey, key2: &PublicKey) -> (ScriptBuf, PublicKey, PublicKey) { + let (smaller_key, larger_key) = if key1.serialize() < key2.serialize() { + (key1, key2) + } else { + (key2, key1) + }; + let script = Builder::new() + .push_opcode(opcodes::all::OP_PUSHNUM_2) + .push_slice(&smaller_key.serialize()) + .push_slice(&larger_key.serialize()) + .push_opcode(opcodes::all::OP_PUSHNUM_2) + .push_opcode(opcodes::all::OP_CHECKMULTISIG) + .into_script(); + (script, smaller_key.clone(), larger_key.clone()) +} + +// Create an output script for a 2-of-2 multisig. +fn create_multisig_output_script(key1: &PublicKey, key2: &PublicKey) -> ScriptBuf { + let (redeem_script, _k1, _k2) = create_multisig_redeem_script(key1, key2); + Builder::new() + .push_opcode(opcodes::all::OP_PUSHBYTES_0) + .push_slice(&AsRef::<[u8; 32]>::as_ref(&redeem_script.wscript_hash())) + .into_script() +} + +// Verify a 2-of-2 multisig redeem script. Return the same keys, but in the order as they appear in the script +fn verify_multisig_redeem_script(script: &Vec, exp_key_1: &PublicKey, exp_key_2: &PublicKey) -> (PublicKey, PublicKey) { + let (exp_script,exp_smaller_key, exp_larger_key) = create_multisig_redeem_script(exp_key_1, exp_key_2); + assert_eq!(script.as_hex().to_string(), exp_script.as_bytes().as_hex().to_string()); + (exp_smaller_key, exp_larger_key) +} + +// Verify a 2-of-2 multisig output script. 
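A side note on `create_multisig_output_script` above: the hand-assembled `OP_0 <32-byte witness script hash>` output should be byte-for-byte what the bitcoin crate's P2WSH constructor produces. A shorter, hypothetical variant (assuming the same rust-bitcoin version these tests already import) could look like this:

```rust
use bitcoin::blockdata::script::ScriptBuf;

// Hypothetical alternative to the hand-rolled builder above: wrap the 2-of-2
// redeem script in a P2WSH output via the crate helper.
fn p2wsh_output_script(redeem_script: &ScriptBuf) -> ScriptBuf {
    ScriptBuf::new_v0_p2wsh(&redeem_script.wscript_hash())
}
```

Keeping the manual builder in the test arguably documents the script layout more explicitly, so this is offered only as a cross-check.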
+fn verify_multisig_output_script(script: &ScriptBuf, exp_key_1: &PublicKey, exp_key_2: &PublicKey) { + let exp_script = create_multisig_output_script(exp_key_1, exp_key_2); + assert_eq!(script.to_hex_string(), exp_script.to_hex_string()); +} + +// Get the funding key of a node towards another node +fn get_funding_key(node: &Node, counterparty_node: &Node, channel_id: &ChannelId) -> PublicKey { + let per_peer_state = node.node.per_peer_state.read().unwrap(); + let chan_lock = per_peer_state.get(&counterparty_node.node.get_our_node_id()).unwrap().lock().unwrap(); + let local_chan = chan_lock.channel_by_id.get(&channel_id).map( + |phase| match phase { + ChannelPhase::Funded(chan) => Some(chan), + ChannelPhase::RefundingV2((_, chans)) => chans.get_funded_channel(), + _ => None, + } + ).flatten().unwrap(); + local_chan.get_signer().as_ref().pubkeys().funding_pubkey +} + +/// Verify the funding output of a funding tx +fn verify_funding_output(funding_txo: &TxOut, funding_key_1: &PublicKey, funding_key_2: &PublicKey) { + let act_script = &funding_txo.script_pubkey; + verify_multisig_output_script(&act_script, funding_key_1, funding_key_2); +} + +/// Do checks on a funding tx +fn verify_funding_tx(funding_tx: &Transaction, value: u64, funding_key_1: &PublicKey, funding_key_2: &PublicKey) { + // find the output with the given value + let mut funding_output_opt: Option<&TxOut> = None; + for o in &funding_tx.output { + if o.value == value { + funding_output_opt = Some(o); + } + } + if funding_output_opt.is_none() { + panic!("Funding output not found, no output with value {}", value); + } + verify_funding_output(funding_output_opt.unwrap(), funding_key_1, funding_key_2) +} + +/// Simple end-to-end open channel flow, with close, and verification checks. +/// The steps are mostly on ChannelManager level. +#[test] +fn test_channel_open_and_close() { + // Set up a network of 2 nodes + let cfg = UserConfig { + channel_handshake_config: ChannelHandshakeConfig { + announced_channel: true, + ..Default::default() + }, + ..Default::default() + }; + let chanmon_cfgs = create_chanmon_cfgs(2); + let node_cfgs = create_node_cfgs(2, &chanmon_cfgs); + let node_chanmgrs = create_node_chanmgrs(2, &node_cfgs, &[None, Some(cfg)]); + let nodes = create_network(2, &node_cfgs, &node_chanmgrs); + + // Initiator and Acceptor nodes. Order matters, we want the case when initiator pubkey is larger. 
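On the "order matters" comment above: BOLT-3 puts the lexicographically smaller of the two funding pubkeys first in the 2-of-2 script (compared by compressed serialization), which is exactly what `create_multisig_redeem_script` does, so which node's key lands first depends on the key bytes rather than on who initiated the channel. A tiny sketch of that ordering rule:

```rust
use bitcoin::secp256k1::PublicKey;

// BOLT-3 funding script ordering: the key with the smaller compressed
// serialization goes first, regardless of which side opened the channel.
fn funding_key_order(a: PublicKey, b: PublicKey) -> (PublicKey, PublicKey) {
    if a.serialize() <= b.serialize() { (a, b) } else { (b, a) }
}
```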
+ let initiator_node_index = 0; + let initiator_node = &nodes[initiator_node_index]; + let acceptor_node = &nodes[1]; + + // Instantiate channel parameters where we push the maximum msats given our funding satoshis + let channel_value_sat = 100000; // same as funding satoshis + let push_msat = 0; + + let expected_temporary_channel_id = "2f64bdc25fb91c69b6f15b6fc10b32eb773471e433681dc956d9267a4dda8c2b"; + let expected_funded_channel_id = "74c52ab4f11296d62b66a6dba9513b04a3e7fb5a09a30cee22fce7294ab55b7e"; + + // Have node0 initiate a channel to node1 with aforementioned parameters + let channel_id_temp1 = initiator_node.node.create_channel(acceptor_node.node.get_our_node_id(), channel_value_sat, push_msat, 42, None, None).unwrap(); + assert_eq!(channel_id_temp1.to_string(), expected_temporary_channel_id); + + // Extract the channel open message from node0 to node1 + let open_channel_message = get_event_msg!(initiator_node, MessageSendEvent::SendOpenChannel, acceptor_node.node.get_our_node_id()); + + let _res = acceptor_node.node.handle_open_channel(&initiator_node.node.get_our_node_id(), &open_channel_message.clone()); + // Extract the accept channel message from node1 to node0 + let accept_channel_message = get_event_msg!(acceptor_node, MessageSendEvent::SendAcceptChannel, initiator_node.node.get_our_node_id()); + let _res = initiator_node.node.handle_accept_channel(&acceptor_node.node.get_our_node_id(), &accept_channel_message.clone()); + // Note: FundingGenerationReady emitted, checked and used below + let (channel_id_temp2, funding_tx, _funding_output) = create_funding_transaction(&initiator_node, &acceptor_node.node.get_our_node_id(), channel_value_sat, 42); + assert_eq!(channel_id_temp2.to_string(), expected_temporary_channel_id); + assert_eq!(funding_tx.encode().len(), 55); + let expected_funding_tx = "0000000000010001a08601000000000022002034c0cc0ad0dd5fe61dcf7ef58f995e3d34f8dbd24aa2a6fae68fefe102bf025c00000000"; + assert_eq!(&funding_tx.encode().as_hex().to_string(), expected_funding_tx); + + // Funding transation created, provide it + let _res = initiator_node.node.funding_transaction_generated(&channel_id_temp1, &acceptor_node.node.get_our_node_id(), funding_tx.clone()).unwrap(); + + let funding_created_message = get_event_msg!(initiator_node, MessageSendEvent::SendFundingCreated, acceptor_node.node.get_our_node_id()); + assert_eq!(funding_created_message.temporary_channel_id.to_string(), expected_temporary_channel_id); + + let _res = acceptor_node.node.handle_funding_created(&initiator_node.node.get_our_node_id(), &funding_created_message); + + let funding_signed_message = get_event_msg!(acceptor_node, MessageSendEvent::SendFundingSigned, initiator_node.node.get_our_node_id()); + let _res = initiator_node.node.handle_funding_signed(&acceptor_node.node.get_our_node_id(), &funding_signed_message); + // Take new channel ID + let channel_id2 = funding_signed_message.channel_id; + assert_eq!(channel_id2.to_string(), expected_funded_channel_id); + + // Check that funding transaction has been broadcasted + assert_eq!(chanmon_cfgs[initiator_node_index].tx_broadcaster.txn_broadcasted.lock().unwrap().len(), 1); + let broadcasted_funding_tx = chanmon_cfgs[initiator_node_index].tx_broadcaster.txn_broadcasted.lock().unwrap()[0].clone(); + assert_eq!(broadcasted_funding_tx.encode().len(), 55); + assert_eq!(broadcasted_funding_tx.txid(), funding_tx.txid()); + assert_eq!(broadcasted_funding_tx.encode(), funding_tx.encode()); + assert_eq!(&broadcasted_funding_tx.encode().as_hex().to_string(), 
expected_funding_tx); + // // Check that funding transaction has been broadcasted on the acceptor side too + // assert_eq!(chanmon_cfgs[acceptor_node_index].tx_broadcaster.txn_broadcasted.lock().unwrap().len(), 1); + // let broadcasted_funding_tx_acc = chanmon_cfgs[acceptor_node_index].tx_broadcaster.txn_broadcasted.lock().unwrap()[0].clone(); + // assert_eq!(broadcasted_funding_tx_acc.encode().len(), 55); + // assert_eq!(broadcasted_funding_tx_acc.txid(), funding_tx.txid()); + // assert_eq!(&broadcasted_funding_tx_acc.encode().as_hex().to_string(), expected_funding_tx); + + check_added_monitors!(initiator_node, 1); + let _ev = get_event!(initiator_node, Event::ChannelPending); + check_added_monitors!(acceptor_node, 1); + let _ev = get_event!(acceptor_node, Event::ChannelPending); + + // Simulate confirmation of the funding tx + confirm_transaction(&initiator_node, &broadcasted_funding_tx); + let channel_ready_message = get_event_msg!(initiator_node, MessageSendEvent::SendChannelReady, acceptor_node.node.get_our_node_id()); + + confirm_transaction(&acceptor_node, &broadcasted_funding_tx); + let channel_ready_message2 = get_event_msg!(acceptor_node, MessageSendEvent::SendChannelReady, initiator_node.node.get_our_node_id()); + + let _res = acceptor_node.node.handle_channel_ready(&initiator_node.node.get_our_node_id(), &channel_ready_message); + let _ev = get_event!(acceptor_node, Event::ChannelReady); + let _announcement_signatures = get_event_msg!(acceptor_node, MessageSendEvent::SendAnnouncementSignatures, initiator_node.node.get_our_node_id()); + + let _res = initiator_node.node.handle_channel_ready(&acceptor_node.node.get_our_node_id(), &channel_ready_message2); + let _ev = get_event!(initiator_node, Event::ChannelReady); + let _announcement_signatures = get_event_msg!(initiator_node, MessageSendEvent::SendAnnouncementSignatures, acceptor_node.node.get_our_node_id()); + + // check channel capacity and other parameters + assert_eq!(initiator_node.node.list_channels().len(), 1); + let channel = &initiator_node.node.list_channels()[0]; + { + assert_eq!(channel.channel_id.to_string(), expected_funded_channel_id); + assert!(channel.is_usable); + assert!(channel.is_channel_ready); + assert_eq!(channel.channel_value_satoshis, channel_value_sat); + assert_eq!(channel.balance_msat, 1000 * channel_value_sat); + assert_eq!(channel.funding_txo.unwrap().txid, funding_tx.txid()); + assert_eq!(channel.confirmations.unwrap(), 10); + } + // do checks on the acceptor node as well (capacity, etc.) 
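+	// (the acceptor's balance is zero since push_msat was 0 at open; the initiator holds the full channel capacity)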
+ assert_eq!(acceptor_node.node.list_channels().len(), 1); + { + let channel = &acceptor_node.node.list_channels()[0]; + assert_eq!(channel.channel_id.to_string(), expected_funded_channel_id); + assert!(channel.is_usable); + assert!(channel.is_channel_ready); + assert_eq!(channel.channel_value_satoshis, channel_value_sat); + assert_eq!(channel.balance_msat, 0); + assert_eq!(channel.funding_txo.unwrap().txid, funding_tx.txid()); + assert_eq!(channel.confirmations.unwrap(), 10); + } + + // Verify the funding transaction + let initiator_funding_key = get_funding_key(&initiator_node, &acceptor_node, &channel.channel_id); + let acceptor_funding_key = get_funding_key(&acceptor_node, &initiator_node, &channel.channel_id); + + verify_funding_tx(&broadcasted_funding_tx, channel_value_sat, &initiator_funding_key, &acceptor_funding_key); + + // Channel is ready now for normal operation + + // close channel, cooperatively + initiator_node.node.close_channel(&channel_id2, &acceptor_node.node.get_our_node_id()).unwrap(); + let node0_shutdown_message = get_event_msg!(initiator_node, MessageSendEvent::SendShutdown, acceptor_node.node.get_our_node_id()); + acceptor_node.node.handle_shutdown(&initiator_node.node.get_our_node_id(), &node0_shutdown_message); + let nodes_1_shutdown = get_event_msg!(acceptor_node, MessageSendEvent::SendShutdown, initiator_node.node.get_our_node_id()); + initiator_node.node.handle_shutdown(&acceptor_node.node.get_our_node_id(), &nodes_1_shutdown); + let _ = get_event_msg!(initiator_node, MessageSendEvent::SendClosingSigned, acceptor_node.node.get_our_node_id()); +} + +/// End-to-end V2 open channel flow, with close, and verification checks. +/// The steps are mostly on ChannelManager level. +#[cfg(any(dual_funding, splicing))] +#[test] +fn test_channel_open_v2_and_close() { + // Set up a network of 2 nodes + let cfg = UserConfig { + channel_handshake_config: ChannelHandshakeConfig { + announced_channel: true, + ..Default::default() + }, + ..Default::default() + }; + let chanmon_cfgs = create_chanmon_cfgs(2); + let node_cfgs = create_node_cfgs(2, &chanmon_cfgs); + let node_chanmgrs = create_node_chanmgrs(2, &node_cfgs, &[None, Some(cfg)]); + let nodes = create_network(2, &node_cfgs, &node_chanmgrs); + + // Initiator and Acceptor nodes. Order matters, we want the case when initiator pubkey is larger. 
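+	// (the V2 flow below uses open_channel2/accept_channel2 and interactive transaction construction instead of funding_created/funding_signed)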
+ let initiator_node_index = 0; + let acceptor_node_index = 1; + let initiator_node = &nodes[initiator_node_index]; + let acceptor_node = &nodes[acceptor_node_index]; + + // Instantiate channel parameters where we push the maximum msats given our funding satoshis + let channel_value_sat = 100000; // same as funding satoshis + + let expected_temporary_channel_id = "b1a3942f261316385476c86d7f454062ceb06d2e37675f08c2fac76b8c3ddc5e"; + let expected_funded_channel_id = "0df1425050bb045209e23459ebb5f9c8f6f219dafb85e2ec59d5fe841f1c4463"; + + let extra_funding_input_sats = channel_value_sat + 35_000; + let custom_input_secret_key = SecretKey::from_slice(&[2; 32]).unwrap(); + let funding_inputs = vec![create_custom_dual_funding_input_with_pubkey(&initiator_node, extra_funding_input_sats, &PublicKey::from_secret_key(&Secp256k1::new(), &custom_input_secret_key))]; + // Have node0 initiate a channel to node1 with aforementioned parameters + let channel_id_temp1 = initiator_node.node.create_dual_funded_channel(acceptor_node.node.get_our_node_id(), channel_value_sat, funding_inputs, None, 42, None).unwrap(); + assert_eq!(channel_id_temp1.to_string(), expected_temporary_channel_id); + + // Extract the channel open message from node0 to node1 + let open_channel2_message = get_event_msg!(initiator_node, MessageSendEvent::SendOpenChannelV2, acceptor_node.node.get_our_node_id()); + assert_eq!(initiator_node.node.list_channels().len(), 1); + + let _res = acceptor_node.node.handle_open_channel_v2(&initiator_node.node.get_our_node_id(), &open_channel2_message.clone()); + // Extract the accept channel message from node1 to node0 + let accept_channel2_message = get_event_msg!(acceptor_node, MessageSendEvent::SendAcceptChannelV2, initiator_node.node.get_our_node_id()); + assert_eq!(accept_channel2_message.common_fields.temporary_channel_id.to_string(), expected_temporary_channel_id); + + let _res = initiator_node.node.handle_accept_channel_v2(&acceptor_node.node.get_our_node_id(), &accept_channel2_message.clone()); + + // Note: FundingInputsContributionReady event is no longer used + // Note: contribute_funding_inputs() call is no longer used + + // let events = acceptor_node.node.get_and_clear_pending_events(); + // println!("acceptor_node events: {}", events.len()); + // assert_eq!(events.len(), 0); + + // initiator_node will generate a TxAddInput message to kickstart the interactive transaction construction protocol + let tx_add_input_msg = get_event_msg!(&initiator_node, MessageSendEvent::SendTxAddInput, acceptor_node.node.get_our_node_id()); + + let _res = acceptor_node.node.handle_tx_add_input(&initiator_node.node.get_our_node_id(), &tx_add_input_msg); + let tx_complete_msg = get_event_msg!(acceptor_node, MessageSendEvent::SendTxComplete, initiator_node.node.get_our_node_id()); + + let _res = initiator_node.node.handle_tx_complete(&acceptor_node.node.get_our_node_id(), &tx_complete_msg); + + // First output, the funding tx + let tx_add_output_msg = get_event_msg!(&initiator_node, MessageSendEvent::SendTxAddOutput, acceptor_node.node.get_our_node_id()); + assert!(tx_add_output_msg.script.is_v0_p2wsh()); + assert_eq!(tx_add_output_msg.sats, channel_value_sat); + + let _res = acceptor_node.node.handle_tx_add_output(&initiator_node.node.get_our_node_id(), &tx_add_output_msg); + let tx_complete_msg = get_event_msg!(&acceptor_node, MessageSendEvent::SendTxComplete, initiator_node.node.get_our_node_id()); + + let _res = initiator_node.node.handle_tx_complete(&acceptor_node.node.get_our_node_id(), &tx_complete_msg); 
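+	// The acceptor contributes no inputs or outputs of its own; it only answers each addition with tx_complete.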
+ let tx_add_output_msg = get_event_msg!(&initiator_node, MessageSendEvent::SendTxAddOutput, acceptor_node.node.get_our_node_id()); + // Second output, change + let _actual_change_output = tx_add_output_msg.sats; + assert!(tx_add_output_msg.script.is_v0_p2wpkh()); + + let _res = acceptor_node.node.handle_tx_add_output(&initiator_node.node.get_our_node_id(), &tx_add_output_msg); + let tx_complete_msg = get_event_msg!(acceptor_node, MessageSendEvent::SendTxComplete, initiator_node.node.get_our_node_id()); + + initiator_node.node.handle_tx_complete(&acceptor_node.node.get_our_node_id(), &tx_complete_msg); + let msg_events = initiator_node.node.get_and_clear_pending_msg_events(); + assert_eq!(msg_events.len(), 2); + let tx_complete_msg = match msg_events[0] { + MessageSendEvent::SendTxComplete { ref node_id, ref msg } => { + assert_eq!(*node_id, acceptor_node.node.get_our_node_id()); + (*msg).clone() + }, + _ => panic!("Unexpected event"), + }; + let msg_commitment_signed_from_0 = match msg_events[1] { + MessageSendEvent::UpdateHTLCs { ref node_id, ref updates } => { + assert_eq!(*node_id, acceptor_node.node.get_our_node_id()); + updates.commitment_signed.clone() + }, + _ => panic!("Unexpected event"), + }; + let channel_id1 = if let Event::FundingTransactionReadyForSigning { + channel_id, + counterparty_node_id, + mut unsigned_transaction, + .. + } = get_event!(initiator_node, Event::FundingTransactionReadyForSigning) { + assert_eq!(channel_id.to_string(), expected_funded_channel_id); + assert_eq!(counterparty_node_id, acceptor_node.node.get_our_node_id()); + + // placeholder signature + let mut witness = Witness::new(); + witness.push([7; 72]); + unsigned_transaction.input[0].witness = witness; + + let _res = initiator_node.node.funding_transaction_signed(&channel_id, &counterparty_node_id, unsigned_transaction).unwrap(); + channel_id + } else { panic!(); }; + + let _res = acceptor_node.node.handle_tx_complete(&initiator_node.node.get_our_node_id(), &tx_complete_msg); + let msg_events = acceptor_node.node.get_and_clear_pending_msg_events(); + // First messsage is commitment_signed, second is tx_signatures (see below for more) + assert_eq!(msg_events.len(), 1); + let msg_commitment_signed_from_1 = match msg_events[0] { + MessageSendEvent::UpdateHTLCs { ref node_id, ref updates } => { + assert_eq!(*node_id, initiator_node.node.get_our_node_id()); + updates.commitment_signed.clone() + }, + _ => panic!("Unexpected event {:?}", msg_events[0]), + }; + + // Handle the initial commitment_signed exchange. Order is not important here. + acceptor_node.node.handle_commitment_signed(&initiator_node.node.get_our_node_id(), &msg_commitment_signed_from_0); + initiator_node.node.handle_commitment_signed(&acceptor_node.node.get_our_node_id(), &msg_commitment_signed_from_1); + check_added_monitors(&initiator_node, 1); + check_added_monitors(&acceptor_node, 1); + + // The initiator is the only party that contributed any inputs so they should definitely be the one to send tx_signatures + // only after receiving tx_signatures from the non-initiator in this case. 
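+	// (interactive-tx construction has the side with the lower contributed input total send tx_signatures first; the acceptor contributed no inputs here)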
+ let msg_events = initiator_node.node.get_and_clear_pending_msg_events(); + assert!(msg_events.is_empty()); + let tx_signatures_from_1 = get_event_msg!(acceptor_node, MessageSendEvent::SendTxSignatures, nodes[0].node.get_our_node_id()); + + let _res = initiator_node.node.handle_tx_signatures(&acceptor_node.node.get_our_node_id(), &tx_signatures_from_1); + let events_0 = initiator_node.node.get_and_clear_pending_events(); + assert_eq!(events_0.len(), 1); + match events_0[0] { + Event::ChannelPending{ ref channel_id, ref counterparty_node_id, ref is_splice, .. } => { + assert_eq!(channel_id.to_string(), expected_funded_channel_id); + assert_eq!(*counterparty_node_id, acceptor_node.node.get_our_node_id()); + assert!(!is_splice); + }, + _ => panic!("Unexpected event"), + } + let tx_signatures_from_0 = get_event_msg!(initiator_node, MessageSendEvent::SendTxSignatures, nodes[1].node.get_our_node_id()); + let _res = acceptor_node.node.handle_tx_signatures(&initiator_node.node.get_our_node_id(), &tx_signatures_from_0); + let events_1 = acceptor_node.node.get_and_clear_pending_events(); + assert_eq!(events_1.len(), 1); + match events_1[0] { + Event::ChannelPending{ ref channel_id, ref counterparty_node_id, ref is_splice, .. } => { + assert_eq!(channel_id.to_string(), expected_funded_channel_id); + assert_eq!(*counterparty_node_id, initiator_node.node.get_our_node_id()); + assert!(!is_splice); + }, + _ => panic!("Unexpected event"), + } + + // Check that funding transaction has been broadcasted + assert_eq!(chanmon_cfgs[initiator_node_index].tx_broadcaster.txn_broadcasted.lock().unwrap().len(), 1); + let broadcasted_funding_tx = chanmon_cfgs[initiator_node_index].tx_broadcaster.txn_broadcasted.lock().unwrap()[0].clone(); + let expected_funding_tx = "020000000001019c76affec45612929f824230eacc67dc7b3db1072c39d0e62f4f557a34e141fc000000000000000000021c88000000000000160014d5a9aa98b89acc215fc3d23d6fec0ad59ca3665fa08601000000000022002034c0cc0ad0dd5fe61dcf7ef58f995e3d34f8dbd24aa2a6fae68fefe102bf025c014807070707070707070707070707070707070707070707070707070707070707070707070707070707070707070707070707070707070707070707070707070707070707070707070700000000"; + assert_eq!(broadcasted_funding_tx.encode().len(), 201); + assert_eq!(&broadcasted_funding_tx.encode().as_hex().to_string(), expected_funding_tx); + // Check that funding transaction has been broadcasted on the acceptor side as well + assert_eq!(chanmon_cfgs[acceptor_node_index].tx_broadcaster.txn_broadcasted.lock().unwrap().len(), 1); + let broadcasted_funding_tx_acc = chanmon_cfgs[acceptor_node_index].tx_broadcaster.txn_broadcasted.lock().unwrap()[0].clone(); + assert_eq!(broadcasted_funding_tx_acc.encode().len(), 201); + assert_eq!(&broadcasted_funding_tx_acc.encode().as_hex().to_string(), expected_funding_tx); + + // check fees + let total_input = extra_funding_input_sats; + assert_eq!(broadcasted_funding_tx.output.len(), 2); + let total_output = broadcasted_funding_tx.output[0].value + broadcasted_funding_tx.output[1].value; + assert!(total_input > total_output); + let fee = total_input - total_output; + let target_fee_rate = chanmon_cfgs[0].fee_estimator.get_est_sat_per_1000_weight(ConfirmationTarget::NonAnchorChannelFee); // target is irrelevant + assert_eq!(target_fee_rate, 253); + assert_eq!(broadcasted_funding_tx.weight().to_wu(), 576); + let expected_minimum_fee = (broadcasted_funding_tx.weight().to_wu() as f64 * target_fee_rate as f64 / 1000 as f64).ceil() as u64; + let expected_maximum_fee = expected_minimum_fee * 3; + assert!(fee >= 
expected_minimum_fee);
+	assert!(fee <= expected_maximum_fee);
+
+	// Simulate confirmation of the funding tx
+	confirm_transaction(&initiator_node, &broadcasted_funding_tx);
+	let channel_ready_message = get_event_msg!(initiator_node, MessageSendEvent::SendChannelReady, acceptor_node.node.get_our_node_id());
+
+	confirm_transaction(&acceptor_node, &broadcasted_funding_tx);
+	let channel_ready_message2 = get_event_msg!(acceptor_node, MessageSendEvent::SendChannelReady, initiator_node.node.get_our_node_id());
+
+	let _res = acceptor_node.node.handle_channel_ready(&initiator_node.node.get_our_node_id(), &channel_ready_message);
+	let _ev = get_event!(acceptor_node, Event::ChannelReady);
+	let _announcement_signatures = get_event_msg!(acceptor_node, MessageSendEvent::SendAnnouncementSignatures, initiator_node.node.get_our_node_id());
+
+	let _res = initiator_node.node.handle_channel_ready(&acceptor_node.node.get_our_node_id(), &channel_ready_message2);
+	let _ev = get_event!(initiator_node, Event::ChannelReady);
+	let _announcement_signatures = get_event_msg!(initiator_node, MessageSendEvent::SendAnnouncementSignatures, acceptor_node.node.get_our_node_id());
+
+	// check channel capacity and other parameters
+	assert_eq!(initiator_node.node.list_channels().len(), 1);
+	let channel = &initiator_node.node.list_channels()[0];
+	{
+		assert_eq!(channel.channel_id.to_string(), expected_funded_channel_id);
+		assert!(channel.is_usable);
+		assert!(channel.is_channel_ready);
+		assert_eq!(channel.channel_value_satoshis, channel_value_sat);
+		assert_eq!(channel.balance_msat, 1000 * channel_value_sat);
+		assert_eq!(channel.confirmations.unwrap(), 10);
+	}
+	// do checks on the acceptor node as well (capacity, etc.)
+	assert_eq!(acceptor_node.node.list_channels().len(), 1);
+	{
+		let channel = &acceptor_node.node.list_channels()[0];
+		assert_eq!(channel.channel_id.to_string(), expected_funded_channel_id);
+		assert!(channel.is_usable);
+		assert!(channel.is_channel_ready);
+		assert_eq!(channel.balance_msat, 0);
+		assert_eq!(channel.confirmations.unwrap(), 10);
+	}
+
+	// Verify the funding transaction
+	let initiator_funding_key = get_funding_key(&initiator_node, &acceptor_node, &channel_id1);
+	let acceptor_funding_key = get_funding_key(&acceptor_node, &initiator_node, &channel_id1);
+
+	verify_funding_tx(&broadcasted_funding_tx, channel_value_sat, &initiator_funding_key, &acceptor_funding_key);
+
+	// Channel is ready now for normal operation
+
+	// close channel, cooperatively
+	initiator_node.node.close_channel(&channel_id1, &acceptor_node.node.get_our_node_id()).unwrap();
+	let node0_shutdown_message = get_event_msg!(initiator_node, MessageSendEvent::SendShutdown, acceptor_node.node.get_our_node_id());
+	acceptor_node.node.handle_shutdown(&initiator_node.node.get_our_node_id(), &node0_shutdown_message);
+	let nodes_1_shutdown = get_event_msg!(acceptor_node, MessageSendEvent::SendShutdown, initiator_node.node.get_our_node_id());
+	initiator_node.node.handle_shutdown(&acceptor_node.node.get_our_node_id(), &nodes_1_shutdown);
+	let _ = get_event_msg!(initiator_node, MessageSendEvent::SendClosingSigned, acceptor_node.node.get_our_node_id());
+}
+
+fn verify_signature(msg: &Vec<u8>, sig: &Vec<u8>, pubkey: &PublicKey) -> Result<(), String> {
+	let m = Message::from_slice(&msg).unwrap();
+	let s = Signature::from_der(&sig).unwrap();
+	let ctx = Secp256k1::new();
+	match ctx.verify_ecdsa(&m, &s, &pubkey) {
+		Ok(_) => Ok(()),
+		Err(e) => Err(format!("Signature verification failed! err {} msg {} sig {} pk {}", e, &msg.as_hex(), &sig.as_hex(), &pubkey.serialize().as_hex())),
+	}
+}
+
+/// #SPLICING
+/// Verify the previous funding input on a splicing funding transaction
+fn verify_splice_funding_input(splice_tx: &Transaction, prev_funding_txid: &Txid, prev_funding_value: u64, funding_key_1: &PublicKey, funding_key_2: &PublicKey) {
+	// check that the previous funding tx is an input
+	let mut prev_fund_input_idx: Option<usize> = None;
+	for idx in 0..splice_tx.input.len() {
+		if splice_tx.input[idx].previous_output.txid == *prev_funding_txid {
+			prev_fund_input_idx = Some(idx);
+		}
+	}
+	if prev_fund_input_idx.is_none() {
+		panic!("Splice tx should contain the previous funding tx as input! {} {}", prev_funding_txid, splice_tx.encode().as_hex());
+	}
+	let prev_fund_input = &splice_tx.input[prev_fund_input_idx.unwrap()];
+	let witness = &prev_fund_input.witness.to_vec();
+	let witness_count = witness.len();
+	let expected_witness_count = 4;
+	if witness_count != expected_witness_count {
+		panic!("Prev funding tx input should have {} witness elements! {} {}", expected_witness_count, witness_count, prev_fund_input_idx.unwrap());
+	}
+	if witness[0].len() != 0 {
+		panic!("First multisig witness should be empty! {}", witness[0].len());
+	}
+	// check witness 1&2, signatures
+	let wit1_sig = &witness[1];
+	let wit2_sig = &witness[2];
+	if wit1_sig.len() < 70 || wit1_sig.len() > 72 || wit2_sig.len() < 70 || wit2_sig.len() > 72 {
+		panic!("Witness entries 2&3 should be signatures! {} {}", wit1_sig.as_hex(), wit2_sig.as_hex());
+	}
+	if wit1_sig[wit1_sig.len()-1] != 1 || wit2_sig[wit2_sig.len()-1] != 1 {
+		panic!("Witness entries 2&3 should be signatures with SIGHASH_ALL! {} {}", wit1_sig.as_hex(), wit2_sig.as_hex());
+	}
+	let (script_key1, script_key2) = verify_multisig_redeem_script(&witness[3], funding_key_1, funding_key_2);
+	let redeemscript = ScriptBuf::from(witness[3].to_vec());
+	// check signatures, sigs are in same order as keys
+	let sighash = &SighashCache::new(splice_tx).segwit_signature_hash(prev_fund_input_idx.unwrap(), &redeemscript, prev_funding_value, EcdsaSighashType::All).unwrap()[..].to_vec();
+	let sig1 = wit1_sig[0..(wit1_sig.len()-1)].to_vec();
+	let sig2 = wit2_sig[0..(wit2_sig.len()-1)].to_vec();
+	if let Err(e1) = verify_signature(sighash, &sig1, &script_key1) {
+		panic!("Sig 1 check fails {}", e1);
+	}
+	if let Err(e2) = verify_signature(sighash, &sig2, &script_key2) {
+		panic!("Sig 2 check fails {}", e2);
+	}
+}
+
+/// #SPLICING
+/// Do checks on a splice funding tx
+fn verify_splice_funding_tx(splice_tx: &Transaction, prev_funding_txid: &Txid, funding_value: u64, prev_funding_value: u64, funding_key_1: &PublicKey, funding_key_2: &PublicKey) {
+	verify_splice_funding_input(splice_tx, prev_funding_txid, prev_funding_value, funding_key_1, funding_key_2);
+	verify_funding_tx(splice_tx, funding_value, funding_key_1, funding_key_2);
+}
+
+/// Splicing test, simple splice-in flow. Starts with opening a V1 channel first.
+/// Builds on test_channel_open_simple()
+#[test]
+fn test_v1_splice_in() {
+	// Set up a network of 2 nodes
+	let cfg = UserConfig {
+		channel_handshake_config: ChannelHandshakeConfig {
+			announced_channel: true,
+			..Default::default()
+		},
+		..Default::default()
+	};
+	let chanmon_cfgs = create_chanmon_cfgs(2);
+	let node_cfgs = create_node_cfgs(2, &chanmon_cfgs);
+	let node_chanmgrs = create_node_chanmgrs(2, &node_cfgs, &[None, Some(cfg)]);
+	let nodes = create_network(2, &node_cfgs, &node_chanmgrs);
+
+	// Initiator and Acceptor nodes
+	let initiator_node_index = 1;
+	let initiator_node = &nodes[initiator_node_index];
+	let acceptor_node = &nodes[0];
+
+	// Instantiate channel parameters where we push the maximum msats given our funding satoshis
+	let channel_value_sat = 100000; // same as funding satoshis
+	let push_msat = 0;
+
+	let expected_funded_channel_id = "c95d1eb6f3d0c5c74a398a6e9c2c0721afde1e30d23a70867362e9d6d8d04281";
+
+	// Have the initiator open a channel to the acceptor with aforementioned parameters
+	let channel_id_temp1 = initiator_node.node.create_channel(acceptor_node.node.get_our_node_id(), channel_value_sat, push_msat, 42, None, None).unwrap();
+
+	// Extract the channel open message from the initiator to the acceptor
+	let open_channel_message = get_event_msg!(initiator_node, MessageSendEvent::SendOpenChannel, acceptor_node.node.get_our_node_id());
+
+	let _res = acceptor_node.node.handle_open_channel(&initiator_node.node.get_our_node_id(), &open_channel_message.clone());
+	// Extract the accept channel message from the acceptor to the initiator
+	let accept_channel_message = get_event_msg!(acceptor_node, MessageSendEvent::SendAcceptChannel, initiator_node.node.get_our_node_id());
+	let _res = initiator_node.node.handle_accept_channel(&acceptor_node.node.get_our_node_id(), &accept_channel_message.clone());
+	// Note: FundingGenerationReady emitted, checked and used below
+	let (_channel_id_temp2, funding_tx, _funding_output) = create_funding_transaction(&initiator_node, &acceptor_node.node.get_our_node_id(), channel_value_sat, 42);
+
+	// Funding transaction created, provide it
+	let _res = initiator_node.node.funding_transaction_generated(&channel_id_temp1, &acceptor_node.node.get_our_node_id(), funding_tx.clone()).unwrap();
+
+	let funding_created_message = get_event_msg!(initiator_node, MessageSendEvent::SendFundingCreated, acceptor_node.node.get_our_node_id());
+
+	let _res = acceptor_node.node.handle_funding_created(&initiator_node.node.get_our_node_id(), &funding_created_message);
+
+	assert_eq!(initiator_node.node.list_channels().len(), 1);
+	{
+		let channel = &initiator_node.node.list_channels()[0];
+		assert!(!channel.is_channel_ready);
+	}
+	// do checks on the acceptor node as well (capacity, etc.)
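+	// (funding_signed has not been exchanged yet, so neither side reports the channel as ready)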
+ assert_eq!(acceptor_node.node.list_channels().len(), 1); + { + let channel = &acceptor_node.node.list_channels()[0]; + assert!(!channel.is_channel_ready); + } + + let funding_signed_message = get_event_msg!(acceptor_node, MessageSendEvent::SendFundingSigned, initiator_node.node.get_our_node_id()); + let _res = initiator_node.node.handle_funding_signed(&acceptor_node.node.get_our_node_id(), &funding_signed_message); + // Take new channel ID + let channel_id2 = funding_signed_message.channel_id; + assert_eq!(channel_id2.to_string(), expected_funded_channel_id); + + // Check that funding transaction has been broadcasted + assert_eq!(chanmon_cfgs[initiator_node_index].tx_broadcaster.txn_broadcasted.lock().unwrap().len(), 1); + let broadcasted_funding_tx = chanmon_cfgs[initiator_node_index].tx_broadcaster.txn_broadcasted.lock().unwrap()[0].clone(); + + check_added_monitors!(initiator_node, 1); + let _ev = get_event!(initiator_node, Event::ChannelPending); + check_added_monitors!(acceptor_node, 1); + let _ev = get_event!(acceptor_node, Event::ChannelPending); + + // Simulate confirmation of the funding tx + confirm_transaction(&initiator_node, &broadcasted_funding_tx); + let channel_ready_message = get_event_msg!(initiator_node, MessageSendEvent::SendChannelReady, acceptor_node.node.get_our_node_id()); + + confirm_transaction(&acceptor_node, &broadcasted_funding_tx); + let channel_ready_message2 = get_event_msg!(acceptor_node, MessageSendEvent::SendChannelReady, initiator_node.node.get_our_node_id()); + + let _res = acceptor_node.node.handle_channel_ready(&initiator_node.node.get_our_node_id(), &channel_ready_message); + let _ev = get_event!(acceptor_node, Event::ChannelReady); + let _announcement_signatures = get_event_msg!(acceptor_node, MessageSendEvent::SendAnnouncementSignatures, initiator_node.node.get_our_node_id()); + + let _res = initiator_node.node.handle_channel_ready(&acceptor_node.node.get_our_node_id(), &channel_ready_message2); + let _ev = get_event!(initiator_node, Event::ChannelReady); + let _announcement_signatures = get_event_msg!(initiator_node, MessageSendEvent::SendAnnouncementSignatures, acceptor_node.node.get_our_node_id()); + + // check channel capacity and other parameters + assert_eq!(initiator_node.node.list_channels().len(), 1); + { + let channel = &initiator_node.node.list_channels()[0]; + assert!(channel.is_channel_ready); + assert_eq!(channel.channel_value_satoshis, channel_value_sat); + assert_eq!(channel.balance_msat, 1000 * channel_value_sat); + assert_eq!(channel.funding_txo.unwrap().txid, funding_tx.txid()); + assert_eq!(channel.confirmations.unwrap(), 10); + } + // do checks on the acceptor node as well (capacity, etc.) 
+ assert_eq!(acceptor_node.node.list_channels().len(), 1); + { + let channel = &acceptor_node.node.list_channels()[0]; + assert!(channel.is_channel_ready); + assert_eq!(channel.channel_value_satoshis, channel_value_sat); + assert_eq!(channel.balance_msat, 0); + assert_eq!(channel.funding_txo.unwrap().txid, funding_tx.txid()); + assert_eq!(channel.confirmations.unwrap(), 10); + } + + // ==== Channel is now ready for normal operation + + // === Start of Splicing + println!("Start of Splicing ..., channel_id {}", channel_id2); + + // Amount being added to the channel through the splice-in + let splice_in_sats: u64 = 20000; + let _post_splice_channel_value = channel_value_sat + splice_in_sats; + let funding_feerate_perkw = 1024; // TODO + let locktime = 0; // TODO + + // Initiate splice-in (on node0) + let res_error = initiator_node.node.splice_channel(&channel_id2, &acceptor_node.node.get_our_node_id(), splice_in_sats as i64, Vec::new(), funding_feerate_perkw, locktime); + assert!(res_error.is_err()); + assert_eq!(format!("{:?}", res_error.err().unwrap())[..53].to_string(), "Misuse error: Channel ID would change during splicing".to_string()); + + // no change + assert_eq!(initiator_node.node.list_channels().len(), 1); + { + let channel = &initiator_node.node.list_channels()[0]; + assert!(channel.is_channel_ready); + assert_eq!(channel.channel_value_satoshis, channel_value_sat); + assert_eq!(channel.balance_msat, 1000 * channel_value_sat); + assert_eq!(channel.funding_txo.unwrap().txid, funding_tx.txid()); + } + + // === End of Splicing + + // === Close channel, cooperatively + initiator_node.node.close_channel(&channel_id2, &acceptor_node.node.get_our_node_id()).unwrap(); + let node0_shutdown_message = get_event_msg!(initiator_node, MessageSendEvent::SendShutdown, acceptor_node.node.get_our_node_id()); + acceptor_node.node.handle_shutdown(&initiator_node.node.get_our_node_id(), &node0_shutdown_message); + let nodes_1_shutdown = get_event_msg!(acceptor_node, MessageSendEvent::SendShutdown, initiator_node.node.get_our_node_id()); + initiator_node.node.handle_shutdown(&acceptor_node.node.get_our_node_id(), &nodes_1_shutdown); + let _ = get_event_msg!(initiator_node, MessageSendEvent::SendClosingSigned, acceptor_node.node.get_our_node_id()); +} + +// TODO: Test with 2nd splice (open, splice, splice) + +/// Generic test: Open a V2 channel, optionally do a payment, perform a splice-in, +/// optionally do a payment, +/// The steps are on ChannelManager level. +/// Builds on test_channel_open_v2_and_close() +fn test_splice_in_with_optional_payments( + do_payment_pre_splice: bool, + do_payment_post_splice: bool, + do_payment_pending_splice: bool, + expected_pre_funding_txid: &str, + expected_splice_funding_txid: &str, + expected_post_funding_tx: &str, + expect_inputs_in_reverse: bool, +) { + // Set up a network of 2 nodes + let cfg = UserConfig { + channel_handshake_config: ChannelHandshakeConfig { + announced_channel: true, + ..Default::default() + }, + ..Default::default() + }; + let chanmon_cfgs = create_chanmon_cfgs(2); + let node_cfgs = create_node_cfgs(2, &chanmon_cfgs); + let node_chanmgrs = create_node_chanmgrs(2, &node_cfgs, &[None, Some(cfg)]); + let nodes = create_network(2, &node_cfgs, &node_chanmgrs); + + // Initiator and Acceptor nodes. Order matters, we want the case when initiator pubkey is larger. 
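+	// (as in the V2 open test above, only the initiator contributes a funding input; the acceptor accepts without contributing)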
+ let initiator_node_index = 0; + let acceptor_node_index = 1; + let initiator_node = &nodes[initiator_node_index]; + let acceptor_node = &nodes[acceptor_node_index]; + + // Instantiate channel parameters where we push the maximum msats given our funding satoshis + let channel_value_sat = 100000; // same as funding satoshis + + let expected_temporary_channel_id = "b1a3942f261316385476c86d7f454062ceb06d2e37675f08c2fac76b8c3ddc5e"; + let expected_funded_channel_id = "0df1425050bb045209e23459ebb5f9c8f6f219dafb85e2ec59d5fe841f1c4463"; + + let extra_funding_input_sats = channel_value_sat + 35_000; + let custom_input_secret_key = SecretKey::from_slice(&[2; 32]).unwrap(); + let custom_input_pubkey = PublicKey::from_secret_key(&Secp256k1::new(), &custom_input_secret_key); + let funding_inputs = vec![create_custom_dual_funding_input_with_pubkey(&initiator_node, extra_funding_input_sats, &custom_input_pubkey)]; + // Have node0 initiate a channel to node1 with aforementioned parameters + let channel_id_temp1 = initiator_node.node.create_dual_funded_channel(acceptor_node.node.get_our_node_id(), channel_value_sat, funding_inputs, None, 42, None).unwrap(); + assert_eq!(channel_id_temp1.to_string(), expected_temporary_channel_id); + + // Extract the channel open message from node0 to node1 + let open_channel2_message = get_event_msg!(initiator_node, MessageSendEvent::SendOpenChannelV2, acceptor_node.node.get_our_node_id()); + assert_eq!(initiator_node.node.list_channels().len(), 1); + + let _res = acceptor_node.node.handle_open_channel_v2(&initiator_node.node.get_our_node_id(), &open_channel2_message.clone()); + // Extract the accept channel message from node1 to node0 + let accept_channel2_message = get_event_msg!(acceptor_node, MessageSendEvent::SendAcceptChannelV2, initiator_node.node.get_our_node_id()); + assert_eq!(accept_channel2_message.common_fields.temporary_channel_id.to_string(), expected_temporary_channel_id); + + let _res = initiator_node.node.handle_accept_channel_v2(&acceptor_node.node.get_our_node_id(), &accept_channel2_message.clone()); + + // Note: FundingInputsContributionReady event is no longer used + // Note: contribute_funding_inputs() call is no longer used + + // initiator_node will generate a TxAddInput message to kickstart the interactive transaction construction protocol + let tx_add_input_msg = get_event_msg!(&initiator_node, MessageSendEvent::SendTxAddInput, acceptor_node.node.get_our_node_id()); + + let _res = acceptor_node.node.handle_tx_add_input(&initiator_node.node.get_our_node_id(), &tx_add_input_msg); + let tx_complete_msg = get_event_msg!(acceptor_node, MessageSendEvent::SendTxComplete, initiator_node.node.get_our_node_id()); + + let _res = initiator_node.node.handle_tx_complete(&acceptor_node.node.get_our_node_id(), &tx_complete_msg); + + // First output, the new funding tx + let tx_add_output_msg = get_event_msg!(&initiator_node, MessageSendEvent::SendTxAddOutput, acceptor_node.node.get_our_node_id()); + assert_eq!(tx_add_output_msg.sats, channel_value_sat); + + let _res = acceptor_node.node.handle_tx_add_output(&initiator_node.node.get_our_node_id(), &tx_add_output_msg); + let tx_complete_msg = get_event_msg!(&acceptor_node, MessageSendEvent::SendTxComplete, initiator_node.node.get_our_node_id()); + + // Second output, change + let _res = initiator_node.node.handle_tx_complete(&acceptor_node.node.get_our_node_id(), &tx_complete_msg); + let tx_add_output2_msg = get_event_msg!(&initiator_node, MessageSendEvent::SendTxAddOutput, acceptor_node.node.get_our_node_id()); 
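+	// (this second tx_add_output is the initiator's change; once the acceptor acknowledges it the construction can complete)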
+ + let _res = acceptor_node.node.handle_tx_add_output(&initiator_node.node.get_our_node_id(), &tx_add_output2_msg); + let tx_complete_msg = get_event_msg!(acceptor_node, MessageSendEvent::SendTxComplete, initiator_node.node.get_our_node_id()); + + initiator_node.node.handle_tx_complete(&acceptor_node.node.get_our_node_id(), &tx_complete_msg); + let msg_events = initiator_node.node.get_and_clear_pending_msg_events(); + assert_eq!(msg_events.len(), 2); + assert_event_type!(msg_events[0], MessageSendEvent::SendTxComplete); + assert_event_type!(msg_events[1], MessageSendEvent::UpdateHTLCs); + let msg_commitment_signed_from_0 = match msg_events[1] { + MessageSendEvent::UpdateHTLCs { ref updates, .. } => { + updates.commitment_signed.clone() + }, + _ => panic!("Unexpected event"), + }; + let channel_id1 = if let Event::FundingTransactionReadyForSigning { + channel_id, + counterparty_node_id, + mut unsigned_transaction, + .. + } = get_event!(initiator_node, Event::FundingTransactionReadyForSigning) { + assert_eq!(channel_id.to_string(), expected_funded_channel_id); + // Placeholder for signature on the contributed input + let mut witness = Witness::new(); + witness.push([7; 72]); + unsigned_transaction.input[0].witness = witness; + let _res = initiator_node.node.funding_transaction_signed(&channel_id, &counterparty_node_id, unsigned_transaction).unwrap(); + channel_id + } else { panic!(); }; + + let _res = acceptor_node.node.handle_tx_complete(&initiator_node.node.get_our_node_id(), &tx_complete_msg); + let msg_events = acceptor_node.node.get_and_clear_pending_msg_events(); + // First messsage is commitment_signed, second is tx_signatures (see below for more) + assert_eq!(msg_events.len(), 1); + let msg_commitment_signed_from_1 = match msg_events[0] { + MessageSendEvent::UpdateHTLCs { ref updates, .. } => { + updates.commitment_signed.clone() + }, + _ => panic!("Unexpected event {:?}", msg_events[0]), + }; + + // Handle the initial commitment_signed exchange. Order is not important here. + acceptor_node.node.handle_commitment_signed(&initiator_node.node.get_our_node_id(), &msg_commitment_signed_from_0); + initiator_node.node.handle_commitment_signed(&acceptor_node.node.get_our_node_id(), &msg_commitment_signed_from_1); + check_added_monitors(&initiator_node, 1); + check_added_monitors(&acceptor_node, 1); + + // The initiator is the only party that contributed any inputs so they should definitely be the one to send tx_signatures + // only after receiving tx_signatures from the non-initiator in this case. 
+ let msg_events = initiator_node.node.get_and_clear_pending_msg_events(); + assert!(msg_events.is_empty()); + let tx_signatures_from_1 = get_event_msg!(acceptor_node, MessageSendEvent::SendTxSignatures, initiator_node.node.get_our_node_id()); + + let _res = initiator_node.node.handle_tx_signatures(&acceptor_node.node.get_our_node_id(), &tx_signatures_from_1); + get_event!(initiator_node, Event::ChannelPending); + let tx_signatures_from_0 = get_event_msg!(initiator_node, MessageSendEvent::SendTxSignatures, acceptor_node.node.get_our_node_id()); + let _res = acceptor_node.node.handle_tx_signatures(&initiator_node.node.get_our_node_id(), &tx_signatures_from_0); + get_event!(acceptor_node, Event::ChannelPending); + + // Check that funding transaction has been broadcasted + assert_eq!(chanmon_cfgs[initiator_node_index].tx_broadcaster.txn_broadcasted.lock().unwrap().len(), 1); + let broadcasted_funding_tx = chanmon_cfgs[initiator_node_index].tx_broadcaster.txn_broadcasted.lock().unwrap()[0].clone(); + assert_eq!(broadcasted_funding_tx.encode().len(), 201); + assert_eq!(broadcasted_funding_tx.txid().to_string(), expected_pre_funding_txid); + + // Simulate confirmation of the funding tx + confirm_transaction(&initiator_node, &broadcasted_funding_tx); + let channel_ready_message1 = get_event_msg!(initiator_node, MessageSendEvent::SendChannelReady, acceptor_node.node.get_our_node_id()); + + confirm_transaction(&acceptor_node, &broadcasted_funding_tx); + let channel_ready_message2 = get_event_msg!(acceptor_node, MessageSendEvent::SendChannelReady, initiator_node.node.get_our_node_id()); + + let _res = acceptor_node.node.handle_channel_ready(&initiator_node.node.get_our_node_id(), &channel_ready_message1); + let _ev = get_event!(acceptor_node, Event::ChannelReady); + let _announcement_signatures2 = get_event_msg!(acceptor_node, MessageSendEvent::SendAnnouncementSignatures, initiator_node.node.get_our_node_id()); + + let _res = initiator_node.node.handle_channel_ready(&acceptor_node.node.get_our_node_id(), &channel_ready_message2); + let _ev = get_event!(initiator_node, Event::ChannelReady); + let _announcement_signatures1 = get_event_msg!(initiator_node, MessageSendEvent::SendAnnouncementSignatures, acceptor_node.node.get_our_node_id()); + + // let (announcement1, update1, update2) = create_chan_between_nodes_with_value_b(&initiator_node, &acceptor_node, &(channel_ready_message1, announcement_signatures1)); + // `update_nodes_with_chan_announce`(&nodes, initiator_node_index, acceptor_node_index, &announcement1, &update1, &update2); + + // Expected balances + let mut exp_balance1 = 1000 * channel_value_sat; + let mut exp_balance2 = 0; + + // check channel capacity and other parameters + assert_eq!(initiator_node.node.list_channels().len(), 1); + { + let channel = &initiator_node.node.list_channels()[0]; + assert_eq!(channel.channel_id.to_string(), expected_funded_channel_id); + assert!(channel.is_usable); + assert!(channel.is_channel_ready); + assert_eq!(channel.channel_value_satoshis, channel_value_sat); + assert_eq!(channel.balance_msat, exp_balance1); + assert_eq!(channel.confirmations.unwrap(), 10); + assert!(channel.funding_txo.is_some()); + } + // do checks on the acceptor node as well (capacity, etc.) 
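+	// (the acceptor starts with a zero balance; it only gains funds once a payment is sent over the channel)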
+ assert_eq!(acceptor_node.node.list_channels().len(), 1); + { + let channel = &acceptor_node.node.list_channels()[0]; + assert_eq!(channel.channel_id.to_string(), expected_funded_channel_id); + assert!(channel.is_usable); + assert!(channel.is_channel_ready); + assert_eq!(channel.channel_value_satoshis, channel_value_sat); + assert_eq!(channel.balance_msat, exp_balance2); + assert_eq!(channel.confirmations.unwrap(), 10); + assert!(channel.funding_txo.is_some()); + } + + // === Channel is now ready for normal operation + + if do_payment_pre_splice { + // === Send a payment + let payment1_amount_msat = 6001_000; + println!("Send a payment, amount {}", payment1_amount_msat); + + let _payment_res = send_payment(&initiator_node, &[acceptor_node], payment1_amount_msat); + + // adjust balances + exp_balance1 -= payment1_amount_msat; + exp_balance2 += payment1_amount_msat; + } + + assert_eq!(initiator_node.node.list_channels().len(), 1); + { + let channel = &initiator_node.node.list_channels()[0]; + assert!(channel.is_usable); + assert!(channel.is_channel_ready); + assert_eq!(channel.channel_value_satoshis, channel_value_sat); + assert_eq!(channel.balance_msat, exp_balance1); + assert!(channel.funding_txo.is_some()); + } + assert_eq!(acceptor_node.node.list_channels().len(), 1); + { + let channel = &acceptor_node.node.list_channels()[0]; + assert!(channel.is_usable); + assert!(channel.is_channel_ready); + assert_eq!(channel.channel_value_satoshis, channel_value_sat); + assert_eq!(channel.balance_msat, exp_balance2); + assert!(channel.funding_txo.is_some()); + } + + // === Start of Splicing + println!("Start of Splicing ..., channel_id {}", channel_id1); + + // Amount being added to the channel through the splice-in + let splice_in_sats: u64 = 20000; + let post_splice_channel_value = channel_value_sat + splice_in_sats; + let funding_feerate_perkw = 1024; // TODO + let locktime = 0; // TODO + + // Initiate splice-in (on node0) + let extra_splice_funding_input_sats = 35_000; + let funding_inputs = vec![create_custom_dual_funding_input_with_pubkey(&initiator_node, extra_splice_funding_input_sats, &custom_input_pubkey)]; + let _res = initiator_node.node.splice_channel(&channel_id1, &acceptor_node.node.get_our_node_id(), splice_in_sats as i64, funding_inputs, funding_feerate_perkw, locktime).unwrap(); + // Extract the splice message from node0 to node1 + let splice_msg = get_event_msg!(initiator_node, MessageSendEvent::SendSpliceInit, acceptor_node.node.get_our_node_id()); + + let _res = acceptor_node.node.handle_splice_init(&initiator_node.node.get_our_node_id(), &splice_msg); + // Extract the splice_ack message from node1 to node0 + let splice_ack_msg = get_event_msg!(acceptor_node, MessageSendEvent::SendSpliceAck, initiator_node.node.get_our_node_id()); + + // still pre-splice channel: capacity not updated, channel usable, and funding tx set + assert_eq!(acceptor_node.node.list_channels().len(), 1); + { + let channel = &acceptor_node.node.list_channels()[0]; + assert_eq!(channel.channel_id.to_string(), expected_funded_channel_id); + assert!(channel.is_usable); + assert!(channel.is_channel_ready); + assert_eq!(channel.channel_value_satoshis, channel_value_sat); + assert_eq!(channel.balance_msat, exp_balance2); + assert!(channel.funding_txo.is_some()); + assert!(channel.confirmations.unwrap() > 0); + } + + let _res = initiator_node.node.handle_splice_ack(&acceptor_node.node.get_our_node_id(), &splice_ack_msg); + + // Note: SpliceAckedInputsContributionReady event no longer used + + // still pre-splice 
channel: capacity not updated, channel usable, and funding tx set + assert_eq!(initiator_node.node.list_channels().len(), 1); + { + let channel = &initiator_node.node.list_channels()[0]; + assert_eq!(channel.channel_id.to_string(), expected_funded_channel_id); + assert!(channel.is_usable); + assert!(channel.is_channel_ready); + assert_eq!(channel.channel_value_satoshis, channel_value_sat); + assert_eq!(channel.balance_msat, exp_balance1); + assert!(channel.funding_txo.is_some()); + assert!(channel.confirmations.unwrap() > 0); + } + + exp_balance1 += 1000 * splice_in_sats; // increase in balance + + // Note: contribute_funding_inputs() call is no longer used + + // First input + let tx_add_input_msg = get_event_msg!(&initiator_node, MessageSendEvent::SendTxAddInput, acceptor_node.node.get_our_node_id()); + let exp_value = if expect_inputs_in_reverse { extra_splice_funding_input_sats } else { channel_value_sat }; + assert_eq!(tx_add_input_msg.prevtx.0.output[tx_add_input_msg.prevtx_out as usize].value, exp_value); + + let _res = acceptor_node.node.handle_tx_add_input(&initiator_node.node.get_our_node_id(), &tx_add_input_msg); + let tx_complete_msg = get_event_msg!(acceptor_node, MessageSendEvent::SendTxComplete, initiator_node.node.get_our_node_id()); + + let _res = initiator_node.node.handle_tx_complete(&acceptor_node.node.get_our_node_id(), &tx_complete_msg); + // Second input + let exp_value = if expect_inputs_in_reverse { channel_value_sat } else { extra_splice_funding_input_sats }; + let tx_add_input2_msg = get_event_msg!(&initiator_node, MessageSendEvent::SendTxAddInput, acceptor_node.node.get_our_node_id()); + assert_eq!(tx_add_input2_msg.prevtx.0.output[tx_add_input2_msg.prevtx_out as usize].value, exp_value); + + let _res = acceptor_node.node.handle_tx_add_input(&initiator_node.node.get_our_node_id(), &tx_add_input2_msg); + let tx_complete_msg = get_event_msg!(acceptor_node, MessageSendEvent::SendTxComplete, initiator_node.node.get_our_node_id()); + + let _res = initiator_node.node.handle_tx_complete(&acceptor_node.node.get_our_node_id(), &tx_complete_msg); + + // TxAddOutput for the change output + let tx_add_output_msg = get_event_msg!(&initiator_node, MessageSendEvent::SendTxAddOutput, acceptor_node.node.get_our_node_id()); + assert!(tx_add_output_msg.script.is_v0_p2wpkh()); + assert_eq!(tx_add_output_msg.sats, 14093); // extra_splice_input_sats - splice_in_sats + + let _res = acceptor_node.node.handle_tx_add_output(&initiator_node.node.get_our_node_id(), &tx_add_output_msg); + let tx_complete_msg = get_event_msg!(&acceptor_node, MessageSendEvent::SendTxComplete, initiator_node.node.get_our_node_id()); + + let _res = initiator_node.node.handle_tx_complete(&acceptor_node.node.get_our_node_id(), &tx_complete_msg); + // TxAddOutput for the splice funding + let tx_add_output2_msg = get_event_msg!(&initiator_node, MessageSendEvent::SendTxAddOutput, acceptor_node.node.get_our_node_id()); + assert!(tx_add_output2_msg.script.is_v0_p2wsh()); + assert_eq!(tx_add_output2_msg.sats, post_splice_channel_value); + + let _res = acceptor_node.node.handle_tx_add_output(&initiator_node.node.get_our_node_id(), &tx_add_output2_msg); + let tx_complete_msg = get_event_msg!(acceptor_node, MessageSendEvent::SendTxComplete, initiator_node.node.get_our_node_id()); + + let _res = initiator_node.node.handle_tx_complete(&acceptor_node.node.get_our_node_id(), &tx_complete_msg); + + let msg_events = initiator_node.node.get_and_clear_pending_msg_events(); + assert_eq!(msg_events.len(), 2); + let tx_complete_msg 
= match msg_events[0] { + MessageSendEvent::SendTxComplete { ref node_id, ref msg } => { + assert_eq!(*node_id, acceptor_node.node.get_our_node_id()); + (*msg).clone() + }, + _ => panic!("Unexpected event"), + }; + let msg_commitment_signed_from_0 = match msg_events[1] { + MessageSendEvent::UpdateHTLCs { ref node_id, ref updates } => { + assert_eq!(*node_id, acceptor_node.node.get_our_node_id()); + updates.commitment_signed.clone() + }, + _ => panic!("Unexpected event"), + }; + let (input_idx_prev_fund, input_idx_second_input) = if expect_inputs_in_reverse { (0, 1) } else { (1, 0) }; + if let Event::FundingTransactionReadyForSigning { + channel_id, + counterparty_node_id, + mut unsigned_transaction, + .. + } = get_event!(initiator_node, Event::FundingTransactionReadyForSigning) { + assert_eq!(channel_id.to_string(), expected_funded_channel_id); + assert_eq!(counterparty_node_id, acceptor_node.node.get_our_node_id()); + assert_eq!(unsigned_transaction.input.len(), 2); + // Note: input order may vary (based on SerialId) + // This is the previous funding tx input, already signed (partially) + assert_eq!(unsigned_transaction.input[input_idx_prev_fund].previous_output.txid.to_string(), expected_pre_funding_txid); + assert_eq!(unsigned_transaction.input[input_idx_prev_fund].witness.len(), 4); + // This is the extra input, not yet signed + assert_eq!(unsigned_transaction.input[input_idx_second_input].witness.len(), 0); + + // Placeholder for signature on the contributed input + let mut witness1 = Witness::new(); + witness1.push([7; 72]); + unsigned_transaction.input[input_idx_second_input].witness = witness1; + + let _res = initiator_node.node.funding_transaction_signed(&channel_id, &counterparty_node_id, unsigned_transaction).unwrap(); + } else { panic!(); } + + // check new funding tx + assert_eq!(initiator_node.node.list_channels().len(), 1); + { + let channel = &initiator_node.node.list_channels()[0]; + assert!(!channel.is_channel_ready); + assert_eq!(channel.channel_value_satoshis, post_splice_channel_value); + assert_eq!(channel.funding_txo.unwrap().txid.to_string(), expected_splice_funding_txid); + assert_eq!(channel.confirmations.unwrap(), 0); + } + + let _res = acceptor_node.node.handle_tx_complete(&initiator_node.node.get_our_node_id(), &tx_complete_msg); + let msg_events = acceptor_node.node.get_and_clear_pending_msg_events(); + // First messsage is commitment_signed, second is tx_signatures (see below for more) + assert_eq!(msg_events.len(), 1); + let msg_commitment_signed_from_1 = match msg_events[0] { + MessageSendEvent::UpdateHTLCs { ref node_id, ref updates } => { + assert_eq!(*node_id, initiator_node.node.get_our_node_id()); + let res = updates.commitment_signed.clone(); + res + }, + _ => panic!("Unexpected event {:?}", msg_events[0]), + }; + + // check new funding tx (acceptor side) + assert_eq!(acceptor_node.node.list_channels().len(), 1); + { + let channel = &acceptor_node.node.list_channels()[0]; + assert!(!channel.is_channel_ready); + assert_eq!(channel.channel_value_satoshis, post_splice_channel_value); + assert_eq!(channel.funding_txo.unwrap().txid.to_string(), expected_splice_funding_txid); + assert_eq!(channel.confirmations.unwrap(), 0); + } + + // Handle the initial commitment_signed exchange. Order is not important here. 
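+	// Each side persists one new ChannelMonitor update for the spliced commitment, hence the check_added_monitors calls below.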
+ let _res = initiator_node.node.handle_commitment_signed(&acceptor_node.node.get_our_node_id(), &msg_commitment_signed_from_1); + check_added_monitors(&initiator_node, 1); + + // The initiator is the only party that contributed any inputs so they should definitely be the one to send tx_signatures + // only after receiving tx_signatures from the non-initiator in this case. + let msg_events = initiator_node.node.get_and_clear_pending_msg_events(); + assert!(msg_events.is_empty()); + + let _res = acceptor_node.node.handle_commitment_signed(&initiator_node.node.get_our_node_id(), &msg_commitment_signed_from_0); + check_added_monitors(&acceptor_node, 1); + + let msg_events = acceptor_node.node.get_and_clear_pending_msg_events(); + assert_eq!(msg_events.len(), 1); + let tx_signatures_1 = match msg_events[0] { + MessageSendEvent::SendTxSignatures { ref node_id, ref msg } => { + assert_eq!(*node_id, initiator_node.node.get_our_node_id()); + // Here we only get the signature for the shared input + assert_eq!(msg.witnesses.len(), 0); + assert!(msg.shared_input_signature.is_some()); + msg + }, + _ => panic!("Unexpected event {:?}", msg_events[0]), + }; + + let _res = initiator_node.node.handle_tx_signatures(&acceptor_node.node.get_our_node_id(), &tx_signatures_1); + + let events = initiator_node.node.get_and_clear_pending_events(); + assert_eq!(events.len(), 1); + match events[0] { + Event::ChannelPending { channel_id, former_temporary_channel_id, counterparty_node_id, funding_txo, is_splice, .. } => { + assert_eq!(channel_id.to_string(), expected_funded_channel_id); + // TODO check if former_temporary_channel_id should be set to empty in this case (or previous non-temp channel id?) + assert_eq!(former_temporary_channel_id.unwrap().to_string(), expected_temporary_channel_id); + assert_eq!(counterparty_node_id, acceptor_node.node.get_our_node_id()); + assert_eq!(funding_txo.txid.to_string(), expected_splice_funding_txid); + assert!(is_splice); + } + _ => panic!("ChannelPending event missing, {:?}", events[0]), + }; + let msg_events = initiator_node.node.get_and_clear_pending_msg_events(); + assert_eq!(msg_events.len(), 1); + let tx_signatures_0 = match msg_events[0] { + MessageSendEvent::SendTxSignatures { ref node_id, ref msg } => { + assert_eq!(*node_id, acceptor_node.node.get_our_node_id()); + // Here we get the witnesses for the two inputs: + // - the custom input, and + // - the previous funding tx, also in the tlvs + assert_eq!(msg.witnesses.len(), 2); + assert_eq!(msg.witnesses[input_idx_prev_fund].len(), 4); + assert_eq!(msg.witnesses[input_idx_second_input].len(), 1); + assert!(msg.shared_input_signature.is_some()); + msg + }, + _ => panic!("Unexpected event {:?}", msg_events[0]), + }; + + let _res = acceptor_node.node.handle_tx_signatures(&initiator_node.node.get_our_node_id(), &tx_signatures_0); + + let events = acceptor_node.node.get_and_clear_pending_events(); + assert_eq!(events.len(), 1); + match events[0] { + Event::ChannelPending { channel_id, former_temporary_channel_id, counterparty_node_id, funding_txo, is_splice, .. } => { + assert_eq!(channel_id.to_string(), expected_funded_channel_id); + // TODO check if former_temporary_channel_id should be set to empty in this case (or previous non-temp channel id?) 
+ assert_eq!(former_temporary_channel_id.unwrap().to_string(), expected_temporary_channel_id); + assert_eq!(counterparty_node_id, initiator_node.node.get_our_node_id()); + assert_eq!(funding_txo.txid.to_string(), expected_splice_funding_txid); + assert!(is_splice); + } + _ => panic!("ChannelPending event missing, {:?}", events[0]), + }; + + // Check that funding transaction has been broadcasted + assert_eq!(chanmon_cfgs[initiator_node_index].tx_broadcaster.txn_broadcasted.lock().unwrap().len(), 2); + let broadcasted_splice_tx = chanmon_cfgs[initiator_node_index].tx_broadcaster.txn_broadcasted.lock().unwrap()[1].clone(); + assert_eq!(broadcasted_splice_tx.encode().as_hex().to_string(), expected_post_funding_tx); + let initiator_funding_key = get_funding_key(&initiator_node, &acceptor_node, &channel_id1); + let acceptor_funding_key = get_funding_key(&acceptor_node, &initiator_node, &channel_id1); + verify_splice_funding_tx(&broadcasted_splice_tx, &broadcasted_funding_tx.txid(), post_splice_channel_value, channel_value_sat, &initiator_funding_key, &acceptor_funding_key); + + // Check that funding transaction has been broadcasted on acceptor side as well + assert_eq!(chanmon_cfgs[acceptor_node_index].tx_broadcaster.txn_broadcasted.lock().unwrap().len(), 2); + let broadcasted_splice_tx_acc = chanmon_cfgs[acceptor_node_index].tx_broadcaster.txn_broadcasted.lock().unwrap()[1].clone(); + assert_eq!(broadcasted_splice_tx_acc.encode().as_hex().to_string(), expected_post_funding_tx); + + // check fees + let total_input = channel_value_sat + extra_splice_funding_input_sats; + assert_eq!(broadcasted_splice_tx.output.len(), 2); + let total_output = broadcasted_splice_tx.output[0].value + broadcasted_splice_tx.output[1].value; + assert!(total_input > total_output); + let fee = total_input - total_output; + let target_fee_rate = chanmon_cfgs[0].fee_estimator.get_est_sat_per_1000_weight(ConfirmationTarget::NonAnchorChannelFee); // target is irrelevant + assert_eq!(target_fee_rate, 253); + assert_eq!(broadcasted_splice_tx.weight().to_wu(), 958); + let expected_minimum_fee = (broadcasted_splice_tx.weight().to_wu() as f64 * target_fee_rate as f64 / 1000 as f64).ceil() as u64; + let expected_maximum_fee = expected_minimum_fee * 5; // TODO lower tolerance, e.g. 3 + assert!(fee >= expected_minimum_fee); + assert!(fee <= expected_maximum_fee); + + // The splice is pending: it is committed to, new funding transaction has been broadcast but not yet locked + println!("Splice is pending"); + + if do_payment_pending_splice { + // === Send another payment + // TODO + let payment3_amount_msat = 3003_000; + println!("Send another payment, amount {}", payment3_amount_msat); + + let _payment_res = send_payment(&initiator_node, &[acceptor_node], payment3_amount_msat); + + // adjust balances + exp_balance1 -= payment3_amount_msat; + exp_balance2 += payment3_amount_msat; + } + + println!("Confirming splice transaction..."); + + // Splice_locked: make the steps not in the natural order, to test the path when + // splice_locked is received before sending splice_locked (this path had a bug, 2024.06.). 
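+	// (i.e. the initiator confirms and sends splice_locked first, and the acceptor processes it before seeing its own confirmation)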
+ // Receive splice_locked before seeing the confirmation of the new funding tx + // Simulate confirmation of the funding tx + confirm_transaction(&initiator_node, &broadcasted_splice_tx); + // Send splice_locked from initiator to acceptor, process it there + let splice_locked_message = get_event_msg!(initiator_node, MessageSendEvent::SendSpliceLocked, acceptor_node.node.get_our_node_id()); + let _res = acceptor_node.node.handle_splice_locked(&initiator_node.node.get_our_node_id(), &splice_locked_message); + + confirm_transaction(&acceptor_node, &broadcasted_splice_tx); + let events = acceptor_node.node.get_and_clear_pending_events(); + assert_eq!(events.len(), 1); + match events[0] { + Event::ChannelReady { channel_id, counterparty_node_id, is_splice, .. } => { + assert_eq!(channel_id.to_string(), expected_funded_channel_id); + assert_eq!(counterparty_node_id, initiator_node.node.get_our_node_id()); + assert!(!is_splice); // TODO this is incorrect, it should be true. Due to ordering it is emitted after splice complete + } + _ => panic!("ChannelReady event missing, {:?}", events[0]), + }; + + // Acceptor is now ready to send SpliceLocked and ChannelUpdate + let msg_events = acceptor_node.node.get_and_clear_pending_msg_events(); + assert_eq!(msg_events.len(), 2); + let splice_locked_message2 = match msg_events[0] { + MessageSendEvent::SendSpliceLocked { ref node_id, ref msg } => { + assert_eq!(*node_id, initiator_node.node.get_our_node_id()); + msg + }, + _ => panic!("Unexpected event {:?}", msg_events[0]), + }; + let _channel_update = match msg_events[1] { + MessageSendEvent::SendChannelUpdate { ref msg, .. } => { msg }, + _ => panic!("Unexpected event {:?}", msg_events[1]), + }; + + let _res = initiator_node.node.handle_splice_locked(&acceptor_node.node.get_our_node_id(), &splice_locked_message2); + let events = initiator_node.node.get_and_clear_pending_events(); + assert_eq!(events.len(), 1); + match events[0] { + Event::ChannelReady { channel_id, counterparty_node_id, is_splice, .. 
} => { + assert_eq!(channel_id.to_string(), expected_funded_channel_id); + assert_eq!(counterparty_node_id, acceptor_node.node.get_our_node_id()); + assert!(is_splice); + } + _ => panic!("ChannelReady event missing, {:?}", events[0]), + }; + + let _channel_update = get_event_msg!(initiator_node, MessageSendEvent::SendChannelUpdate, acceptor_node.node.get_our_node_id()); + + // check new channel capacity and other parameters + assert_eq!(initiator_node.node.list_channels().len(), 1); + { + let channel = &initiator_node.node.list_channels()[0]; + assert_eq!(channel.channel_id.to_string(), expected_funded_channel_id); + assert!(channel.is_channel_ready); + assert_eq!(channel.channel_value_satoshis, post_splice_channel_value); + assert_eq!(channel.balance_msat, exp_balance1); + assert_eq!(channel.funding_txo.unwrap().txid, broadcasted_splice_tx_acc.txid()); + assert_eq!(channel.confirmations.unwrap(), 10); + } + + // do the checks on acceptor side as well + assert_eq!(acceptor_node.node.list_channels().len(), 1); + { + let channel = &acceptor_node.node.list_channels()[0]; + assert_eq!(channel.channel_id.to_string(), expected_funded_channel_id); + assert!(channel.is_channel_ready); + assert_eq!(channel.channel_value_satoshis, post_splice_channel_value); + assert_eq!(channel.balance_msat, exp_balance2); + assert_eq!(channel.funding_txo.unwrap().txid, broadcasted_splice_tx_acc.txid()); + assert_eq!(channel.confirmations.unwrap(), 10); + } + + let events = initiator_node.node.get_and_clear_pending_events(); + if events.len() > 0 { + panic!("Unexpected event {:?}", events[0]); + } + assert_eq!(events.len(), 0); + let events = acceptor_node.node.get_and_clear_pending_events(); + if events.len() > 0 { + panic!("Unexpected event {:?}", events[0]); + } + assert_eq!(events.len(), 0); + + // === End of Splicing + + if do_payment_post_splice { + // === Send another payment + let payment2_amount_msat = 3002_000; + println!("Send another payment, amount {}", payment2_amount_msat); + + let _payment_res = send_payment(&initiator_node, &[acceptor_node], payment2_amount_msat); + + // adjust balances + exp_balance1 -= payment2_amount_msat; + exp_balance2 += payment2_amount_msat; + } + + // check changed balances + assert_eq!(initiator_node.node.list_channels().len(), 1); + { + let channel = &initiator_node.node.list_channels()[0]; + assert_eq!(channel.channel_value_satoshis, post_splice_channel_value); + assert_eq!(channel.balance_msat, exp_balance1); + } + // do checks on the acceptor node as well + assert_eq!(acceptor_node.node.list_channels().len(), 1); + { + let channel = &acceptor_node.node.list_channels()[0]; + assert_eq!(channel.channel_value_satoshis, post_splice_channel_value); + assert_eq!(channel.balance_msat, exp_balance2); + } + + // === Close channel, cooperatively + initiator_node.node.close_channel(&channel_id1, &acceptor_node.node.get_our_node_id()).unwrap(); + let node0_shutdown_message = get_event_msg!(initiator_node, MessageSendEvent::SendShutdown, acceptor_node.node.get_our_node_id()); + acceptor_node.node.handle_shutdown(&initiator_node.node.get_our_node_id(), &node0_shutdown_message); + let nodes_1_shutdown = get_event_msg!(acceptor_node, MessageSendEvent::SendShutdown, initiator_node.node.get_our_node_id()); + initiator_node.node.handle_shutdown(&acceptor_node.node.get_our_node_id(), &nodes_1_shutdown); + let _ = get_event_msg!(initiator_node, MessageSendEvent::SendClosingSigned, acceptor_node.node.get_our_node_id()); +} + +/// Splicing test, simple splice-in flow. 
Starts with opening a V2 channel first. +#[test] +fn test_v2_splice_in() { + test_splice_in_with_optional_payments( + false, false, false, + "951459a816fd3e1105bd8b623b004c5fdf640e82c306f473b50c42097610dcdf", + "e83b07b825b61fb54ec3129b4f9aa0b6fb2752bf16907d4b5def4753d1e6662c", + "02000000000102a29ca934f2f9e07815e35099881dc8c0de1847ce0f00154de3d66c0133384b79000000000000000000dfdc107609420cb573f406c3820e64df5f4c003b628bbd05113efd16a859149501000000000000000002c0d401000000000022002034c0cc0ad0dd5fe61dcf7ef58f995e3d34f8dbd24aa2a6fae68fefe102bf025c0d37000000000000160014d5a9aa98b89acc215fc3d23d6fec0ad59ca3665f0148070707070707070707070707070707070707070707070707070707070707070707070707070707070707070707070707070707070707070707070707070707070707070707070707040047304402202262f62a07d13f0b65142ca4a891a12387749a65320e84d4a2cda4997eac71e9022070ff453bd2c49b67da48bfff541d8c1cdbfce13670641fe9dfa26bfc567b1a3e0147304402200b8553f0651c962e8356f1e59e07b7f2744194375779a6c6f9df100fce4042ef02206c53fb9671f812e9b6359b2f01eb9e50eca0b248da1203b7f6acd1e73fea4304014752210307a78def56cba9fc4db22a25928181de538ee59ba1a475ae113af7790acd0db32103c21e841cbc0b48197d060c71e116c185fa0ac281b7d0aa5924f535154437ca3b52ae00000000", + false, + ) +} + +/// Splicing & payment test: splicing after a payment +#[test] +fn test_payment_splice_in() { + test_splice_in_with_optional_payments( + true, false, false, + "951459a816fd3e1105bd8b623b004c5fdf640e82c306f473b50c42097610dcdf", + "ab06c66b663fdcaa43509c7f50acf96df8483117e24e014874e02ae8c265a84e", + "02000000000102dfdc107609420cb573f406c3820e64df5f4c003b628bbd05113efd16a8591495010000000000000000a29ca934f2f9e07815e35099881dc8c0de1847ce0f00154de3d66c0133384b7900000000000000000002c0d401000000000022002034c0cc0ad0dd5fe61dcf7ef58f995e3d34f8dbd24aa2a6fae68fefe102bf025c0d37000000000000160014d5a9aa98b89acc215fc3d23d6fec0ad59ca3665f04004730440220496589c8ab19a2cea70f9204634aa45a642a2a8ba5fb8952a04f4998719f397c02204642179dedd6bf2627f1cd53d2f32832af32fc28504b636fd86a098bfc992acb01473044022050d84dcf82005d21989f0595cd1d38b5e85beb1ab5843ac1323afeeecae33c960220649f00b713ccd15c868d7ebab19f978ae5bd9e25c06ee2849f10a42997a2f8b2014752210307a78def56cba9fc4db22a25928181de538ee59ba1a475ae113af7790acd0db32103c21e841cbc0b48197d060c71e116c185fa0ac281b7d0aa5924f535154437ca3b52ae014807070707070707070707070707070707070707070707070707070707070707070707070707070707070707070707070707070707070707070707070707070707070707070707070700000000", + true, + ) +} + +/// Splicing & payment test: splicing after a payment, payment after splicing. 
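+/// Opens a V2 channel, sends a payment, splices in additional funds, and sends another payment
+/// once the splice is locked.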
+#[test] +fn test_payment_splice_in_payment() { + test_splice_in_with_optional_payments( + true, true, false, + "951459a816fd3e1105bd8b623b004c5fdf640e82c306f473b50c42097610dcdf", + "ab06c66b663fdcaa43509c7f50acf96df8483117e24e014874e02ae8c265a84e", + "02000000000102dfdc107609420cb573f406c3820e64df5f4c003b628bbd05113efd16a8591495010000000000000000a29ca934f2f9e07815e35099881dc8c0de1847ce0f00154de3d66c0133384b7900000000000000000002c0d401000000000022002034c0cc0ad0dd5fe61dcf7ef58f995e3d34f8dbd24aa2a6fae68fefe102bf025c0d37000000000000160014d5a9aa98b89acc215fc3d23d6fec0ad59ca3665f04004730440220496589c8ab19a2cea70f9204634aa45a642a2a8ba5fb8952a04f4998719f397c02204642179dedd6bf2627f1cd53d2f32832af32fc28504b636fd86a098bfc992acb01473044022050d84dcf82005d21989f0595cd1d38b5e85beb1ab5843ac1323afeeecae33c960220649f00b713ccd15c868d7ebab19f978ae5bd9e25c06ee2849f10a42997a2f8b2014752210307a78def56cba9fc4db22a25928181de538ee59ba1a475ae113af7790acd0db32103c21e841cbc0b48197d060c71e116c185fa0ac281b7d0aa5924f535154437ca3b52ae014807070707070707070707070707070707070707070707070707070707070707070707070707070707070707070707070707070707070707070707070707070707070707070707070700000000", + true, + ) +} + +/// Splicing & payment test: payment while the splice is pending (has been negotiated but didn't lock yet) +/// HTLC update in the 'middle' of splicing (before splice locked). +/// Open a V2 channel, initiate a splice-in, do a payment before the splice is locked +/// Disabled: still does not work well TODO(splicing) +//#[test] // TODO +fn test_payment_while_splice_pending() { + test_splice_in_with_optional_payments( + false, false, true, + "951459a816fd3e1105bd8b623b004c5fdf640e82c306f473b50c42097610dcdf", + "e83b07b825b61fb54ec3129b4f9aa0b6fb2752bf16907d4b5def4753d1e6662c", + "02000000000102a29ca934f2f9e07815e35099881dc8c0de1847ce0f00154de3d66c0133384b79000000000000000000dfdc107609420cb573f406c3820e64df5f4c003b628bbd05113efd16a859149501000000000000000002c0d401000000000022002034c0cc0ad0dd5fe61dcf7ef58f995e3d34f8dbd24aa2a6fae68fefe102bf025c0d37000000000000160014d5a9aa98b89acc215fc3d23d6fec0ad59ca3665f0148070707070707070707070707070707070707070707070707070707070707070707070707070707070707070707070707070707070707070707070707070707070707070707070707040047304402202262f62a07d13f0b65142ca4a891a12387749a65320e84d4a2cda4997eac71e9022070ff453bd2c49b67da48bfff541d8c1cdbfce13670641fe9dfa26bfc567b1a3e0147304402200b8553f0651c962e8356f1e59e07b7f2744194375779a6c6f9df100fce4042ef02206c53fb9671f812e9b6359b2f01eb9e50eca0b248da1203b7f6acd1e73fea4304014752210307a78def56cba9fc4db22a25928181de538ee59ba1a475ae113af7790acd0db32103c21e841cbc0b48197d060c71e116c185fa0ac281b7d0aa5924f535154437ca3b52ae00000000", + false, + ) +} diff --git a/lightning/src/ln/interactivetxs.rs b/lightning/src/ln/interactivetxs.rs index 60341620887..2f1426fd9b9 100644 --- a/lightning/src/ln/interactivetxs.rs +++ b/lightning/src/ln/interactivetxs.rs @@ -14,16 +14,19 @@ use core::ops::Deref; use bitcoin::blockdata::constants::WITNESS_SCALE_FACTOR; use bitcoin::consensus::Encodable; use bitcoin::policy::MAX_STANDARD_TX_WEIGHT; +use bitcoin::secp256k1::PublicKey; +use bitcoin::secp256k1::ecdsa::Signature; use bitcoin::{ absolute::LockTime as AbsoluteLockTime, OutPoint, ScriptBuf, Sequence, Transaction, TxIn, - TxOut, Weight, + TxOut, Txid, Weight, Witness, }; use crate::chain::chaininterface::fee_for_weight; use crate::events::bump_transaction::{BASE_INPUT_WEIGHT, EMPTY_SCRIPT_SIG_WEIGHT}; +use crate::events::MessageSendEvent; use 
crate::ln::channel::TOTAL_BITCOIN_SUPPLY_SATOSHIS; use crate::ln::msgs; -use crate::ln::msgs::SerialId; +use crate::ln::msgs::{CommitmentSigned, SerialId, TxSignatures}; use crate::ln::types::ChannelId; use crate::sign::{EntropySource, P2TR_KEY_PATH_WITNESS_WEIGHT, P2WPKH_WITNESS_WEIGHT}; use crate::util::ser::TransactionU16LenLimited; @@ -42,7 +45,7 @@ const MAX_INPUTS_OUTPUTS_COUNT: usize = 252; /// The total weight of the common fields whose fee is paid by the initiator of the interactive /// transaction construction protocol. -const TX_COMMON_FIELDS_WEIGHT: u64 = (4 /* version */ + 4 /* locktime */ + 1 /* input count */ + +pub(crate) const TX_COMMON_FIELDS_WEIGHT: u64 = (4 /* version */ + 4 /* locktime */ + 1 /* input count */ + 1 /* output count */) * WITNESS_SCALE_FACTOR as u64 + 2 /* segwit marker + flag */; // BOLT 3 - Lower bounds for input weights @@ -99,6 +102,39 @@ pub(crate) enum AbortReason { InvalidTx, } +impl AbortReason { + pub fn into_tx_abort_msg(self, channel_id: ChannelId) -> msgs::TxAbort { + let msg = match self { + AbortReason::InvalidStateTransition => "State transition was invalid", + AbortReason::UnexpectedCounterpartyMessage => "Unexpected message", + AbortReason::ReceivedTooManyTxAddInputs => "Too many `tx_add_input`s received", + AbortReason::ReceivedTooManyTxAddOutputs => "Too many `tx_add_output`s received", + AbortReason::IncorrectInputSequenceValue => { + "Input has a sequence value greater than 0xFFFFFFFD" + }, + AbortReason::IncorrectSerialIdParity => "Parity for `serial_id` was incorrect", + AbortReason::SerialIdUnknown => "The `serial_id` is unknown", + AbortReason::DuplicateSerialId => "The `serial_id` already exists", + AbortReason::PrevTxOutInvalid => "Invalid previous transaction output", + AbortReason::ExceededMaximumSatsAllowed => { + "Output amount exceeded total bitcoin supply" + }, + AbortReason::ExceededNumberOfInputsOrOutputs => "Too many inputs or outputs", + AbortReason::TransactionTooLarge => "Transaction weight is too large", + AbortReason::BelowDustLimit => "Output amount is below the dust limit", + AbortReason::InvalidOutputScript => "The output script is non-standard", + AbortReason::InsufficientFees => "Insufficient fees paid", + AbortReason::OutputsValueExceedsInputsValue => { + "Total value of outputs exceeds total value of inputs" + }, + AbortReason::InvalidTx => "The transaction is invalid", + } + .to_string(); + + msgs::TxAbort { channel_id, data: msg.into_bytes() } + } +} + #[derive(Debug, Clone, PartialEq, Eq)] pub(crate) struct InteractiveTxInput { serial_id: SerialId, @@ -112,6 +148,12 @@ pub(crate) struct InteractiveTxOutput { tx_out: TxOut, } +impl InteractiveTxOutput { + pub fn tx_out(&self) -> &TxOut { + &self.tx_out + } +} + #[derive(Debug, Clone, PartialEq, Eq)] pub(crate) struct ConstructedTransaction { holder_is_initiator: bool, @@ -146,17 +188,25 @@ impl ConstructedTransaction { }) .fold(0u64, |value, (_, output)| value.saturating_add(output.tx_out.value)); + let remote_inputs_value_satoshis = context.remote_inputs_value(); + let remote_outputs_value_satoshis = context.remote_outputs_value(); + let mut inputs: Vec = context.inputs.into_values().collect(); + let mut outputs: Vec = context.outputs.into_values().collect(); + // Inputs and outputs must be sorted by serial_id + inputs.sort_unstable_by_key(|InteractiveTxInput { serial_id, .. }| *serial_id); + outputs.sort_unstable_by_key(|InteractiveTxOutput { serial_id, .. 
}| *serial_id); + Self { holder_is_initiator: context.holder_is_initiator, local_inputs_value_satoshis, local_outputs_value_satoshis, - remote_inputs_value_satoshis: context.remote_inputs_value(), - remote_outputs_value_satoshis: context.remote_outputs_value(), + remote_inputs_value_satoshis, + remote_outputs_value_satoshis, - inputs: context.inputs.into_values().collect(), - outputs: context.outputs.into_values().collect(), + inputs, + outputs, lock_time: context.tx_locktime, } @@ -182,11 +232,7 @@ impl ConstructedTransaction { } pub fn into_unsigned_tx(self) -> Transaction { - // Inputs and outputs must be sorted by serial_id - let ConstructedTransaction { mut inputs, mut outputs, .. } = self; - - inputs.sort_unstable_by_key(|InteractiveTxInput { serial_id, .. }| *serial_id); - outputs.sort_unstable_by_key(|InteractiveTxOutput { serial_id, .. }| *serial_id); + let ConstructedTransaction { inputs, outputs, .. } = self; let input: Vec = inputs.into_iter().map(|InteractiveTxInput { input, .. }| input).collect(); @@ -195,10 +241,177 @@ impl ConstructedTransaction { Transaction { version: 2, lock_time: self.lock_time, input, output } } + + pub fn outputs(&self) -> impl Iterator { + self.outputs.iter() + } + + /// Return all outputs and their index that match the given script pubkey and optionally value. + pub fn find_output_by_script(&self, script_pubkey: &ScriptBuf, value: Option) -> Vec<(usize, TxOut)> { + self.outputs.iter() + .enumerate() + .filter(|(_idx, outp)| + outp.tx_out().script_pubkey == *script_pubkey + && (if let Some(v) = value { outp.tx_out().value == v } else { true }) // optionally match value as well + ) + .map(|(idx, outp)| (idx, outp.tx_out().clone())) + .collect() + } + + pub fn inputs(&self) -> impl Iterator { + self.inputs.iter() + } + + pub fn txid(&self) -> Txid { + self.clone().into_unsigned_tx().txid() + } + + pub fn add_local_witnesses(&mut self, witnesses: Vec) { + self.inputs + .iter_mut() + .filter(|InteractiveTxInput { serial_id, .. }| { + !is_serial_id_valid_for_counterparty(self.holder_is_initiator, serial_id) + }) + .map(|InteractiveTxInput { input, .. }| input) + .zip(witnesses.into_iter()) + .for_each(|(input, witness)| input.witness = witness); + } + + pub fn add_remote_witnesses(&mut self, witnesses: Vec) { + self.inputs + .iter_mut() + .filter(|InteractiveTxInput { serial_id, .. }| { + is_serial_id_valid_for_counterparty(self.holder_is_initiator, serial_id) + }) + .map(|InteractiveTxInput { input, .. 
}| input) + .zip(witnesses.into_iter()) + .for_each(|(input, witness)| input.witness = witness); + } +} + +#[derive(Debug, Clone, PartialEq)] +pub(crate) struct InteractiveTxSigningSession { + pub unsigned_tx: ConstructedTransaction, + holder_sends_tx_signatures_first: bool, + sent_commitment_signed: Option, + received_commitment_signed: Option, + holder_tx_signatures: Option, + counterparty_tx_signatures: Option, + /// #SPLICING + pub(crate) shared_signature: Option, +} + +impl InteractiveTxSigningSession { + pub fn received_commitment_signed( + &mut self, commitment_signed: CommitmentSigned, + ) -> Option { + self.received_commitment_signed = Some(commitment_signed); + + self.get_tx_signatures() + } + + pub fn received_tx_signatures( + &mut self, tx_signatures: TxSignatures, + ) -> (Option, Option) { + if self.counterparty_tx_signatures.is_some() { + return (None, None); + }; + self.counterparty_tx_signatures = Some(tx_signatures.clone()); + self.unsigned_tx.add_remote_witnesses(tx_signatures.witnesses.clone()); + + (self.get_tx_signatures(), self.get_finalized_funding_tx()) + } + + pub fn provide_holder_witnesses( + &mut self, channel_id: ChannelId, witnesses: Vec, shared_signature: Option, + ) -> (Option, Option) { + self.unsigned_tx.add_local_witnesses(witnesses.clone()); + self.holder_tx_signatures = Some(TxSignatures { + channel_id, + tx_hash: self.unsigned_tx.txid(), + witnesses: witnesses.into_iter().map(|witness| witness).collect(), + shared_input_signature: shared_signature, + }); + + (self.get_tx_signatures(), self.get_finalized_funding_tx()) + } + + /// Decide if we need to send `TxSignatures` at this stage or not + fn get_tx_signatures(&self) -> Option { + if self.holder_tx_signatures.is_none() { + return None; // no local signature yet + } + if self.received_commitment_signed.is_none() { + return None; // no counterparty commitment received yet + } + if + (!self.holder_sends_tx_signatures_first && self.counterparty_tx_signatures.is_some()) || + (self.holder_sends_tx_signatures_first && self.counterparty_tx_signatures.is_none()) + { + self.holder_tx_signatures.clone() + } else { + None + } + } + + /// Decide if we have the funding transaction signed from both parties + fn get_finalized_funding_tx(&self) -> Option { + if self.holder_tx_signatures.is_none() { + return None; // no local signature yet + } + if self.counterparty_tx_signatures.is_none() { + return None; // no counterparty signature received yet + } + Some(self.finalize_funding_tx()) + } + + pub fn remote_inputs_count(&self) -> usize { + self.unsigned_tx + .inputs + .iter() + .filter(|InteractiveTxInput { serial_id, .. }| { + is_serial_id_valid_for_counterparty(self.unsigned_tx.holder_is_initiator, serial_id) + }) + .count() + } + + pub fn local_inputs_count(&self) -> usize { + self.unsigned_tx + .inputs + .iter() + .filter(|InteractiveTxInput { serial_id, .. }| { + !is_serial_id_valid_for_counterparty( + self.unsigned_tx.holder_is_initiator, + serial_id, + ) + }) + .count() + } + + fn finalize_funding_tx(&self) -> Transaction { + let lock_time = self.unsigned_tx.lock_time; + let input = self.unsigned_tx.inputs.iter().cloned() + .map(|InteractiveTxInput { input, .. }| input).collect(); + let output = self.unsigned_tx.outputs.iter().cloned() + .map(|InteractiveTxOutput { tx_out, .. 
}| tx_out).collect(); + + Transaction { + version: 2, + lock_time, + input, + output, + } + } + + pub fn counterparty_tx_signatures(&self) -> Option { + self.counterparty_tx_signatures.clone() + } } #[derive(Debug)] struct NegotiationContext { + holder_node_id: PublicKey, + counterparty_node_id: PublicKey, holder_is_initiator: bool, received_tx_add_input_count: u16, received_tx_add_output_count: u16, @@ -278,6 +491,30 @@ impl NegotiationContext { ) } + fn local_inputs_value(&self) -> u64 { + self.inputs + .iter() + .filter(|(serial_id, _)| !self.is_serial_id_valid_for_counterparty(serial_id)) + .fold(0u64, |acc, (_, InteractiveTxInput { prev_output, .. })| { + acc.saturating_add(prev_output.value) + }) + } + + fn should_holder_send_tx_signatures_first(&self) -> bool { + // There is a strict ordering for `tx_signatures` exchange to prevent deadlocks. + let holder_inputs_value = self.local_inputs_value(); + let counterparty_inputs_value = self.remote_inputs_value(); + + if holder_inputs_value == counterparty_inputs_value { + // If the amounts are the same then the peer with the lowest pubkey lexicographically sends its + // tx_signatures first + self.holder_node_id < self.counterparty_node_id + } else { + // Otherwise the peer with the lowest contributed input value sends its tx_signatures first. + holder_inputs_value < counterparty_inputs_value + } + } + fn received_tx_add_input(&mut self, msg: &msgs::TxAddInput) -> Result<(), AbortReason> { // The interactive-txs spec calls for us to fail negotiation if the `prevtx` we receive is // invalid. However, we would not need to account for this explicit negotiation failure @@ -527,7 +764,7 @@ impl NegotiationContext { Ok(()) } - fn validate_tx(self) -> Result { + fn validate_tx(self) -> Result<(ConstructedTransaction, bool), AbortReason> { // The receiving node: // MUST fail the negotiation if: @@ -549,13 +786,15 @@ impl NegotiationContext { // - the peer's paid feerate does not meet or exceed the agreed feerate (based on the minimum fee). self.check_counterparty_fees(remote_inputs_value.saturating_sub(remote_outputs_value))?; + let holder_sends_tx_signatures_first = self.should_holder_send_tx_signatures_first(); + let constructed_tx = ConstructedTransaction::new(self); if constructed_tx.weight().to_wu() > MAX_STANDARD_TX_WEIGHT as u64 { return Err(AbortReason::TransactionTooLarge); } - Ok(constructed_tx) + Ok((constructed_tx, holder_sends_tx_signatures_first)) } } @@ -643,7 +882,7 @@ define_state!( ReceivedTxComplete, "We have received a `tx_complete` message and the counterparty is awaiting ours." ); -define_state!(NegotiationComplete, ConstructedTransaction, "We have exchanged consecutive `tx_complete` messages with the counterparty and the transaction negotiation is complete."); +define_state!(NegotiationComplete, InteractiveTxSigningSession, "We have exchanged consecutive `tx_complete` messages with the counterparty and the transaction negotiation is complete."); define_state!( NegotiationAborted, AbortReason, @@ -685,8 +924,17 @@ macro_rules! 
define_state_transitions { impl StateTransition for $tx_complete_state { fn transition(self, _data: &msgs::TxComplete) -> StateTransitionResult { let context = self.into_negotiation_context(); - let tx = context.validate_tx()?; - Ok(NegotiationComplete(tx)) + let (tx, holder_sends_tx_signatures_first) = context.validate_tx()?; + let signing_session = InteractiveTxSigningSession { + holder_sends_tx_signatures_first, + unsigned_tx: tx, + sent_commitment_signed: None, + received_commitment_signed: None, + holder_tx_signatures: None, + counterparty_tx_signatures: None, + shared_signature: None, + }; + Ok(NegotiationComplete(signing_session)) } } @@ -754,9 +1002,14 @@ macro_rules! define_state_machine_transitions { } impl StateMachine { - fn new(feerate_sat_per_kw: u32, is_initiator: bool, tx_locktime: AbsoluteLockTime) -> Self { + fn new( + holder_node_id: PublicKey, counterparty_node_id: PublicKey, feerate_sat_per_kw: u32, + is_initiator: bool, tx_locktime: AbsoluteLockTime, + ) -> Self { let context = NegotiationContext { tx_locktime, + holder_node_id, + counterparty_node_id, holder_is_initiator: is_initiator, received_tx_add_input_count: 0, received_tx_add_output_count: 0, @@ -836,6 +1089,39 @@ pub(crate) enum InteractiveTxMessageSend { TxComplete(msgs::TxComplete), } +impl InteractiveTxMessageSend { + pub fn into_msg_send_event(self, counterparty_node_id: &PublicKey) -> MessageSendEvent { + match self { + InteractiveTxMessageSend::TxAddInput(msg) => { + MessageSendEvent::SendTxAddInput { node_id: *counterparty_node_id, msg } + }, + InteractiveTxMessageSend::TxAddOutput(msg) => { + MessageSendEvent::SendTxAddOutput { node_id: *counterparty_node_id, msg } + }, + InteractiveTxMessageSend::TxComplete(msg) => { + MessageSendEvent::SendTxComplete { node_id: *counterparty_node_id, msg } + }, + } + } +} + +pub(super) struct InteractiveTxMessageSendResult( + pub Result, +); + +impl InteractiveTxMessageSendResult { + pub fn into_msg_send_event(self, counterparty_node_id: &PublicKey) -> MessageSendEvent { + match self.0 { + Ok(interactive_tx_msg_send) => { + interactive_tx_msg_send.into_msg_send_event(counterparty_node_id) + }, + Err(tx_abort_msg) => { + MessageSendEvent::SendTxAbort { node_id: *counterparty_node_id, msg: tx_abort_msg } + }, + } + } +} + // This macro executes a state machine transition based on a provided action. macro_rules! 
do_state_transition { ($self: ident, $transition: ident, $msg: expr) => {{ @@ -864,8 +1150,41 @@ where pub(crate) enum HandleTxCompleteValue { SendTxMessage(InteractiveTxMessageSend), - SendTxComplete(InteractiveTxMessageSend, ConstructedTransaction), - NegotiationComplete(ConstructedTransaction), + SendTxComplete(InteractiveTxMessageSend, InteractiveTxSigningSession), + NegotiationComplete(InteractiveTxSigningSession), +} + +pub(super) struct HandleTxCompleteResult(pub Result); + +impl HandleTxCompleteResult { + pub fn into_msg_send_event( + self, counterparty_node_id: &PublicKey, + ) -> (Option, Option) { + match self.0 { + Ok(tx_complete_res) => { + let (tx_msg_opt, signing_session_opt) = match tx_complete_res { + HandleTxCompleteValue::SendTxMessage(msg) => (Some(msg), None), + HandleTxCompleteValue::SendTxComplete(msg, signing_session) => { + (Some(msg), Some(signing_session)) + }, + HandleTxCompleteValue::NegotiationComplete(signing_session) => { + (None, Some(signing_session)) + }, + }; + ( + tx_msg_opt.map(|tx_msg| tx_msg.into_msg_send_event(counterparty_node_id)), + signing_session_opt, + ) + }, + Err(tx_abort_msg) => ( + Some(MessageSendEvent::SendTxAbort { + node_id: *counterparty_node_id, + msg: tx_abort_msg, + }), + None, + ), + } + } } impl InteractiveTxConstructor { @@ -874,7 +1193,8 @@ impl InteractiveTxConstructor { /// A tuple is returned containing the newly instantiate `InteractiveTxConstructor` and optionally /// an initial wrapped `Tx_` message which the holder needs to send to the counterparty. pub fn new( - entropy_source: &ES, channel_id: ChannelId, feerate_sat_per_kw: u32, is_initiator: bool, + entropy_source: &ES, channel_id: ChannelId, feerate_sat_per_kw: u32, + holder_node_id: PublicKey, counterparty_node_id: PublicKey, is_initiator: bool, funding_tx_locktime: AbsoluteLockTime, inputs_to_contribute: Vec<(TxIn, TransactionU16LenLimited)>, outputs_to_contribute: Vec, @@ -882,8 +1202,13 @@ impl InteractiveTxConstructor { where ES::Target: EntropySource, { - let state_machine = - StateMachine::new(feerate_sat_per_kw, is_initiator, funding_tx_locktime); + let state_machine = StateMachine::new( + holder_node_id, + counterparty_node_id, + feerate_sat_per_kw, + is_initiator, + funding_tx_locktime, + ); let mut inputs_to_contribute: Vec<(SerialId, TxIn, TransactionU16LenLimited)> = inputs_to_contribute .into_iter() @@ -934,6 +1259,7 @@ impl InteractiveTxConstructor { prevtx, prevtx_out: input.previous_output.vout, sequence: input.sequence.to_consensus_u32(), + shared_input_txid: None, }; do_state_transition!(self, sent_tx_add_input, &msg)?; Ok(InteractiveTxMessageSend::TxAddInput(msg)) @@ -1024,26 +1350,31 @@ mod tests { InteractiveTxMessageSend, MAX_INPUTS_OUTPUTS_COUNT, MAX_RECEIVED_TX_ADD_INPUT_COUNT, MAX_RECEIVED_TX_ADD_OUTPUT_COUNT, }; + use crate::ln::msgs::{CommitmentSigned, TxSignatures}; use crate::ln::types::ChannelId; use crate::sign::EntropySource; use crate::util::atomic_counter::AtomicCounter; use crate::util::ser::TransactionU16LenLimited; + use bitcoin::absolute::LockTime; use bitcoin::blockdata::opcodes; use bitcoin::blockdata::script::Builder; use bitcoin::hashes::Hash; use bitcoin::key::UntweakedPublicKey; - use bitcoin::secp256k1::{KeyPair, Secp256k1}; + use bitcoin::secp256k1::ecdsa::Signature; + use bitcoin::secp256k1::{KeyPair, PublicKey, Secp256k1, SecretKey}; use bitcoin::{ - absolute::LockTime as AbsoluteLockTime, OutPoint, Sequence, Transaction, TxIn, TxOut, + absolute::LockTime as AbsoluteLockTime, OutPoint, Sequence, Transaction, TxIn, Txid, 
TxOut, Witness, }; use bitcoin::{PubkeyHash, ScriptBuf, WPubkeyHash, WScriptHash}; use core::ops::Deref; use super::{ - get_output_weight, P2TR_INPUT_WEIGHT_LOWER_BOUND, P2WPKH_INPUT_WEIGHT_LOWER_BOUND, + get_output_weight, ConstructedTransaction, InteractiveTxSigningSession, NegotiationContext, P2TR_INPUT_WEIGHT_LOWER_BOUND, P2WPKH_INPUT_WEIGHT_LOWER_BOUND, P2WSH_INPUT_WEIGHT_LOWER_BOUND, TX_COMMON_FIELDS_WEIGHT, }; + use crate::prelude::*; + const TEST_FEERATE_SATS_PER_KW: u32 = FEERATE_FLOOR_SATS_PER_KW * 10; // A simple entropy source that works based on an atomic counter. @@ -1115,12 +1446,22 @@ mod tests { { let channel_id = ChannelId(entropy_source.get_secure_random_bytes()); let tx_locktime = AbsoluteLockTime::from_height(1337).unwrap(); + let holder_node_id = PublicKey::from_secret_key( + &Secp256k1::signing_only(), + &SecretKey::from_slice(&[42; 32]).unwrap(), + ); + let counterparty_node_id = PublicKey::from_secret_key( + &Secp256k1::signing_only(), + &SecretKey::from_slice(&[43; 32]).unwrap(), + ); let (mut constructor_a, first_message_a) = InteractiveTxConstructor::new( entropy_source, channel_id, TEST_FEERATE_SATS_PER_KW, - true, + holder_node_id, + counterparty_node_id, + true, // is_initiator tx_locktime, session.inputs_a, session.outputs_a, @@ -1129,7 +1470,9 @@ mod tests { entropy_source, channel_id, TEST_FEERATE_SATS_PER_KW, - false, + holder_node_id, + counterparty_node_id, + false, // is_initiator tx_locktime, session.inputs_b, session.outputs_b, @@ -1166,9 +1509,10 @@ mod tests { while final_tx_a.is_none() || final_tx_b.is_none() { if let Some(message_send_a) = message_send_a.take() { match handle_message_send(message_send_a, &mut constructor_b) { - Ok((msg_send, final_tx)) => { + Ok((msg_send, interactive_signing_session)) => { message_send_b = msg_send; - final_tx_b = final_tx; + final_tx_b = + interactive_signing_session.map(|session| session.unsigned_tx.txid()); }, Err(abort_reason) => { let error_culprit = match abort_reason { @@ -1190,9 +1534,10 @@ mod tests { } if let Some(message_send_b) = message_send_b.take() { match handle_message_send(message_send_b, &mut constructor_a) { - Ok((msg_send, final_tx)) => { + Ok((msg_send, interactive_signing_session)) => { message_send_a = msg_send; - final_tx_a = final_tx; + final_tx_a = + interactive_signing_session.map(|session| session.unsigned_tx.txid()); }, Err(abort_reason) => { let error_culprit = match abort_reason { @@ -1215,7 +1560,7 @@ mod tests { } assert!(message_send_a.is_none()); assert!(message_send_b.is_none()); - assert_eq!(final_tx_a.unwrap().into_unsigned_tx(), final_tx_b.unwrap().into_unsigned_tx()); + assert_eq!(final_tx_a.unwrap(), final_tx_b.unwrap()); assert!(session.expect_error.is_none(), "Test: {}", session.description); } @@ -1637,4 +1982,243 @@ mod tests { assert_eq!(generate_holder_serial_id(&&entropy_source, true) % 2, 0); assert_eq!(generate_holder_serial_id(&&entropy_source, false) % 2, 1) } + + fn create_signing_session(is_initiator: bool, holder_sends_tx_signatures_first: bool) -> InteractiveTxSigningSession { + let dummy_pk = PublicKey::from_slice(&[2; 33]).unwrap(); + let context = NegotiationContext { + tx_locktime: LockTime::ZERO, + holder_node_id: dummy_pk, + counterparty_node_id: dummy_pk, + holder_is_initiator: is_initiator, + received_tx_add_input_count: 0, + received_tx_add_output_count: 0, + inputs: new_hash_map(), + prevtx_outpoints: new_hash_set(), + outputs: new_hash_map(), + feerate_sat_per_kw: 1024, + }; + InteractiveTxSigningSession { + unsigned_tx: 
ConstructedTransaction::new(context), + holder_sends_tx_signatures_first, + sent_commitment_signed: None, + received_commitment_signed: None, + holder_tx_signatures: None, + counterparty_tx_signatures: None, + shared_signature: None, + } + } + + /// Test various combination of event orders on `InteractiveTxSigningSession`. + #[test] + fn signing_session_receive_orders() { + let channel_id = ChannelId::from_bytes([1; 32]); + let signature = Signature::from_compact(&[4; 64]).unwrap(); + let tx_hash = Txid::from_slice(&[5; 32]).unwrap(); + let dummy_commitment_signed = CommitmentSigned { + channel_id, + signature, + htlc_signatures: vec![], + batch: None, + #[cfg(taproot)] + partial_signature_with_nonce: None, + }; + let dummy_tx_sigs = TxSignatures { + channel_id, + tx_hash, + witnesses: vec![], + shared_input_signature: None, + }; + + // Order: local signature, received commitment, received signatures; CP sends first + { + let mut signing_session = create_signing_session(true, false); + + let res1 = signing_session.provide_holder_witnesses(channel_id, vec![Witness::new()], None); + assert_eq!(res1.0, None); // because no commitment_signed was received yet + assert_eq!(res1.1, None); // because no tx_sigs was received yet + + let res2 = signing_session.received_commitment_signed(dummy_commitment_signed.clone()); + assert_eq!(res2, None); // because we don't send it first + + let res3 = signing_session.received_tx_signatures(dummy_tx_sigs.clone()); + assert!(res3.0.is_some()); + assert!(res3.1.is_some()); + } + + // Order: received commitment, local signature, received signatures; CP sends first + { + let mut signing_session = create_signing_session(true, false); + + let res1 = signing_session.received_commitment_signed(dummy_commitment_signed.clone()); + assert_eq!(res1, None); // because we don't send it first + + let res2 = signing_session.provide_holder_witnesses(channel_id, vec![Witness::new()], None); + assert_eq!(res2.0, None); // because we don't send it first and tx_sigs was not yet received + assert_eq!(res2.1, None); // because no tx_sigs was received yet + + let res3 = signing_session.received_tx_signatures(dummy_tx_sigs.clone()); + assert!(res3.0.is_some()); + assert!(res3.1.is_some()); + } + + // Order: received commitment, received signatures, local signature; CP sends first + { + let mut signing_session = create_signing_session(true, false); + + let res1 = signing_session.received_commitment_signed(dummy_commitment_signed.clone()); + assert!(res1.is_none()); // because we don't send it first + + let res2 = signing_session.received_tx_signatures(dummy_tx_sigs.clone()); + assert_eq!(res2.0, None); // because there is no local signature yet + assert_eq!(res2.1, None); // because there is no local signature yet + + let res3 = signing_session.provide_holder_witnesses(channel_id, vec![Witness::new()], None); + assert!(res3.0.is_some()); + assert!(res3.1.is_some()); + } + + // Invalid order: local signature, received signatures, received commitment; CP sends first + { + let mut signing_session = create_signing_session(true, false); + + let res1 = signing_session.provide_holder_witnesses(channel_id, vec![Witness::new()], None); + assert_eq!(res1.0, None); // because no commitment_signed was received yet + assert_eq!(res1.1, None); // because no tx_sigs was received yet + + let res2 = signing_session.received_tx_signatures(dummy_tx_sigs.clone()); + assert_eq!(res2.0, None); // because no commitment_signed was received yet + assert!(res2.1.is_some()); + + let res3 = 
signing_session.received_commitment_signed(dummy_commitment_signed.clone()); + assert!(res3.is_some()); + } + + // Invalid order: received signatures, local signature, received commitment; CP sends first + { + let mut signing_session = create_signing_session(true, false); + + let res1 = signing_session.received_tx_signatures(dummy_tx_sigs.clone()); + assert_eq!(res1.0, None); // because there is no local signature yet + assert_eq!(res1.1, None); // because there is no local signature yet + + let res2 = signing_session.provide_holder_witnesses(channel_id, vec![Witness::new()], None); + assert_eq!(res2.0, None); // because we don't send it first and tx_sigs was not yet received + assert!(res2.1.is_some()); + + let res3 = signing_session.received_commitment_signed(dummy_commitment_signed.clone()); + assert!(res3.is_some()); + } + + // Invalid order: received signatures, received commitment, local signature; CP sends first + { + let mut signing_session = create_signing_session(true, false); + + let res1 = signing_session.received_tx_signatures(dummy_tx_sigs.clone()); + assert_eq!(res1.0, None); // because there is no local signature yet + assert_eq!(res1.1, None); // because there is no local signature yet + + let res2 = signing_session.received_commitment_signed(dummy_commitment_signed.clone()); + assert!(res2.is_none()); // because we don't send it first + + let res3 = signing_session.provide_holder_witnesses(channel_id, vec![Witness::new()], None); + assert!(res3.0.is_some()); + assert!(res3.1.is_some()); + } + + // Order: local signature, received commitment, received signatures; holder sends first + { + let mut signing_session = create_signing_session(true, true); + + let res1 = signing_session.provide_holder_witnesses(channel_id, vec![Witness::new()], None); + assert_eq!(res1.0, None); // because no commitment_signed was received yet + assert_eq!(res1.1, None); // because no tx_sigs was received yet + + let res2 = signing_session.received_commitment_signed(dummy_commitment_signed.clone()); + assert!(res2.is_some()); + + let res3 = signing_session.received_tx_signatures(dummy_tx_sigs.clone()); + assert_eq!(res3.0, None); // because we send it first + assert!(res3.1.is_some()); + } + + // Order: received commitment, local signature, received signatures; holder sends first + { + let mut signing_session = create_signing_session(true, true); + + let res1 = signing_session.received_commitment_signed(dummy_commitment_signed.clone()); + assert_eq!(res1, None); // because no tx_sigs was received yet + + let res2 = signing_session.provide_holder_witnesses(channel_id, vec![Witness::new()], None); + assert!(res2.0.is_some()); + assert_eq!(res2.1, None); // because no tx_sigs was received yet + + let res3 = signing_session.received_tx_signatures(dummy_tx_sigs.clone()); + assert_eq!(res3.0, None); // because we send it first + assert!(res3.1.is_some()); + } + + // Invalid order: received commitment, received signatures, local signature; holder sends first + { + let mut signing_session = create_signing_session(true, true); + + let res1 = signing_session.received_commitment_signed(dummy_commitment_signed.clone()); + assert_eq!(res1, None); // because tx_sigs was not yet received + + let res2 = signing_session.received_tx_signatures(dummy_tx_sigs.clone()); + assert_eq!(res2.0, None); // because there is no local signature yet + assert_eq!(res2.1, None); // because there is no local signature yet + + let res3 = signing_session.provide_holder_witnesses(channel_id, vec![Witness::new()], None); + 
assert_eq!(res3.0, None); // because we wanted to send it first + assert!(res3.1.is_some()); + } + + // Invalid order: local signature, received signatures, received commitment; holder sends first + { + let mut signing_session = create_signing_session(true, true); + + let res1 = signing_session.provide_holder_witnesses(channel_id, vec![Witness::new()], None); + assert_eq!(res1.0, None); // because no commitment_signed was received yet + assert_eq!(res1.1, None); // because no tx_sigs was received yet + + let res2 = signing_session.received_tx_signatures(dummy_tx_sigs.clone()); + assert_eq!(res2.0, None); // because we wanted to send it first + assert!(res2.1.is_some()); + + let res3 = signing_session.received_commitment_signed(dummy_commitment_signed.clone()); + assert_eq!(res3, None); // because we wanted to send it first + } + + // Invalid order: received signatures, local signature, received commitment; holder sends first + { + let mut signing_session = create_signing_session(true, true); + + let res1 = signing_session.received_tx_signatures(dummy_tx_sigs.clone()); + assert_eq!(res1.0, None); // because we wanted to send it first + assert_eq!(res1.1, None); // because there is no local signature yet + + let res2 = signing_session.provide_holder_witnesses(channel_id, vec![Witness::new()], None); + assert_eq!(res2.0, None); // because no commitment_signed was received yet + assert!(res2.1.is_some()); + + let res3 = signing_session.received_commitment_signed(dummy_commitment_signed.clone()); + assert_eq!(res3, None); // because we wanted to send it first + } + + // Invalid order: received signatures, received commitment, local signature; holder sends first + { + let mut signing_session = create_signing_session(true, true); + + let res1 = signing_session.received_tx_signatures(dummy_tx_sigs.clone()); + assert_eq!(res1.0, None); // because there is no local signature yet + assert_eq!(res1.1, None); // because there is no local signature yet + + let res2 = signing_session.received_commitment_signed(dummy_commitment_signed.clone()); + assert_eq!(res2, None); // because there is no local signature yet + + let res3 = signing_session.provide_holder_witnesses(channel_id, vec![Witness::new()], None); + assert_eq!(res3.0, None); // because we wanted to send it first + assert!(res3.1.is_some()); + } + } } diff --git a/lightning/src/ln/mod.rs b/lightning/src/ln/mod.rs index 7cbb2ce5ebe..61349c784a1 100644 --- a/lightning/src/ln/mod.rs +++ b/lightning/src/ln/mod.rs @@ -23,6 +23,8 @@ pub mod chan_utils; pub mod features; pub mod script; pub mod types; +#[cfg(splicing)] +mod channel_splice; pub use types::{ChannelId, PaymentHash, PaymentPreimage, PaymentSecret}; @@ -51,6 +53,9 @@ mod blinded_payment_tests; #[cfg(test)] #[allow(unused_mut)] mod functional_tests; +#[cfg(all(test, splicing))] +#[allow(unused_mut)] +mod functional_tests_splice; #[cfg(test)] #[allow(unused_mut)] mod payment_tests; diff --git a/lightning/src/ln/monitor_tests.rs b/lightning/src/ln/monitor_tests.rs index 52bda818583..761d4b5316c 100644 --- a/lightning/src/ln/monitor_tests.rs +++ b/lightning/src/ln/monitor_tests.rs @@ -19,7 +19,6 @@ use crate::ln::channel; use crate::ln::types::ChannelId; use crate::ln::channelmanager::{BREAKDOWN_TIMEOUT, PaymentId, RecipientOnionFields}; use crate::ln::msgs::ChannelMessageHandler; -use crate::util::config::UserConfig; use crate::crypto::utils::sign; use crate::util::ser::Writeable; use crate::util::scid_utils::block_from_scid; @@ -2250,7 +2249,7 @@ fn test_yield_anchors_events() { // emitted by 
LDK, such that the consumer can attach fees to the zero fee HTLC transactions. let mut chanmon_cfgs = create_chanmon_cfgs(2); let node_cfgs = create_node_cfgs(2, &chanmon_cfgs); - let mut anchors_config = UserConfig::default(); + let mut anchors_config = test_default_channel_config(); anchors_config.channel_handshake_config.announced_channel = true; anchors_config.channel_handshake_config.negotiate_anchors_zero_fee_htlc_tx = true; anchors_config.manually_accept_inbound_channels = true; @@ -2401,7 +2400,7 @@ fn test_anchors_aggregated_revoked_htlc_tx() { let bob_persister; let bob_chain_monitor; - let mut anchors_config = UserConfig::default(); + let mut anchors_config = test_default_channel_config(); anchors_config.channel_handshake_config.announced_channel = true; anchors_config.channel_handshake_config.negotiate_anchors_zero_fee_htlc_tx = true; anchors_config.manually_accept_inbound_channels = true; diff --git a/lightning/src/ln/msgs.rs b/lightning/src/ln/msgs.rs index 87e8a814d33..7667f6fbfeb 100644 --- a/lightning/src/ln/msgs.rs +++ b/lightning/src/ln/msgs.rs @@ -410,8 +410,8 @@ pub struct ChannelReady { /// construction. pub type SerialId = u64; -/// An stfu (quiescence) message to be sent by or received from the stfu initiator. -// TODO(splicing): Add spec link for `stfu`; still in draft, using from https://github.com/lightning/bolts/pull/863 +/// An [`stfu`] (quiescence) message to be sent by or received from the stfu initiator. +// TODO(splicing): Add spec link for `stfu`; still in draft, using from https://github.com/lightning/bolts/pull/1160 #[derive(Clone, Debug, PartialEq, Eq)] pub struct Stfu { /// The channel ID where quiescence is intended @@ -420,48 +420,50 @@ pub struct Stfu { pub initiator: u8, } -/// A splice message to be sent by or received from the stfu initiator (splice initiator). -// TODO(splicing): Add spec link for `splice`; still in draft, using from https://github.com/lightning/bolts/pull/863 +/// A [`splice_init`] message to be sent by or received from the stfu initiator (splice initiator). +// TODO(splicing): Add spec link for `splice_init`; still in draft, using from https://github.com/lightning/bolts/pull/1160 #[derive(Clone, Debug, PartialEq, Eq)] -pub struct Splice { +pub struct SpliceInit { /// The channel ID where splicing is intended pub channel_id: ChannelId, - /// The genesis hash of the blockchain where the channel is intended to be spliced - pub chain_hash: ChainHash, - /// The intended change in channel capacity: the amount to be added (positive value) - /// or removed (negative value) by the sender (splice initiator) by splicing into/from the channel. - pub relative_satoshis: i64, + /// The amount the splice initiator is intending to add to its channel balance (splice-in) + /// or remove from its channel balance (splice-out). + pub funding_contribution_satoshis: i64, /// The feerate for the new funding transaction, set by the splice initiator pub funding_feerate_perkw: u32, /// The locktime for the new funding transaction pub locktime: u32, /// The key of the sender (splice initiator) controlling the new funding transaction pub funding_pubkey: PublicKey, + /// If set, only confirmed inputs added (by the splice acceptor) will be accepted + pub require_confirmed_inputs: Option, } -/// A splice_ack message to be received by or sent to the splice initiator. +/// A [`splice_ack`] message to be received by or sent to the splice initiator. 
/// -// TODO(splicing): Add spec link for `splice_ack`; still in draft, using from https://github.com/lightning/bolts/pull/863 +// TODO(splicing): Add spec link for `splice_ack`; still in draft, using from https://github.com/lightning/bolts/pull/1160 #[derive(Clone, Debug, PartialEq, Eq)] pub struct SpliceAck { /// The channel ID where splicing is intended pub channel_id: ChannelId, - /// The genesis hash of the blockchain where the channel is intended to be spliced - pub chain_hash: ChainHash, - /// The intended change in channel capacity: the amount to be added (positive value) - /// or removed (negative value) by the sender (splice acceptor) by splicing into/from the channel. - pub relative_satoshis: i64, + /// The amount the splice acceptor is intending to add to its channel balance (splice-in) + /// or remove from its channel balance (splice-out). + pub funding_contribution_satoshis: i64, /// The key of the sender (splice acceptor) controlling the new funding transaction pub funding_pubkey: PublicKey, + /// If set, only confirmed inputs added (by the splice initiator) will be accepted + pub require_confirmed_inputs: Option, } -/// A splice_locked message to be sent to or received from a peer. +/// A [`splice_locked`] message to be sent to or received from a peer. /// -// TODO(splicing): Add spec link for `splice_locked`; still in draft, using from https://github.com/lightning/bolts/pull/863 +// TODO(splicing): Add spec link for `splice_locked`; still in draft, using from https://github.com/lightning/bolts/pull/1160 #[derive(Clone, Debug, PartialEq, Eq)] pub struct SpliceLocked { /// The channel ID pub channel_id: ChannelId, + /// The ID of the new funding transaction that has been locked + pub splice_txid: Txid, } /// A tx_add_input message for adding an input during interactive transaction construction @@ -481,6 +483,8 @@ pub struct TxAddInput { pub prevtx_out: u32, /// The sequence number of this input pub sequence: u32, + /// The ID of the previous funding transaction, when it is being added as an input during splicing + pub shared_input_txid: Option, } /// A tx_add_output message for adding an output during interactive transaction construction. @@ -544,7 +548,7 @@ pub struct TxSignatures { /// The list of witnesses pub witnesses: Vec, /// Optional signature for the shared input -- the previous funding outpoint -- signed by both peers - pub funding_outpoint_sig: Option, + pub shared_input_signature: Option, } /// A tx_init_rbf message which initiates a replacement of the transaction after it's been @@ -708,6 +712,15 @@ pub struct UpdateFailMalformedHTLC { pub failure_code: u16, } +/// Optional batch parameters for [`commitment_signed`] message. +#[derive(Clone, Debug, Hash, PartialEq, Eq)] +pub struct CommitmentSignedBatch { + /// Batch size N: all N [`commitment_signed`] messages must be received before being processed + pub batch_size: u16, + /// The funding transaction, to discriminate among multiple pending funding transactions (e.g. in case of splicing) + pub funding_txid: Txid, +} + /// A [`commitment_signed`] message to be sent to or received from a peer. 
/// /// [`commitment_signed`]: https://github.com/lightning/bolts/blob/master/02-peer-protocol.md#committing-updates-so-far-commitment_signed @@ -719,6 +732,8 @@ pub struct CommitmentSigned { pub signature: Signature, /// Signatures on the HTLC transactions pub htlc_signatures: Vec, + /// Optional batch size and other parameters + pub batch: Option, #[cfg(taproot)] /// The partial Taproot signature on the commitment transaction pub partial_signature_with_nonce: Option, @@ -1438,10 +1453,13 @@ pub trait ChannelMessageHandler : MessageSendEventsProvider { /// Handle an incoming `open_channel` message from the given peer. fn handle_open_channel(&self, their_node_id: &PublicKey, msg: &OpenChannel); /// Handle an incoming `open_channel2` message from the given peer. + #[cfg(any(dual_funding, splicing))] fn handle_open_channel_v2(&self, their_node_id: &PublicKey, msg: &OpenChannelV2); /// Handle an incoming `accept_channel` message from the given peer. fn handle_accept_channel(&self, their_node_id: &PublicKey, msg: &AcceptChannel); + #[cfg(any(dual_funding, splicing))] /// Handle an incoming `accept_channel2` message from the given peer. + #[cfg(any(dual_funding, splicing))] fn handle_accept_channel_v2(&self, their_node_id: &PublicKey, msg: &AcceptChannelV2); /// Handle an incoming `funding_created` message from the given peer. fn handle_funding_created(&self, their_node_id: &PublicKey, msg: &FundingCreated); @@ -1461,9 +1479,9 @@ pub trait ChannelMessageHandler : MessageSendEventsProvider { fn handle_stfu(&self, their_node_id: &PublicKey, msg: &Stfu); // Splicing - /// Handle an incoming `splice` message from the given peer. + /// Handle an incoming `splice_init` message from the given peer. #[cfg(splicing)] - fn handle_splice(&self, their_node_id: &PublicKey, msg: &Splice); + fn handle_splice_init(&self, their_node_id: &PublicKey, msg: &SpliceInit); /// Handle an incoming `splice_ack` message from the given peer. #[cfg(splicing)] fn handle_splice_ack(&self, their_node_id: &PublicKey, msg: &SpliceAck); @@ -1473,22 +1491,31 @@ pub trait ChannelMessageHandler : MessageSendEventsProvider { // Interactive channel construction /// Handle an incoming `tx_add_input message` from the given peer. + #[cfg(any(dual_funding, splicing))] fn handle_tx_add_input(&self, their_node_id: &PublicKey, msg: &TxAddInput); /// Handle an incoming `tx_add_output` message from the given peer. + #[cfg(any(dual_funding, splicing))] fn handle_tx_add_output(&self, their_node_id: &PublicKey, msg: &TxAddOutput); /// Handle an incoming `tx_remove_input` message from the given peer. + #[cfg(any(dual_funding, splicing))] fn handle_tx_remove_input(&self, their_node_id: &PublicKey, msg: &TxRemoveInput); /// Handle an incoming `tx_remove_output` message from the given peer. + #[cfg(any(dual_funding, splicing))] fn handle_tx_remove_output(&self, their_node_id: &PublicKey, msg: &TxRemoveOutput); /// Handle an incoming `tx_complete message` from the given peer. + #[cfg(any(dual_funding, splicing))] fn handle_tx_complete(&self, their_node_id: &PublicKey, msg: &TxComplete); /// Handle an incoming `tx_signatures` message from the given peer. + #[cfg(any(dual_funding, splicing))] fn handle_tx_signatures(&self, their_node_id: &PublicKey, msg: &TxSignatures); /// Handle an incoming `tx_init_rbf` message from the given peer. + #[cfg(any(dual_funding, splicing))] fn handle_tx_init_rbf(&self, their_node_id: &PublicKey, msg: &TxInitRbf); /// Handle an incoming `tx_ack_rbf` message from the given peer. 
+ #[cfg(any(dual_funding, splicing))] fn handle_tx_ack_rbf(&self, their_node_id: &PublicKey, msg: &TxAckRbf); /// Handle an incoming `tx_abort message` from the given peer. + #[cfg(any(dual_funding, splicing))] fn handle_tx_abort(&self, their_node_id: &PublicKey, msg: &TxAbort); // HTLC handling: @@ -2068,24 +2095,27 @@ impl_writeable_msg!(Stfu, { initiator, }, {}); -impl_writeable_msg!(Splice, { +impl_writeable_msg!(SpliceInit, { channel_id, - chain_hash, - relative_satoshis, + funding_contribution_satoshis, funding_feerate_perkw, locktime, funding_pubkey, -}, {}); +}, { + (2, require_confirmed_inputs, option), // `splice_init_tlvs` +}); impl_writeable_msg!(SpliceAck, { channel_id, - chain_hash, - relative_satoshis, + funding_contribution_satoshis, funding_pubkey, -}, {}); +}, { + (2, require_confirmed_inputs, option), // `splice_init_tlvs` +}); impl_writeable_msg!(SpliceLocked, { channel_id, + splice_txid, }, {}); impl_writeable_msg!(TxAddInput, { @@ -2094,7 +2124,9 @@ impl_writeable_msg!(TxAddInput, { prevtx, prevtx_out, sequence, -}, {}); +}, { + (0, shared_input_txid, option), // `funding_txid` +}); impl_writeable_msg!(TxAddOutput, { channel_id, @@ -2122,7 +2154,7 @@ impl_writeable_msg!(TxSignatures, { tx_hash, witnesses, }, { - (0, funding_outpoint_sig, option), + (0, shared_input_signature, option), // `signature` }); impl_writeable_msg!(TxInitRbf, { @@ -2171,12 +2203,19 @@ impl_writeable!(ClosingSignedFeeRange, { max_fee_satoshis }); +impl_writeable_msg!(CommitmentSignedBatch, { + batch_size, + funding_txid, +}, {}); + #[cfg(not(taproot))] impl_writeable_msg!(CommitmentSigned, { channel_id, signature, htlc_signatures -}, {}); +}, { + (0, batch, option), +}); #[cfg(taproot)] impl_writeable_msg!(CommitmentSigned, { @@ -2184,7 +2223,8 @@ impl_writeable_msg!(CommitmentSigned, { signature, htlc_signatures }, { - (2, partial_signature_with_nonce, option) + (0, batch, option), + (2, partial_signature_with_nonce, option), }); impl_writeable!(DecodedOnionErrorPacket, { @@ -3833,16 +3873,16 @@ mod tests { fn encoding_splice() { let secp_ctx = Secp256k1::new(); let (_, pubkey_1,) = get_keys_from!("0101010101010101010101010101010101010101010101010101010101010101", secp_ctx); - let splice = msgs::Splice { - chain_hash: ChainHash::from_hex("6fe28c0ab6f1b372c1a6a246ae63f74f931e8365e15a089c68d6190000000000").unwrap(), + let splice_init = msgs::SpliceInit { channel_id: ChannelId::from_bytes([2; 32]), - relative_satoshis: 123456, + funding_contribution_satoshis: -123456, funding_feerate_perkw: 2000, locktime: 0, funding_pubkey: pubkey_1, + require_confirmed_inputs: Some(true), }; - let encoded_value = splice.encode(); - assert_eq!(encoded_value.as_hex().to_string(), "02020202020202020202020202020202020202020202020202020202020202026fe28c0ab6f1b372c1a6a246ae63f74f931e8365e15a089c68d6190000000000000000000001e240000007d000000000031b84c5567b126440995d3ed5aaba0565d71e1834604819ff9c17f5e9d5dd078f"); + let encoded_value = splice_init.encode(); + assert_eq!(encoded_value.as_hex().to_string(), "0202020202020202020202020202020202020202020202020202020202020202fffffffffffe1dc0000007d000000000031b84c5567b126440995d3ed5aaba0565d71e1834604819ff9c17f5e9d5dd078f020101"); } #[test] @@ -3859,23 +3899,24 @@ mod tests { fn encoding_splice_ack() { let secp_ctx = Secp256k1::new(); let (_, pubkey_1,) = get_keys_from!("0101010101010101010101010101010101010101010101010101010101010101", secp_ctx); - let splice = msgs::SpliceAck { - chain_hash: 
ChainHash::from_hex("6fe28c0ab6f1b372c1a6a246ae63f74f931e8365e15a089c68d6190000000000").unwrap(), + let splice_ack = msgs::SpliceAck { channel_id: ChannelId::from_bytes([2; 32]), - relative_satoshis: 123456, + funding_contribution_satoshis: -123456, funding_pubkey: pubkey_1, + require_confirmed_inputs: Some(true), }; - let encoded_value = splice.encode(); - assert_eq!(encoded_value.as_hex().to_string(), "02020202020202020202020202020202020202020202020202020202020202026fe28c0ab6f1b372c1a6a246ae63f74f931e8365e15a089c68d6190000000000000000000001e240031b84c5567b126440995d3ed5aaba0565d71e1834604819ff9c17f5e9d5dd078f"); + let encoded_value = splice_ack.encode(); + assert_eq!(encoded_value.as_hex().to_string(), "0202020202020202020202020202020202020202020202020202020202020202fffffffffffe1dc0031b84c5567b126440995d3ed5aaba0565d71e1834604819ff9c17f5e9d5dd078f020101"); } #[test] fn encoding_splice_locked() { - let splice = msgs::SpliceLocked { + let splice_locked = msgs::SpliceLocked { channel_id: ChannelId::from_bytes([2; 32]), + splice_txid: Txid::from_str("c2d4449afa8d26140898dd54d3390b057ba2a5afcf03ba29d7dc0d8b9ffe966e").unwrap(), }; - let encoded_value = splice.encode(); - assert_eq!(encoded_value.as_hex().to_string(), "0202020202020202020202020202020202020202020202020202020202020202"); + let encoded_value = splice_locked.encode(); + assert_eq!(encoded_value.as_hex().to_string(), "02020202020202020202020202020202020202020202020202020202020202026e96fe9f8b0ddcd729ba03cfafa5a27b050b39d354dd980814268dfa9a44d4c2"); } #[test] @@ -3907,10 +3948,11 @@ mod tests { }).unwrap(), prevtx_out: 305419896, sequence: 305419896, + shared_input_txid: Some(Txid::from_str("c2d4449afa8d26140898dd54d3390b057ba2a5afcf03ba29d7dc0d8b9ffe966e").unwrap()), }; let encoded_value = tx_add_input.encode(); - let target_value = >::from_hex("0202020202020202020202020202020202020202020202020202020202020202000000012345678900de02000000000101779ced6c148293f86b60cb222108553d22c89207326bb7b6b897e23e64ab5b300200000000fdffffff0236dbc1000000000016001417d29e4dd454bac3b1cde50d1926da80cfc5287b9cbd03000000000016001436ec78d514df462da95e6a00c24daa8915362d420247304402206af85b7dd67450ad12c979302fac49dfacbc6a8620f49c5da2b5721cf9565ca502207002b32fed9ce1bf095f57aeb10c36928ac60b12e723d97d2964a54640ceefa701210301ab7dc16488303549bfcdd80f6ae5ee4c20bf97ab5410bbd6b1bfa85dcd6944000000001234567812345678").unwrap(); - assert_eq!(encoded_value, target_value); + let target_value = "0202020202020202020202020202020202020202020202020202020202020202000000012345678900de02000000000101779ced6c148293f86b60cb222108553d22c89207326bb7b6b897e23e64ab5b300200000000fdffffff0236dbc1000000000016001417d29e4dd454bac3b1cde50d1926da80cfc5287b9cbd03000000000016001436ec78d514df462da95e6a00c24daa8915362d420247304402206af85b7dd67450ad12c979302fac49dfacbc6a8620f49c5da2b5721cf9565ca502207002b32fed9ce1bf095f57aeb10c36928ac60b12e723d97d2964a54640ceefa701210301ab7dc16488303549bfcdd80f6ae5ee4c20bf97ab5410bbd6b1bfa85dcd694400000000123456781234567800206e96fe9f8b0ddcd729ba03cfafa5a27b050b39d354dd980814268dfa9a44d4c2"; + assert_eq!(encoded_value.as_hex().to_string(), target_value); } #[test] @@ -3975,7 +4017,7 @@ mod tests { >::from_hex("3045022100ee00dbf4a862463e837d7c08509de814d620e4d9830fa84818713e0fa358f145022021c3c7060c4d53fe84fd165d60208451108a778c13b92ca4c6bad439236126cc01").unwrap(), >::from_hex("028fbbf0b16f5ba5bcb5dd37cd4047ce6f726a21c06682f9ec2f52b057de1dbdb5").unwrap()]), ], - funding_outpoint_sig: Some(sig_1), + shared_input_signature: Some(sig_1), }; let encoded_value = 
tx_signatures.encode(); let mut target_value = >::from_hex("0202020202020202020202020202020202020202020202020202020202020202").unwrap(); // channel_id @@ -4204,17 +4246,19 @@ mod tests { channel_id: ChannelId::from_bytes([2; 32]), signature: sig_1, htlc_signatures: if htlcs { vec![sig_2, sig_3, sig_4] } else { Vec::new() }, + batch: Some(msgs::CommitmentSignedBatch { batch_size: 3, funding_txid: Txid::from_str("c2d4449afa8d26140898dd54d3390b057ba2a5afcf03ba29d7dc0d8b9ffe966e").unwrap() }), #[cfg(taproot)] partial_signature_with_nonce: None, }; let encoded_value = commitment_signed.encode(); - let mut target_value = >::from_hex("0202020202020202020202020202020202020202020202020202020202020202d977cb9b53d93a6ff64bb5f1e158b4094b66e798fb12911168a3ccdf80a83096340a6a95da0ae8d9f776528eecdbb747eb6b545495a4319ed5378e35b21e073a").unwrap(); + let mut target_value = "0202020202020202020202020202020202020202020202020202020202020202d977cb9b53d93a6ff64bb5f1e158b4094b66e798fb12911168a3ccdf80a83096340a6a95da0ae8d9f776528eecdbb747eb6b545495a4319ed5378e35b21e073a".to_string(); if htlcs { - target_value.append(&mut >::from_hex("00031735b6a427e80d5fe7cd90a2f4ee08dc9c27cda7c35a4172e5d85b12c49d4232537e98f9b1f3c5e6989a8b9644e90e8918127680dbd0d4043510840fc0f1e11a216c280b5395a2546e7e4b2663e04f811622f15a4f91e83aa2e92ba2a573c139142c54ae63072a1ec1ee7dc0c04bde5c847806172aa05c92c22ae8e308d1d2692b12cc195ce0a2d1bda6a88befa19fa07f51caa75ce83837f28965600b8aacab0855ffb0e741ec5f7c41421e9829a9d48611c8c831f71be5ea73e66594977ffd").unwrap()); + target_value += "00031735b6a427e80d5fe7cd90a2f4ee08dc9c27cda7c35a4172e5d85b12c49d4232537e98f9b1f3c5e6989a8b9644e90e8918127680dbd0d4043510840fc0f1e11a216c280b5395a2546e7e4b2663e04f811622f15a4f91e83aa2e92ba2a573c139142c54ae63072a1ec1ee7dc0c04bde5c847806172aa05c92c22ae8e308d1d2692b12cc195ce0a2d1bda6a88befa19fa07f51caa75ce83837f28965600b8aacab0855ffb0e741ec5f7c41421e9829a9d48611c8c831f71be5ea73e66594977ffd"; } else { - target_value.append(&mut >::from_hex("0000").unwrap()); + target_value += "0000"; } - assert_eq!(encoded_value, target_value); + target_value += "002200036e96fe9f8b0ddcd729ba03cfafa5a27b050b39d354dd980814268dfa9a44d4c2"; // batch + assert_eq!(encoded_value.as_hex().to_string(), target_value); } #[test] diff --git a/lightning/src/ln/onion_route_tests.rs b/lightning/src/ln/onion_route_tests.rs index 5ca4b4d5722..f4984730057 100644 --- a/lightning/src/ln/onion_route_tests.rs +++ b/lightning/src/ln/onion_route_tests.rs @@ -21,6 +21,7 @@ use crate::ln::onion_utils; use crate::routing::gossip::{NetworkUpdate, RoutingFees}; use crate::routing::router::{get_route, PaymentParameters, Route, RouteParameters, RouteHint, RouteHintHop}; use crate::ln::features::{InitFeatures, Bolt11InvoiceFeatures}; +use crate::ln::functional_test_utils::test_default_channel_config; use crate::ln::msgs; use crate::ln::msgs::{ChannelMessageHandler, ChannelUpdate, OutboundTrampolinePayload}; use crate::ln::wire::Encode; @@ -328,7 +329,7 @@ fn test_onion_failure() { // to 2000, which is above the default value of 1000 set in create_node_chanmgrs. // This exposed a previous bug because we were using the wrong value all the way down in // Channel::get_counterparty_htlc_minimum_msat(). 
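As a cross-check on the new test vectors above, the `batch` TLV appended to the `CommitmentSigned` encoding and the signed `funding_contribution_satoshis` field in the splice vectors can be decoded by hand. The following standalone sketch uses only the Rust standard library (it is not LDK serialization code); the hex constants are copied from the tests above.

```rust
fn main() {
    // `funding_contribution_satoshis` is signed; -123456 is encoded as its big-endian
    // two's complement, which is the `fffffffffffe1dc0` seen in the splice_init and
    // splice_ack test vectors above.
    assert_eq!(format!("{:016x}", (-123_456i64) as u64), "fffffffffffe1dc0");

    // The `batch` TLV appended to the commitment_signed vector: type 0x00, then
    // length 0x22 (34 bytes = 2-byte batch_size + 32-byte funding_txid).
    let tlv_hex = "002200036e96fe9f8b0ddcd729ba03cfafa5a27b050b39d354dd980814268dfa9a44d4c2";
    let bytes: Vec<u8> = (0..tlv_hex.len())
        .step_by(2)
        .map(|i| u8::from_str_radix(&tlv_hex[i..i + 2], 16).unwrap())
        .collect();
    assert_eq!((bytes[0], bytes[1]), (0x00, 0x22));
    assert_eq!(u16::from_be_bytes([bytes[2], bytes[3]]), 3); // batch_size

    // Txids are serialized in reverse of their display order, so reversing the
    // remaining 32 bytes recovers the funding_txid string used to build the message.
    let txid_display: String = bytes[4..].iter().rev().map(|b| format!("{:02x}", b)).collect();
    assert_eq!(txid_display, "c2d4449afa8d26140898dd54d3390b057ba2a5afcf03ba29d7dc0d8b9ffe966e");
}
```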
- let mut node_2_cfg: UserConfig = Default::default(); + let mut node_2_cfg: UserConfig = test_default_channel_config(); node_2_cfg.channel_handshake_config.our_htlc_minimum_msat = 2000; node_2_cfg.channel_handshake_config.announced_channel = true; node_2_cfg.channel_handshake_limits.force_announced_channel_preference = false; diff --git a/lightning/src/ln/peer_handler.rs b/lightning/src/ln/peer_handler.rs index 17e46a274b1..b420161ec1f 100644 --- a/lightning/src/ln/peer_handler.rs +++ b/lightning/src/ln/peer_handler.rs @@ -249,7 +249,7 @@ impl ChannelMessageHandler for ErroringMessageHandler { ErroringMessageHandler::push_error(&self, their_node_id, msg.channel_id); } #[cfg(splicing)] - fn handle_splice(&self, their_node_id: &PublicKey, msg: &msgs::Splice) { + fn handle_splice_init(&self, their_node_id: &PublicKey, msg: &msgs::SpliceInit) { ErroringMessageHandler::push_error(&self, their_node_id, msg.channel_id); } #[cfg(splicing)] @@ -306,6 +306,7 @@ impl ChannelMessageHandler for ErroringMessageHandler { features.set_basic_mpp_optional(); features.set_wumbo_optional(); features.set_shutdown_any_segwit_optional(); + features.set_dual_fund_optional(); features.set_channel_type_optional(); features.set_scid_privacy_optional(); features.set_zero_conf_optional(); @@ -320,46 +321,57 @@ impl ChannelMessageHandler for ErroringMessageHandler { None } + #[cfg(any(dual_funding, splicing))] fn handle_open_channel_v2(&self, their_node_id: &PublicKey, msg: &msgs::OpenChannelV2) { ErroringMessageHandler::push_error(self, their_node_id, msg.common_fields.temporary_channel_id); } + #[cfg(any(dual_funding, splicing))] fn handle_accept_channel_v2(&self, their_node_id: &PublicKey, msg: &msgs::AcceptChannelV2) { ErroringMessageHandler::push_error(self, their_node_id, msg.common_fields.temporary_channel_id); } + #[cfg(any(dual_funding, splicing))] fn handle_tx_add_input(&self, their_node_id: &PublicKey, msg: &msgs::TxAddInput) { ErroringMessageHandler::push_error(self, their_node_id, msg.channel_id); } + #[cfg(any(dual_funding, splicing))] fn handle_tx_add_output(&self, their_node_id: &PublicKey, msg: &msgs::TxAddOutput) { ErroringMessageHandler::push_error(self, their_node_id, msg.channel_id); } + #[cfg(any(dual_funding, splicing))] fn handle_tx_remove_input(&self, their_node_id: &PublicKey, msg: &msgs::TxRemoveInput) { ErroringMessageHandler::push_error(self, their_node_id, msg.channel_id); } + #[cfg(any(dual_funding, splicing))] fn handle_tx_remove_output(&self, their_node_id: &PublicKey, msg: &msgs::TxRemoveOutput) { ErroringMessageHandler::push_error(self, their_node_id, msg.channel_id); } + #[cfg(any(dual_funding, splicing))] fn handle_tx_complete(&self, their_node_id: &PublicKey, msg: &msgs::TxComplete) { ErroringMessageHandler::push_error(self, their_node_id, msg.channel_id); } + #[cfg(any(dual_funding, splicing))] fn handle_tx_signatures(&self, their_node_id: &PublicKey, msg: &msgs::TxSignatures) { ErroringMessageHandler::push_error(self, their_node_id, msg.channel_id); } + #[cfg(any(dual_funding, splicing))] fn handle_tx_init_rbf(&self, their_node_id: &PublicKey, msg: &msgs::TxInitRbf) { ErroringMessageHandler::push_error(self, their_node_id, msg.channel_id); } + #[cfg(any(dual_funding, splicing))] fn handle_tx_ack_rbf(&self, their_node_id: &PublicKey, msg: &msgs::TxAckRbf) { ErroringMessageHandler::push_error(self, their_node_id, msg.channel_id); } + #[cfg(any(dual_funding, splicing))] fn handle_tx_abort(&self, their_node_id: &PublicKey, msg: &msgs::TxAbort) { 
ErroringMessageHandler::push_error(self, their_node_id, msg.channel_id); } @@ -1762,12 +1774,14 @@ impl { self.message_handler.chan_handler.handle_open_channel(&their_node_id, &msg); }, + #[cfg(any(dual_funding, splicing))] wire::Message::OpenChannelV2(msg) => { self.message_handler.chan_handler.handle_open_channel_v2(&their_node_id, &msg); }, wire::Message::AcceptChannel(msg) => { self.message_handler.chan_handler.handle_accept_channel(&their_node_id, &msg); }, + #[cfg(any(dual_funding, splicing))] wire::Message::AcceptChannelV2(msg) => { self.message_handler.chan_handler.handle_accept_channel_v2(&their_node_id, &msg); }, @@ -1789,8 +1803,8 @@ impl { - self.message_handler.chan_handler.handle_splice(&their_node_id, &msg); + wire::Message::SpliceInit(msg) => { + self.message_handler.chan_handler.handle_splice_init(&their_node_id, &msg); } #[cfg(splicing)] wire::Message::SpliceAck(msg) => { @@ -1802,30 +1816,39 @@ impl { self.message_handler.chan_handler.handle_tx_add_input(&their_node_id, &msg); }, + #[cfg(any(dual_funding, splicing))] wire::Message::TxAddOutput(msg) => { self.message_handler.chan_handler.handle_tx_add_output(&their_node_id, &msg); }, + #[cfg(any(dual_funding, splicing))] wire::Message::TxRemoveInput(msg) => { self.message_handler.chan_handler.handle_tx_remove_input(&their_node_id, &msg); }, + #[cfg(any(dual_funding, splicing))] wire::Message::TxRemoveOutput(msg) => { self.message_handler.chan_handler.handle_tx_remove_output(&their_node_id, &msg); }, + #[cfg(any(dual_funding, splicing))] wire::Message::TxComplete(msg) => { self.message_handler.chan_handler.handle_tx_complete(&their_node_id, &msg); }, + #[cfg(any(dual_funding, splicing))] wire::Message::TxSignatures(msg) => { self.message_handler.chan_handler.handle_tx_signatures(&their_node_id, &msg); }, + #[cfg(any(dual_funding, splicing))] wire::Message::TxInitRbf(msg) => { self.message_handler.chan_handler.handle_tx_init_rbf(&their_node_id, &msg); }, + #[cfg(any(dual_funding, splicing))] wire::Message::TxAckRbf(msg) => { self.message_handler.chan_handler.handle_tx_ack_rbf(&their_node_id, &msg); }, + #[cfg(any(dual_funding, splicing))] wire::Message::TxAbort(msg) => { self.message_handler.chan_handler.handle_tx_abort(&their_node_id, &msg); } @@ -2128,9 +2151,9 @@ impl { + MessageSendEvent::SendSpliceInit { ref node_id, ref msg} => { let logger = WithContext::from(&self.logger, Some(*node_id), Some(msg.channel_id)); - log_debug!(logger, "Handling SendSplice event in peer_handler for node {} for channel {}", + log_debug!(logger, "Handling SendSpliceInit event in peer_handler for node {} for channel {}", log_pubkey!(node_id), &msg.channel_id); self.enqueue_message(&mut *get_peer_for_forwarding!(node_id), msg); diff --git a/lightning/src/ln/wire.rs b/lightning/src/ln/wire.rs index 55e31399ae1..5735797d12a 100644 --- a/lightning/src/ln/wire.rs +++ b/lightning/src/ln/wire.rs @@ -54,26 +54,37 @@ pub(crate) enum Message where T: core::fmt::Debug + Type + TestEq { Ping(msgs::Ping), Pong(msgs::Pong), OpenChannel(msgs::OpenChannel), + #[cfg(any(dual_funding, splicing))] OpenChannelV2(msgs::OpenChannelV2), AcceptChannel(msgs::AcceptChannel), + #[cfg(any(dual_funding, splicing))] AcceptChannelV2(msgs::AcceptChannelV2), FundingCreated(msgs::FundingCreated), FundingSigned(msgs::FundingSigned), Stfu(msgs::Stfu), #[cfg(splicing)] - Splice(msgs::Splice), + SpliceInit(msgs::SpliceInit), #[cfg(splicing)] SpliceAck(msgs::SpliceAck), #[cfg(splicing)] SpliceLocked(msgs::SpliceLocked), + #[cfg(any(dual_funding, splicing))] 
TxAddInput(msgs::TxAddInput), + #[cfg(any(dual_funding, splicing))] TxAddOutput(msgs::TxAddOutput), + #[cfg(any(dual_funding, splicing))] TxRemoveInput(msgs::TxRemoveInput), + #[cfg(any(dual_funding, splicing))] TxRemoveOutput(msgs::TxRemoveOutput), + #[cfg(any(dual_funding, splicing))] TxComplete(msgs::TxComplete), + #[cfg(any(dual_funding, splicing))] TxSignatures(msgs::TxSignatures), + #[cfg(any(dual_funding, splicing))] TxInitRbf(msgs::TxInitRbf), + #[cfg(any(dual_funding, splicing))] TxAckRbf(msgs::TxAckRbf), + #[cfg(any(dual_funding, splicing))] TxAbort(msgs::TxAbort), ChannelReady(msgs::ChannelReady), Shutdown(msgs::Shutdown), @@ -112,26 +123,37 @@ impl Writeable for Message where T: core::fmt::Debug + Type + TestEq { &Message::Ping(ref msg) => msg.write(writer), &Message::Pong(ref msg) => msg.write(writer), &Message::OpenChannel(ref msg) => msg.write(writer), + #[cfg(any(dual_funding, splicing))] &Message::OpenChannelV2(ref msg) => msg.write(writer), &Message::AcceptChannel(ref msg) => msg.write(writer), + #[cfg(any(dual_funding, splicing))] &Message::AcceptChannelV2(ref msg) => msg.write(writer), &Message::FundingCreated(ref msg) => msg.write(writer), &Message::FundingSigned(ref msg) => msg.write(writer), &Message::Stfu(ref msg) => msg.write(writer), #[cfg(splicing)] - &Message::Splice(ref msg) => msg.write(writer), + &Message::SpliceInit(ref msg) => msg.write(writer), #[cfg(splicing)] &Message::SpliceAck(ref msg) => msg.write(writer), #[cfg(splicing)] &Message::SpliceLocked(ref msg) => msg.write(writer), + #[cfg(any(dual_funding, splicing))] &Message::TxAddInput(ref msg) => msg.write(writer), + #[cfg(any(dual_funding, splicing))] &Message::TxAddOutput(ref msg) => msg.write(writer), + #[cfg(any(dual_funding, splicing))] &Message::TxRemoveInput(ref msg) => msg.write(writer), + #[cfg(any(dual_funding, splicing))] &Message::TxRemoveOutput(ref msg) => msg.write(writer), + #[cfg(any(dual_funding, splicing))] &Message::TxComplete(ref msg) => msg.write(writer), + #[cfg(any(dual_funding, splicing))] &Message::TxSignatures(ref msg) => msg.write(writer), + #[cfg(any(dual_funding, splicing))] &Message::TxInitRbf(ref msg) => msg.write(writer), + #[cfg(any(dual_funding, splicing))] &Message::TxAckRbf(ref msg) => msg.write(writer), + #[cfg(any(dual_funding, splicing))] &Message::TxAbort(ref msg) => msg.write(writer), &Message::ChannelReady(ref msg) => msg.write(writer), &Message::Shutdown(ref msg) => msg.write(writer), @@ -170,26 +192,37 @@ impl Type for Message where T: core::fmt::Debug + Type + TestEq { &Message::Ping(ref msg) => msg.type_id(), &Message::Pong(ref msg) => msg.type_id(), &Message::OpenChannel(ref msg) => msg.type_id(), + #[cfg(any(dual_funding, splicing))] &Message::OpenChannelV2(ref msg) => msg.type_id(), &Message::AcceptChannel(ref msg) => msg.type_id(), + #[cfg(any(dual_funding, splicing))] &Message::AcceptChannelV2(ref msg) => msg.type_id(), &Message::FundingCreated(ref msg) => msg.type_id(), &Message::FundingSigned(ref msg) => msg.type_id(), &Message::Stfu(ref msg) => msg.type_id(), #[cfg(splicing)] - &Message::Splice(ref msg) => msg.type_id(), + &Message::SpliceInit(ref msg) => msg.type_id(), #[cfg(splicing)] &Message::SpliceAck(ref msg) => msg.type_id(), #[cfg(splicing)] &Message::SpliceLocked(ref msg) => msg.type_id(), + #[cfg(any(dual_funding, splicing))] &Message::TxAddInput(ref msg) => msg.type_id(), + #[cfg(any(dual_funding, splicing))] &Message::TxAddOutput(ref msg) => msg.type_id(), + #[cfg(any(dual_funding, splicing))] &Message::TxRemoveInput(ref msg) => 
msg.type_id(), + #[cfg(any(dual_funding, splicing))] &Message::TxRemoveOutput(ref msg) => msg.type_id(), + #[cfg(any(dual_funding, splicing))] &Message::TxComplete(ref msg) => msg.type_id(), + #[cfg(any(dual_funding, splicing))] &Message::TxSignatures(ref msg) => msg.type_id(), + #[cfg(any(dual_funding, splicing))] &Message::TxInitRbf(ref msg) => msg.type_id(), + #[cfg(any(dual_funding, splicing))] &Message::TxAckRbf(ref msg) => msg.type_id(), + #[cfg(any(dual_funding, splicing))] &Message::TxAbort(ref msg) => msg.type_id(), &Message::ChannelReady(ref msg) => msg.type_id(), &Message::Shutdown(ref msg) => msg.type_id(), @@ -264,12 +297,14 @@ fn do_read(buffer: &mut R, message_type: u1 msgs::OpenChannel::TYPE => { Ok(Message::OpenChannel(Readable::read(buffer)?)) }, + #[cfg(any(dual_funding, splicing))] msgs::OpenChannelV2::TYPE => { Ok(Message::OpenChannelV2(Readable::read(buffer)?)) }, msgs::AcceptChannel::TYPE => { Ok(Message::AcceptChannel(Readable::read(buffer)?)) }, + #[cfg(any(dual_funding, splicing))] msgs::AcceptChannelV2::TYPE => { Ok(Message::AcceptChannelV2(Readable::read(buffer)?)) }, @@ -280,8 +315,8 @@ fn do_read(buffer: &mut R, message_type: u1 Ok(Message::FundingSigned(Readable::read(buffer)?)) }, #[cfg(splicing)] - msgs::Splice::TYPE => { - Ok(Message::Splice(Readable::read(buffer)?)) + msgs::SpliceInit::TYPE => { + Ok(Message::SpliceInit(Readable::read(buffer)?)) }, msgs::Stfu::TYPE => { Ok(Message::Stfu(Readable::read(buffer)?)) @@ -294,30 +329,39 @@ fn do_read(buffer: &mut R, message_type: u1 msgs::SpliceLocked::TYPE => { Ok(Message::SpliceLocked(Readable::read(buffer)?)) }, + #[cfg(any(dual_funding, splicing))] msgs::TxAddInput::TYPE => { Ok(Message::TxAddInput(Readable::read(buffer)?)) }, + #[cfg(any(dual_funding, splicing))] msgs::TxAddOutput::TYPE => { Ok(Message::TxAddOutput(Readable::read(buffer)?)) }, + #[cfg(any(dual_funding, splicing))] msgs::TxRemoveInput::TYPE => { Ok(Message::TxRemoveInput(Readable::read(buffer)?)) }, + #[cfg(any(dual_funding, splicing))] msgs::TxRemoveOutput::TYPE => { Ok(Message::TxRemoveOutput(Readable::read(buffer)?)) }, + #[cfg(any(dual_funding, splicing))] msgs::TxComplete::TYPE => { Ok(Message::TxComplete(Readable::read(buffer)?)) }, + #[cfg(any(dual_funding, splicing))] msgs::TxSignatures::TYPE => { Ok(Message::TxSignatures(Readable::read(buffer)?)) }, + #[cfg(any(dual_funding, splicing))] msgs::TxInitRbf::TYPE => { Ok(Message::TxInitRbf(Readable::read(buffer)?)) }, + #[cfg(any(dual_funding, splicing))] msgs::TxAckRbf::TYPE => { Ok(Message::TxAckRbf(Readable::read(buffer)?)) }, + #[cfg(any(dual_funding, splicing))] msgs::TxAbort::TYPE => { Ok(Message::TxAbort(Readable::read(buffer)?)) }, @@ -504,13 +548,13 @@ impl Encode for msgs::AcceptChannelV2 { const TYPE: u16 = 65; } -impl Encode for msgs::Splice { - // TODO(splicing) Double check with finalized spec; draft spec contains 74, which is probably wrong as it is used by tx_Abort; CLN uses 75 - const TYPE: u16 = 75; +impl Encode for msgs::SpliceInit { + // TODO(splicing) Double check with finalized spec; draft spec contains 80; previously it was 74 (conflict with tx_abort); CLN used 75 + const TYPE: u16 = 80; } impl Encode for msgs::SpliceAck { - const TYPE: u16 = 76; + const TYPE: u16 = 81; } impl Encode for msgs::SpliceLocked { diff --git a/lightning/src/sign/ecdsa.rs b/lightning/src/sign/ecdsa.rs index a0409e54505..4282bf8ca5e 100644 --- a/lightning/src/sign/ecdsa.rs +++ b/lightning/src/sign/ecdsa.rs @@ -210,6 +210,12 @@ pub trait EcdsaChannelSigner: ChannelSigner { fn 
sign_channel_announcement_with_funding_key( &self, msg: &UnsignedChannelAnnouncement, secp_ctx: &Secp256k1, ) -> Result; + + /// #SPLICING + /// Create a signature for a splicing funding transaction, for the input which is the previous funding tx. + fn sign_splicing_funding_input( + &self, splicing_tx: &Transaction, splice_prev_funding_input_index: u16, splice_prev_funding_input_value: u64, /*redeem_script: &Script, */secp_ctx: &Secp256k1 + ) -> Result; } /// A writeable signer. diff --git a/lightning/src/sign/mod.rs b/lightning/src/sign/mod.rs index 79edf0aed2e..14d72198283 100644 --- a/lightning/src/sign/mod.rs +++ b/lightning/src/sign/mod.rs @@ -1672,6 +1672,19 @@ impl EcdsaChannelSigner for InMemorySigner { let msghash = hash_to_message!(&Sha256dHash::hash(&msg.encode()[..])[..]); Ok(secp_ctx.sign_ecdsa(&msghash, &self.funding_key)) } + + /// #SPLICING + /// #SPLICE-SIG + fn sign_splicing_funding_input(&self, splicing_tx: &Transaction, splice_prev_funding_input_index: u16, splice_prev_funding_input_value: u64, /*_redeem_script0: &Script, */secp_ctx: &Secp256k1) -> Result { + let funding_pubkey = PublicKey::from_secret_key(secp_ctx, &self.funding_key); + let counterparty_keys = self.counterparty_pubkeys().expect(MISSING_PARAMS_ERR); + let funding_redeemscript = make_funding_redeemscript(&funding_pubkey, &counterparty_keys.funding_pubkey); + let sighash = &sighash::SighashCache::new(splicing_tx).segwit_signature_hash(splice_prev_funding_input_index as usize, &funding_redeemscript, splice_prev_funding_input_value, EcdsaSighashType::All).unwrap()[..]; + let msg = hash_to_message!(sighash); + let sig = sign(secp_ctx, &msg, &self.funding_key); + Ok(sig) + } + } #[cfg(taproot)] diff --git a/lightning/src/util/config.rs b/lightning/src/util/config.rs index 2c8f03b93c8..a2a9cb6e545 100644 --- a/lightning/src/util/config.rs +++ b/lightning/src/util/config.rs @@ -360,7 +360,7 @@ impl Readable for ChannelHandshakeLimits { } } -/// Options for how to set the max dust HTLC exposure allowed on a channel. See +/// Options for how to set the max dust exposure allowed on a channel. See /// [`ChannelConfig::max_dust_htlc_exposure`] for details. #[derive(Copy, Clone, Debug, PartialEq, Eq)] pub enum MaxDustHTLCExposure { @@ -374,19 +374,17 @@ pub enum MaxDustHTLCExposure { /// to this maximum the channel may be unable to send/receive HTLCs between the maximum dust /// exposure and the new minimum value for HTLCs to be economically viable to claim. FixedLimitMsat(u64), - /// This sets a multiplier on the estimated high priority feerate (sats/KW, as obtained from - /// [`FeeEstimator`]) to determine the maximum allowed dust exposure. If this variant is used - /// then the maximum dust exposure in millisatoshis is calculated as: - /// `high_priority_feerate_per_kw * value`. For example, with our default value - /// `FeeRateMultiplier(5000)`: + /// This sets a multiplier on the [`ConfirmationTarget::OnChainSweep`] feerate (in sats/KW) to + /// determine the maximum allowed dust exposure. If this variant is used then the maximum dust + /// exposure in millisatoshis is calculated as: + /// `feerate_per_kw * value`. For example, with our default value + /// `FeeRateMultiplier(10_000)`: /// /// - For the minimum fee rate of 1 sat/vByte (250 sat/KW, although the minimum /// defaults to 253 sats/KW for rounding, see [`FeeEstimator`]), the max dust exposure would - /// be 253 * 5000 = 1,265,000 msats. + /// be 253 * 10_000 = 2,530,000 msats. 
/// - For a fee rate of 30 sat/vByte (7500 sat/KW), the max dust exposure would be - /// 7500 * 5000 = 37,500,000 msats. - /// - /// This allows the maximum dust exposure to automatically scale with fee rate changes. + /// 7500 * 10_000 = 75,000,000 msats (0.00075 BTC). /// /// Note, if you're using a third-party fee estimator, this may leave you more exposed to a /// fee griefing attack, where your fee estimator may purposely overestimate the fee rate, @@ -401,6 +399,7 @@ pub enum MaxDustHTLCExposure { /// by default this will be set to a [`Self::FixedLimitMsat`] of 5,000,000 msat. /// /// [`FeeEstimator`]: crate::chain::chaininterface::FeeEstimator + /// [`ConfirmationTarget::OnChainSweep`]: crate::chain::chaininterface::ConfirmationTarget::OnChainSweep FeeRateMultiplier(u64), } @@ -453,13 +452,16 @@ pub struct ChannelConfig { /// /// [`MIN_CLTV_EXPIRY_DELTA`]: crate::ln::channelmanager::MIN_CLTV_EXPIRY_DELTA pub cltv_expiry_delta: u16, - /// Limit our total exposure to in-flight HTLCs which are burned to fees as they are too - /// small to claim on-chain. + /// Limit our total exposure to potential loss to on-chain fees on close, including in-flight + /// HTLCs which are burned to fees as they are too small to claim on-chain and fees on + /// commitment transaction(s) broadcasted by our counterparty in excess of our own fee estimate. + /// + /// # HTLC-based Dust Exposure /// /// When an HTLC present in one of our channels is below a "dust" threshold, the HTLC will /// not be claimable on-chain, instead being turned into additional miner fees if either /// party force-closes the channel. Because the threshold is per-HTLC, our total exposure - /// to such payments may be sustantial if there are many dust HTLCs present when the + /// to such payments may be substantial if there are many dust HTLCs present when the /// channel is force-closed. /// /// The dust threshold for each HTLC is based on the `dust_limit_satoshis` for each party in a @@ -473,7 +475,42 @@ pub struct ChannelConfig { /// The selected limit is applied for sent, forwarded, and received HTLCs and limits the total /// exposure across all three types per-channel. /// - /// Default value: [`MaxDustHTLCExposure::FeeRateMultiplier`] with a multiplier of 5000. + /// # Transaction Fee Dust Exposure + /// + /// Further, counterparties broadcasting a commitment transaction in a force-close may result + /// in other balance being burned to fees, and thus all fees on commitment and HTLC + /// transactions in excess of our local fee estimates are included in the dust calculation. + /// + /// Because of this, another way to look at this limit is to divide it by 43,000 (or 218,750 + /// for non-anchor channels) and see it as the maximum feerate disagreement (in sats/vB) per + /// non-dust HTLC we're allowed to have with our peers before risking a force-closure for + /// inbound channels. + // This works because, for anchor channels the on-chain cost is 172 weight (172+703 for + // non-anchors with an HTLC-Success transaction), i.e.
+ // dust_exposure_limit_msat / 1000 = 172 * feerate_in_sat_per_vb / 4 * HTLC count + // dust_exposure_limit_msat = 43,000 * feerate_in_sat_per_vb * HTLC count + // dust_exposure_limit_msat / HTLC count / 43,000 = feerate_in_sat_per_vb + /// + /// Thus, for the default value of 10_000 * a current feerate estimate of 10 sat/vB (or 2,500 + /// sat/KW), we risk force-closure if we disagree with our peer by: + /// * `10_000 * 2_500 / 43_000 / (483*2)` = 0.6 sat/vB for anchor channels with 483 HTLCs in + /// both directions (the maximum), + /// * `10_000 * 2_500 / 43_000 / (50*2)` = 5.8 sat/vB for anchor channels with 50 HTLCs in both + /// directions (the LDK default max from [`ChannelHandshakeConfig::our_max_accepted_htlcs`]) + /// * `10_000 * 2_500 / 218_750 / (483*2)` = 0.1 sat/vB for non-anchor channels with 483 HTLCs + /// in both directions (the maximum), + /// * `10_000 * 2_500 / 218_750 / (50*2)` = 1.1 sat/vB for non-anchor channels with 50 HTLCs + /// in both directions (the LDK default maximum from [`ChannelHandshakeConfig::our_max_accepted_htlcs`]) + /// + /// Note that when using [`MaxDustHTLCExposure::FeeRateMultiplier`] this maximum disagreement + /// will scale linearly with increases (or decreases) in our feerate estimates. Further, + /// for anchor channels we expect our counterparty to use a relatively low feerate estimate + /// while we use [`ConfirmationTarget::OnChainSweep`] (which should be relatively high) and + /// feerate disagreement force-closures should only occur when theirs is higher than ours. + /// + /// Default value: [`MaxDustHTLCExposure::FeeRateMultiplier`] with a multiplier of 10_000. + /// + /// [`ConfirmationTarget::OnChainSweep`]: crate::chain::chaininterface::ConfirmationTarget::OnChainSweep pub max_dust_htlc_exposure: MaxDustHTLCExposure, /// The additional fee we're willing to pay to avoid waiting for the counterparty's /// `to_self_delay` to reclaim funds. @@ -561,7 +598,7 @@ impl Default for ChannelConfig { forwarding_fee_proportional_millionths: 0, forwarding_fee_base_msat: 1000, cltv_expiry_delta: 6 * 12, // 6 blocks/hour * 12 hours - max_dust_htlc_exposure: MaxDustHTLCExposure::FeeRateMultiplier(5000), + max_dust_htlc_exposure: MaxDustHTLCExposure::FeeRateMultiplier(10000), force_close_avoidance_max_fee_satoshis: 1000, accept_underpaying_htlcs: false, } @@ -782,11 +819,23 @@ pub struct UserConfig { /// [`msgs::AcceptChannel`] message will not be sent back to the counterparty node unless the /// user explicitly chooses to accept the request. /// + // TODO(dual_funding): Make these part of doc comments when #[cfg(dual_funding)] is dropped. + // To be able to contribute to inbound dual-funded channels, this field must be set to true. + // In that case the analogous [`Event::OpenChannelV2Request`] will be triggered once a request + // to open a new dual-funded channel is received through a [`msgs::OpenChannelV2`] message. + // A corresponding [`msgs::AcceptChannelV2`] message will not be sent back to the counterparty + // node until the user explicitly chooses to accept the request, optionally contributing funds + // to it. + /// /// Default value: false. /// /// [`Event::OpenChannelRequest`]: crate::events::Event::OpenChannelRequest /// [`msgs::OpenChannel`]: crate::ln::msgs::OpenChannel /// [`msgs::AcceptChannel`]: crate::ln::msgs::AcceptChannel + /// TODO(dual_funding): Make these part of doc comments when #[cfg(dual_funding)] is dropped.
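For reference, a minimal sketch (assuming only the `lightning` crate as a dependency) checking that `ChannelConfig::default()` now carries the `FeeRateMultiplier(10000)` limit set in the `Default` impl above:

```rust
use lightning::util::config::{ChannelConfig, MaxDustHTLCExposure};

fn main() {
    // The updated default picked up by `ChannelConfig::default()`, matching the
    // `FeeRateMultiplier(10000)` value in the `Default` impl above.
    let config = ChannelConfig::default();
    assert_eq!(
        config.max_dust_htlc_exposure,
        MaxDustHTLCExposure::FeeRateMultiplier(10_000)
    );
}
```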
+ /// [`Event::OpenChannelV2Request`]: crate::events::Event::OpenChannelV2Request + /// [`msgs::OpenChannelV2`]: crate::ln::msgs::OpenChannelV2 + /// [`msgs::AcceptChannelV2`]: crate::ln::msgs::AcceptChannelV2 pub manually_accept_inbound_channels: bool, /// If this is set to true, LDK will intercept HTLCs that are attempting to be forwarded over /// fake short channel ids generated via [`ChannelManager::get_intercept_scid`]. Upon HTLC diff --git a/lightning/src/util/scid_utils.rs b/lightning/src/util/scid_utils.rs index c9485b60b70..a44182fd856 100644 --- a/lightning/src/util/scid_utils.rs +++ b/lightning/src/util/scid_utils.rs @@ -140,7 +140,7 @@ pub(crate) mod fake_scid { /// `Namespace`. Therefore, we encrypt it using a random bytes provided by `ChannelManager`. fn get_encrypted_vout(&self, block_height: u32, tx_index: u32, fake_scid_rand_bytes: &[u8; 32]) -> u8 { let mut salt = [0 as u8; 8]; - let block_height_bytes = block_height.to_be_bytes(); + let block_height_bytes = block_height .to_be_bytes(); salt[0..4].copy_from_slice(&block_height_bytes); let tx_index_bytes = tx_index.to_be_bytes(); salt[4..8].copy_from_slice(&tx_index_bytes); diff --git a/lightning/src/util/ser.rs b/lightning/src/util/ser.rs index 5e7c6f85659..fc58d3d68bd 100644 --- a/lightning/src/util/ser.rs +++ b/lightning/src/util/ser.rs @@ -1434,7 +1434,7 @@ impl Readable for Duration { /// /// Use [`TransactionU16LenLimited::into_transaction`] to convert into the contained `Transaction`. #[derive(Clone, Debug, Hash, PartialEq, Eq)] -pub struct TransactionU16LenLimited(Transaction); +pub struct TransactionU16LenLimited(pub Transaction); impl TransactionU16LenLimited { /// Constructs a new `TransactionU16LenLimited` from a `Transaction` only if it's consensus- diff --git a/lightning/src/util/test_channel_signer.rs b/lightning/src/util/test_channel_signer.rs index 64320bbbaf5..2ad6acc6a2c 100644 --- a/lightning/src/util/test_channel_signer.rs +++ b/lightning/src/util/test_channel_signer.rs @@ -148,14 +148,14 @@ impl ChannelSigner for TestChannelSigner { Ok(()) } - fn validate_counterparty_revocation(&self, idx: u64, _secret: &SecretKey) -> Result<(), ()> { + fn validate_counterparty_revocation(&self, idx: u64, secret: &SecretKey) -> Result<(), ()> { if !*self.available.lock().unwrap() { return Err(()); } let mut state = self.state.lock().unwrap(); assert!(idx == state.last_counterparty_revoked_commitment || idx == state.last_counterparty_revoked_commitment - 1, "expecting to validate the current or next counterparty revocation - trying {}, current {}", idx, state.last_counterparty_revoked_commitment); state.last_counterparty_revoked_commitment = idx; - Ok(()) + self.inner.validate_counterparty_revocation(idx, secret) } fn pubkeys(&self) -> &ChannelPublicKeys { self.inner.pubkeys() } @@ -184,7 +184,7 @@ impl EcdsaChannelSigner for TestChannelSigner { // Ensure that the counterparty doesn't get more than two broadcastable commitments - // the last and the one we are trying to sign assert!(actual_commitment_number >= state.last_counterparty_revoked_commitment - 2, "cannot sign a commitment if second to last wasn't revoked - signing {} revoked {}", actual_commitment_number, state.last_counterparty_revoked_commitment); - state.last_counterparty_commitment = cmp::min(last_commitment_number, actual_commitment_number) + state.last_counterparty_commitment = cmp::min(last_commitment_number, actual_commitment_number); } Ok(self.inner.sign_counterparty_commitment(commitment_tx, inbound_htlc_preimages, outbound_htlc_preimages, 
secp_ctx).unwrap()) @@ -295,6 +295,10 @@ impl EcdsaChannelSigner for TestChannelSigner { ) -> Result { self.inner.sign_channel_announcement_with_funding_key(msg, secp_ctx) } + + fn sign_splicing_funding_input(&self, splicing_tx: &Transaction, splice_prev_funding_input_index: u16, splice_prev_funding_input_value: u64, /*redeem_script: &Script, */secp_ctx: &Secp256k1) -> Result { + self.inner.sign_splicing_funding_input(splicing_tx, splice_prev_funding_input_index, splice_prev_funding_input_value, /*redeem_script, */secp_ctx) + } } impl WriteableEcdsaChannelSigner for TestChannelSigner {} diff --git a/lightning/src/util/test_utils.rs b/lightning/src/util/test_utils.rs index 6b4d2acd4d9..369e8f4c69c 100644 --- a/lightning/src/util/test_utils.rs +++ b/lightning/src/util/test_utils.rs @@ -230,9 +230,9 @@ impl<'a> Router for TestRouter<'a> { impl<'a> MessageRouter for TestRouter<'a> { fn find_path( - &self, sender: PublicKey, peers: Vec, destination: Destination + &self, _sender: PublicKey, _peers: Vec, _destination: Destination ) -> Result { - self.router.find_path(sender, peers, destination) + unreachable!() } fn create_blinded_paths< @@ -785,8 +785,8 @@ impl msgs::ChannelMessageHandler for TestChannelMessageHandler { self.received_msg(wire::Message::Stfu(msg.clone())); } #[cfg(splicing)] - fn handle_splice(&self, _their_node_id: &PublicKey, msg: &msgs::Splice) { - self.received_msg(wire::Message::Splice(msg.clone())); + fn handle_splice_init(&self, _their_node_id: &PublicKey, msg: &msgs::SpliceInit) { + self.received_msg(wire::Message::SpliceInit(msg.clone())); } #[cfg(splicing)] fn handle_splice_ack(&self, _their_node_id: &PublicKey, msg: &msgs::SpliceAck) { @@ -849,46 +849,57 @@ impl msgs::ChannelMessageHandler for TestChannelMessageHandler { Some(vec![self.chain_hash]) } + #[cfg(any(dual_funding, splicing))] fn handle_open_channel_v2(&self, _their_node_id: &PublicKey, msg: &msgs::OpenChannelV2) { self.received_msg(wire::Message::OpenChannelV2(msg.clone())); } + #[cfg(any(dual_funding, splicing))] fn handle_accept_channel_v2(&self, _their_node_id: &PublicKey, msg: &msgs::AcceptChannelV2) { self.received_msg(wire::Message::AcceptChannelV2(msg.clone())); } + #[cfg(any(dual_funding, splicing))] fn handle_tx_add_input(&self, _their_node_id: &PublicKey, msg: &msgs::TxAddInput) { self.received_msg(wire::Message::TxAddInput(msg.clone())); } + #[cfg(any(dual_funding, splicing))] fn handle_tx_add_output(&self, _their_node_id: &PublicKey, msg: &msgs::TxAddOutput) { self.received_msg(wire::Message::TxAddOutput(msg.clone())); } + #[cfg(any(dual_funding, splicing))] fn handle_tx_remove_input(&self, _their_node_id: &PublicKey, msg: &msgs::TxRemoveInput) { self.received_msg(wire::Message::TxRemoveInput(msg.clone())); } + #[cfg(any(dual_funding, splicing))] fn handle_tx_remove_output(&self, _their_node_id: &PublicKey, msg: &msgs::TxRemoveOutput) { self.received_msg(wire::Message::TxRemoveOutput(msg.clone())); } + #[cfg(any(dual_funding, splicing))] fn handle_tx_complete(&self, _their_node_id: &PublicKey, msg: &msgs::TxComplete) { self.received_msg(wire::Message::TxComplete(msg.clone())); } + #[cfg(any(dual_funding, splicing))] fn handle_tx_signatures(&self, _their_node_id: &PublicKey, msg: &msgs::TxSignatures) { self.received_msg(wire::Message::TxSignatures(msg.clone())); } + #[cfg(any(dual_funding, splicing))] fn handle_tx_init_rbf(&self, _their_node_id: &PublicKey, msg: &msgs::TxInitRbf) { self.received_msg(wire::Message::TxInitRbf(msg.clone())); } + #[cfg(any(dual_funding, splicing))] fn 
handle_tx_ack_rbf(&self, _their_node_id: &PublicKey, msg: &msgs::TxAckRbf) { self.received_msg(wire::Message::TxAckRbf(msg.clone())); } + #[cfg(any(dual_funding, splicing))] fn handle_tx_abort(&self, _their_node_id: &PublicKey, msg: &msgs::TxAbort) { self.received_msg(wire::Message::TxAbort(msg.clone())); } diff --git a/possiblyrandom/Cargo.toml b/possiblyrandom/Cargo.toml index e02b59669b1..5d8a35ccdf5 100644 --- a/possiblyrandom/Cargo.toml +++ b/possiblyrandom/Cargo.toml @@ -1,6 +1,6 @@ [package] name = "possiblyrandom" -version = "0.1.0" +version = "0.2.0" authors = ["Matt Corallo"] license = "MIT OR Apache-2.0" repository = "https://github.com/lightningdevkit/rust-lightning/"
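To make the `FeeRateMultiplier(10_000)` semantics and the feerate-disagreement framing documented in `util/config.rs` above concrete, here is a small standalone sketch in plain Rust (no LDK APIs). The `feerate_per_kw * multiplier` formula and the 43,000 / 218,750 divisors are taken from the doc comments above; the function names and everything else are illustrative only.

```rust
/// Maximum dust exposure in msat under `MaxDustHTLCExposure::FeeRateMultiplier`,
/// i.e. `feerate_per_kw * multiplier` as documented above.
fn max_dust_exposure_msat(feerate_per_kw: u64, multiplier: u64) -> u64 {
    feerate_per_kw * multiplier
}

/// Approximate feerate disagreement (sat/vB) tolerated per non-dust HTLC before the
/// limit is hit, using the 43,000 (anchor) / 218,750 (non-anchor) divisors from the
/// doc comment above.
fn max_feerate_disagreement_sat_per_vb(limit_msat: u64, htlc_count: u64, anchors: bool) -> f64 {
    let divisor = if anchors { 43_000.0 } else { 218_750.0 };
    limit_msat as f64 / divisor / htlc_count as f64
}

fn main() {
    // Default multiplier of 10_000 at a 2,500 sat/KW (10 sat/vB) feerate estimate.
    let limit_msat = max_dust_exposure_msat(2_500, 10_000);
    assert_eq!(limit_msat, 25_000_000); // 25,000 sats of allowed dust exposure

    // With 50 HTLCs in each direction on an anchor channel this tolerates roughly
    // 5.8 sat/vB of feerate disagreement, matching the worked numbers above.
    let disagreement = max_feerate_disagreement_sat_per_vb(limit_msat, 50 * 2, true);
    assert!((disagreement - 5.8).abs() < 0.1);
}
```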