fix(watchers): align taker fee validation retries with makers #2263
Conversation
Thanks for the fix, it now works fine; errors like this are gone:
· 2024-11-03 05:53:07 +0000 [] swap_watcher:632] Entering the taker swap watcher loop LTC-segwit/DGB-segwit with taker fee hash: cf67f954baf648935ec2e419c98c74a657884867c65cc32897acf2ccd52b4495
03 05:53:07, mm2_main::lp_swap::swap_watcher:514] INFO Watcher loop for swap a569ee07-0797-4dc4-b1a1-e928b48e218c stopped with reason Error(mod:717] InvalidTakerFee("utxo_common:2086] WrongPaymentTx(\"Early confirmation: Fee tx Transaction { version: 1, n_time: None, overwintered: false, version_group_id: 0, inputs: [TransactionInput { previous_output: OutPoint { hash: c8c61d412e285087ecbd0f491835697af14ed5c01c4d14af637b7642fe3427f4, index: 0 }, script_sig: , sequence: 4294967295, script_witness: [30450221008458b766554b3d116a7da9c037b80446bbea33759481cfa39b9db267983a4da60220285b2dc2c43e2bb343c82eed65937e435039869f17a82a83d4aa1df6aa3e132301, 027245291c79cd3856857e0edcc4a479a347fc2ced65ea89236b1329fb07de6847] }], outputs: [TransactionOutput { value: 43616781, script_pubkey: 76a914ca1e04745e8ca0c60d8c5881531d51bec470743f88ac }, TransactionOutput { value: 18201068025, script_pubkey: 00149ac6fafb7e02dc5ebe1589ef9b3558486f20b63e }], lock_time: 1730612792, expiry_height: 0, shielded_spends: [], shielded_outputs: [], join_splits: [], value_balance: 0, join_split_pubkey: 0000000000000000000000000000000000000000000000000000000000000000, join_split_sig: 00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000, binding_sig: 00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000, zcash: false, posv: false, str_d_zeel: None, tx_hash_algo: DSHA256 } confirmed before min_block 20271407\")"))
It still happens sometimes:
I don't know which coin it means, it doesn't say... likely XVC. Here it's CLAM or FTC.
@cipig for this #2263 (comment) do we get
Or only an error log and it continues?
It's only that error, I checked the log file... e.g.
I guess the
It's related to this #1238. Are you running any coins in native mode on this KDF node?
No, it's a watcher/seed node and all coins/tokens are enabled with electrum/RPC.
Ooh, it's a seed node, that's why it receives this message. I will add a fix to not process this on seed nodes that are not running the coin in native mode.
It will still process the
@onur-ozkan should we also set
komodo-defi-framework/mm2src/mm2_main/src/lp_network.rs (lines 217 to 219 in b7781b3):
```rust
if let Err(e) = coin.tx_enum_from_bytes(&message.data) {
    log::error!("Message cannot continue the process due to: {:?}", e);
    return;
};
```

```rust
let fut = coin.send_raw_tx_bytes(&message.data);
ctx.spawner().spawn(async {
if coin.is_utxo_in_native_mode() {
```
Not sure about that. Should we allow seed nodes to send failed swap transactions to electrums? My opinion is no, but I'm opening this up for discussion. There is also this comment #1238 (comment) to think about.
Why are we sending the tx only when running on top of native?
We use this in case the electrum(s) used by the user in the swap are down or censoring transactions; if any node in the network has the coin enabled with a native daemon, it provides an additional way to get the transaction through. After your electrums manager PR, we can add all the available electrums to the user's list, and since all electrums are tried for broadcasting, this message will only be used as a fallback if all electrums fail.

For now, one of the seed nodes can have a different electrum, which is why I thought about "Should we allow seed nodes to send failed swap transactions to electrums?", but the drawback is that this would be resource-exhausting for the electrums, which is another reason we only send the tx to nodes running the coin on top of native daemons.
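Roughly, the intended broadcasting order looks like the sketch below. It is a hypothetical stand-in, not KDF code: `ElectrumClient`, `P2pHandle`, `broadcast_with_fallback` and their methods are made up for illustration.

```rust
// Hypothetical sketch: try every electrum first, and only use the p2p tx-helper
// message as a fallback when all of them fail.
struct ElectrumClient {
    url: String,
}

impl ElectrumClient {
    fn broadcast(&self, _raw_tx: &[u8]) -> Result<String, String> {
        // Stand-in for a real electrum transaction-broadcast call.
        Err(format!("{} is unreachable", self.url))
    }
}

struct P2pHandle;

impl P2pHandle {
    fn publish_tx_helper(&self, ticker: &str, raw_tx: &[u8]) {
        // Stand-in for publishing the raw tx on the coin's p2p tx-helper topic;
        // only peers running the coin with a native daemon act on that message.
        println!("publishing {} bytes for {} on the tx-helper topic", raw_tx.len(), ticker);
    }
}

fn broadcast_with_fallback(electrums: &[ElectrumClient], p2p: &P2pHandle, ticker: &str, raw_tx: &[u8]) {
    // Try every known electrum first.
    for client in electrums {
        match client.broadcast(raw_tx) {
            Ok(txid) => {
                println!("broadcast via {}: {}", client.url, txid);
                return;
            }
            Err(e) => println!("electrum {} failed: {}", client.url, e),
        }
    }
    // All electrums are down or censoring: fall back to the p2p helper message.
    p2p.publish_tx_helper(ticker, raw_tx);
}

fn main() {
    let electrums = vec![ElectrumClient { url: "electrum1.example.net:10001".into() }];
    broadcast_with_fallback(&electrums, &P2pHandle, "LTC", b"rawtxbytes");
}
```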
What does "send failed swap transactions" mean? (Are those refund transactions? If so, why should they be treated differently?)
They are spend or refund transactions. If electrums censor these transactions or fail to broadcast them, we rely on nodes running a native daemon to get them into the mempool. This should be combined with SPV validation in the future, so that even if an electrum reports the transaction as broadcast successfully, we verify it ourselves, and if SPV validation fails we try this p2p transaction helper as a last resort.
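A minimal sketch of that fallback chain, assuming hypothetical closures for the electrum broadcast, the SPV check and the p2p helper (none of these names come from the codebase):

```rust
// Illustrative only: broadcast, verify independently, and use the p2p helper as last resort.
fn broadcast_spend_or_refund(
    electrum_broadcast: impl Fn(&[u8]) -> Result<String, String>, // electrum broadcast call
    spv_verify: impl Fn(&str) -> bool,                            // SPV / confirmation check
    p2p_tx_helper: impl Fn(&[u8]),                                // p2p tx-helper publish
    raw_tx: &[u8],
) {
    match electrum_broadcast(raw_tx) {
        Ok(txid) if spv_verify(&txid) => println!("tx {} broadcast and verified", txid),
        Ok(txid) => {
            // The electrum claimed success but the tx never verified: treat it as censored/failed.
            println!("tx {} not verified, falling back to the p2p helper", txid);
            p2p_tx_helper(raw_tx);
        }
        Err(e) => {
            println!("broadcast failed ({}), falling back to the p2p helper", e);
            p2p_tx_helper(raw_tx);
        }
    }
}

fn main() {
    broadcast_spend_or_refund(
        |_: &[u8]| Ok("deadbeef".to_string()),
        |_: &str| false, // pretend SPV verification did not confirm the tx
        |tx: &[u8]| println!("publishing {} bytes on the tx-helper topic", tx.len()),
        b"rawtxbytes",
    );
}
```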
I was also wondering what would happen if several nodes in native mode are asked to send the same transaction (e.g. whether nodes get banned).
Apparently nothing bad, as far as I can see in the KMD daemon's code: if a transaction is relayed from one node's mempool to other nodes (which already have the same transaction in their mempool), they do not consider the originating node as misbehaving.
As they are sent to a specific topic that is supposed to be subscribed to by only one peer, no.
So this is not for propagating from seed nodes? I thought it meant that, will recheck.
From the libp2p gossipsub behaviour:

```rust
/// This function should be called when [`Config::validate_messages()`] is `true` in order to
/// propagate messages. Messages are stored in the ['Memcache'] and validation is expected to be
/// fast enough that the messages should still exist in the cache.
///
/// Calling this function will propagate a message stored in the cache, if it still exists.
/// If the message still exists in the cache, it will be forwarded and this function will return true,
/// otherwise it will return false.
pub fn propagate_message(
    &mut self,
    message_id: &MessageId,
    propagation_source: &PeerId,
) -> Result<bool, PublishError> {
```

We set it, that's why we propagate after validation:

komodo-defi-framework/mm2src/mm2_main/src/lp_network.rs (lines 219 to 221 in a538a02)
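In other words, a node's handler ends up following this pattern. The sketch below is a simplified stand-in: only `validate_messages()`/`propagate_message` are real libp2p names taken from the quote above, everything else is made up for illustration.

```rust
// Simplified stand-ins for the libp2p types; only the method name mirrors the quote above.
struct MessageId(String);
struct PeerId(String);
struct Gossipsub;

impl Gossipsub {
    // Stand-in for libp2p's `propagate_message` (the real method returns
    // `Result<bool, PublishError>` and forwards a message still held in the cache).
    fn propagate_message(&mut self, id: &MessageId, source: &PeerId) -> bool {
        println!("propagating {} received from {}", id.0, source.0);
        true
    }
}

// Application-level validation, e.g. the `coin.tx_enum_from_bytes(..)` check in the tx-helper handler.
fn validate(data: &[u8]) -> Result<(), String> {
    if data.is_empty() {
        Err("empty payload".to_string())
    } else {
        Ok(())
    }
}

fn on_message(gossipsub: &mut Gossipsub, id: MessageId, source: PeerId, data: &[u8]) {
    // With `validate_messages()` enabled, gossipsub does NOT forward messages on its own;
    // the application must validate first and then explicitly ask for propagation.
    match validate(data) {
        Ok(()) => {
            gossipsub.propagate_message(&id, &source);
            // ...continue with the actual processing (e.g. broadcasting the tx) here.
        }
        Err(e) => println!("dropping invalid message {}: {}", id.0, e),
    }
}

fn main() {
    let mut gossipsub = Gossipsub;
    on_message(&mut gossipsub, MessageId("msg-1".into()), PeerId("peer-1".into()), b"payload");
}
```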
Oh, it's for the seed nodes. How did this work until now? If it's not being propagated, how did non-seed nodes receive healthcheck messages?
No idea, maybe you tested with only one seed node and one light node.
I think we should also separate validation from actually processing messages in all process-p2p functions; I will look into it and open an issue if needed. Edit: if we don't already do that somehow.
Yeah, most likely. So the message should be propagated.
LGTM other than one nit
mm2src/mm2_main/src/lp_swap.rs (outdated)
```rust
pub(crate) const TAKER_FEE_VALIDATION_ATTEMPTS: usize = 6;
pub(crate) const TAKER_FEE_VALIDATION_RETRY_DELAY: f64 = 10.;
```
Suggested change:

```diff
-pub(crate) const TAKER_FEE_VALIDATION_RETRY_DELAY: f64 = 10.;
+pub(crate) const TAKER_FEE_VALIDATION_RETRY_DELAY_SECS: f64 = 10.;
```
Done here 515f6ae
Also, of course,
Will return validation errors from
Done here: b81bfc6. I will revisit other message processing and validations in a different PR.
LGTM
@@ -181,24 +182,31 @@ impl State for ValidateTakerFee {

```rust
async fn on_changed(self: Box<Self>, watcher_ctx: &mut WatcherStateMachine) -> StateResult<WatcherStateMachine> {
    debug!("Watcher validate taker fee");
    let validated_f = watcher_ctx
        .taker_coin
        .watcher_validate_taker_fee(WatcherValidateTakerFeeInput {
```
I think this watcher_validate_taker_fee fn used to fail because inside it there is a check that tx.height should be above min_block_number. The case when the tx is in the mempool (tx.height is None) is handled, and validation does not fail then.
But tx.height may also be not None and contain a value of -1 (at least in KMD) if the tx is in a forked branch, so tx validation may fail.
> But tx.height also may be not None and contain value of '-1' (at least in KMD) if the tx is in a forked branch (so tx validation may fail)

But we use confirmations, not height:
komodo-defi-framework/mm2src/coins/utxo/utxo_common.rs (lines 2004 to 2006 in f487947):
```rust
if tx.confirmations > 0 {
    let current_block = try_s!(coin.as_ref().rpc_client.get_block_count().compat().await);
    let confirmed_at = current_block + 1 - tx.confirmations as u64;
```
I think it fails due to the get_block_count delay in reporting the current height of the blockchain, since the watcher can use a different electrum than the taker. For example, if the fee tx confirmed at or just above min_block but the watcher's electrum returns a block count that lags by a block or two, confirmed_at comes out too low and the check reports "confirmed before min_block". This is why the retries fix this (see the sketch below).
I suggest to use
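For reference, the retry mechanism this PR adds boils down to something like the following simplified, synchronous sketch. The real implementation lives in the async watcher state machine; `validate_with_retries` and `validate_taker_fee` are hypothetical names used only for illustration, while the two constants come from this PR (after the rename suggested above).

```rust
use std::{thread, time::Duration};

// Constants from this PR (after the suggested rename).
const TAKER_FEE_VALIDATION_ATTEMPTS: usize = 6;
const TAKER_FEE_VALIDATION_RETRY_DELAY_SECS: f64 = 10.;

// `validate_taker_fee` stands in for the watcher's call to `watcher_validate_taker_fee`.
fn validate_with_retries(mut validate_taker_fee: impl FnMut() -> Result<(), String>) -> Result<(), String> {
    let mut last_err = String::new();
    for attempt in 1..=TAKER_FEE_VALIDATION_ATTEMPTS {
        match validate_taker_fee() {
            Ok(()) => return Ok(()),
            Err(e) => {
                // An "Early confirmation" error can come from an electrum whose chain tip
                // lags behind; wait and let it catch up before retrying.
                println!("taker fee validation attempt {} failed: {}", attempt, e);
                last_err = e;
                if attempt < TAKER_FEE_VALIDATION_ATTEMPTS {
                    thread::sleep(Duration::from_secs_f64(TAKER_FEE_VALIDATION_RETRY_DELAY_SECS));
                }
            }
        }
    }
    Err(last_err)
}

fn main() {
    let mut calls = 0;
    let result = validate_with_retries(|| {
        calls += 1;
        // Pretend the lagging electrum catches up on the second attempt.
        if calls < 2 {
            Err("Early confirmation: fee tx confirmed before min_block".to_string())
        } else {
            Ok(())
        }
    });
    println!("validation result after {} attempt(s): {:?}", calls, result);
}
```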
* dev:
  chore(release): update v2.2.0-beta date (#2277)
  chore(release): add changelog entries for v2.2.0-beta (#2240)
  fix(watchers): align taker fee validation retries with makers (#2263)
  feat(tokens): custom token activation for evm (#2141)
  use safer subtraction on healthcheck expiration check (#2272)
  fix(hd-wallet): correctly display evm addr in `get_new_address` response (#2264)
This also fixes propagation of health check messages.
To Test:
Already tested by @cipig