refactor: move CInstantSendManager::AskNodesForLockedTx into PeerManager #6425
base: develop
Conversation
clang-format complains:

--- src/llmq/context.cpp (before formatting)
+++ src/llmq/context.cpp (after formatting)
@@ -46,9 +46,9 @@
isman{[&]() -> llmq::CInstantSendManager* const {
assert(llmq::quorumInstantSendManager == nullptr);
llmq::quorumInstantSendManager = std::make_unique<llmq::CInstantSendManager>(*llmq::chainLocksHandler,
- chainman.ActiveChainstate(),
- *qman, *sigman, *shareman,
- sporkman, mempool, mn_sync, peerman,
+ chainman.ActiveChainstate(), *qman,
+ *sigman, *shareman, sporkman,
+ mempool, mn_sync, peerman,
is_masternode, unit_tests, wipe);
return llmq::quorumInstantSendManager.get();
}()},
--- src/llmq/instantsend.h (before formatting)
+++ src/llmq/instantsend.h (after formatting)
@@ -253,13 +253,21 @@
std::unordered_set<uint256, StaticSaltedHasher> pendingRetryTxs GUARDED_BY(cs_pendingRetry);
public:
- explicit CInstantSendManager(CChainLocksHandler& _clhandler, CChainState& chainstate,
- CQuorumManager& _qman, CSigningManager& _sigman, CSigSharesManager& _shareman,
- CSporkManager& sporkman, CTxMemPool& _mempool, const CMasternodeSync& mn_sync,
- const std::unique_ptr<PeerManager>& peerman, bool is_masternode, bool unitTests, bool fWipe) :
+ explicit CInstantSendManager(CChainLocksHandler& _clhandler, CChainState& chainstate, CQuorumManager& _qman,
+ CSigningManager& _sigman, CSigSharesManager& _shareman, CSporkManager& sporkman,
+ CTxMemPool& _mempool, const CMasternodeSync& mn_sync,
+ const std::unique_ptr<PeerManager>& peerman, bool is_masternode, bool unitTests,
+ bool fWipe) :
db(unitTests, fWipe),
- clhandler(_clhandler), m_chainstate(chainstate), qman(_qman), sigman(_sigman),
- shareman(_shareman), spork_manager(sporkman), mempool(_mempool), m_mn_sync(mn_sync), m_peerman(peerman),
+ clhandler(_clhandler),
+ m_chainstate(chainstate),
+ qman(_qman),
+ sigman(_sigman),
+ shareman(_shareman),
+ spork_manager(sporkman),
+ mempool(_mempool),
+ m_mn_sync(mn_sync),
+ m_peerman(peerman),
m_is_masternode{is_masternode}
{
workInterrupt.reset();
Force-pushed from 880280b to 1e74ad2
This pull request has conflicts, please rebase.
**This does change the logic!** We no longer prioritize asking MNs. This is probably fine? I don't specifically recall why we wanted to ask MNs, besides that they may be higher-performing or better connected. We can potentially restore this logic once we bring masternode connection logic into Peer.

This also changes the logic by short-circuiting once peersToAsk is full.

This commit has the added benefit of reducing contention on m_nodes_mutex, since we no longer call connman.ForEachNode, not once but twice.

This may slightly increase contention on m_peer_mutex, but that should be an acceptable tradeoff for not only removing dependencies but also reducing contention on a much more contested RecursiveMutex.
Force-pushed from 1e74ad2 to 090ae92
utACK 090ae92
utACK 090ae92
auto maybe_add_to_nodesToAskFor = [&](const PeerRef& peer) {
    if (peersToAsk.size() >= 4) {
        return false;
    }
    if (IsInvInFilter(peer, txid)) {
        peersToAsk.emplace_back(peer);
    }
    return true;
};

{
    LOCK(m_peer_mutex);
    // TODO consider prioritizing MNs again, once that flag is moved into Peer
    for (const auto& [_, peer] : m_peer_map) {
        if (!maybe_add_to_nodesToAskFor(peer)) {
            break;
        }
    }
}
nit: maybe simplify it?
How soon are you planning to implement this TODO?
If you don't plan to do it very soon, it can be simplified to:
Suggested change, from:

auto maybe_add_to_nodesToAskFor = [&](const PeerRef& peer) {
    if (peersToAsk.size() >= 4) {
        return false;
    }
    if (IsInvInFilter(peer, txid)) {
        peersToAsk.emplace_back(peer);
    }
    return true;
};

{
    LOCK(m_peer_mutex);
    // TODO consider prioritizing MNs again, once that flag is moved into Peer
    for (const auto& [_, peer] : m_peer_map) {
        if (!maybe_add_to_nodesToAskFor(peer)) {
            break;
        }
    }
}

to:

{
    LOCK(m_peer_mutex);
    // TODO consider prioritizing MNs again, once that flag is moved into Peer
    for (const auto& [_, peer] : m_peer_map) {
        if (peersToAsk.size() >= 4) {
            break;
        }
        if (IsInvInFilter(peer, txid)) {
            peersToAsk.emplace_back(peer);
        }
    }
}
Issue being fixed or feature implemented
The InstantSend manager currently relies on CConnman, which is not needed. The function AskNodesForLockedTx is all networking logic anyhow, and there is no reason why this logic should be confined to InstantSend processing. Move it into net_processing instead.
What was done?
This does change the logic! We no longer prioritize asking MNs. This is probably fine? I don't specifically recall why we wanted to ask MNs, besides that they may be higher-performing or better connected. We can potentially restore this logic once we bring masternode connection logic into Peer.

This also changes the logic by short-circuiting once peersToAsk is full.

This commit has the added benefit of reducing contention on m_nodes_mutex, since we no longer call connman.ForEachNode, not once but twice.

This may slightly increase contention on m_peer_mutex, but that should be an acceptable tradeoff for not only removing dependencies but also reducing contention on a much more contested RecursiveMutex.
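The selection logic described above can be modeled as a small standalone sketch. Note that Peer, PeerRef, IsInvInFilter, and SelectPeersToAsk are simplified stand-ins invented for illustration; the real types and locking live in net_processing and differ in detail. The point is the single pass over the peer map that stops as soon as four candidates are collected, rather than two full ForEachNode traversals.

```cpp
#include <cassert>
#include <cstdint>
#include <memory>
#include <unordered_map>
#include <unordered_set>
#include <vector>

// Hypothetical, simplified stand-ins for the real Peer/PeerRef types.
struct Peer {
    int64_t id{0};
    std::unordered_set<uint64_t> announced_invs; // txids this peer announced to us
};
using PeerRef = std::shared_ptr<Peer>;

// Models IsInvInFilter: did this peer announce the tx to us?
static bool IsInvInFilter(const PeerRef& peer, uint64_t txid)
{
    return peer->announced_invs.count(txid) > 0;
}

// Single pass over the peer map, short-circuiting once four candidates are
// collected. This mirrors the structure of the patch, not its exact code:
// the real version holds m_peer_mutex while iterating m_peer_map.
static std::vector<PeerRef> SelectPeersToAsk(
    const std::unordered_map<int64_t, PeerRef>& peer_map, uint64_t txid)
{
    std::vector<PeerRef> peersToAsk;
    for (const auto& [_, peer] : peer_map) {
        if (peersToAsk.size() >= 4) break; // enough candidates, stop iterating
        if (IsInvInFilter(peer, txid)) {
            peersToAsk.emplace_back(peer);
        }
    }
    return peersToAsk;
}
```

Because the loop breaks as soon as the vector is full, worst-case work is bounded by the position of the fourth announcing peer, instead of always touching every node twice.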
How Has This Been Tested?
Built, local tests
Breaking Changes
None
Checklist:
Go over all the following points, and put an x in all the boxes that apply.