refactor: move CInstantSendManager::AskNodesForLockedTx into PeerManager #6425

Open
wants to merge 1 commit into develop

Conversation

PastaPastaPasta (Member)

Issue being fixed or feature implemented

The InstantSend manager currently relies on CConnman, which is not needed. The function AskNodesForLockedTx is all networking logic anyhow, and there is no reason why this logic should be confined to InstantSend processing. Move it into net_processing instead.
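For context, a minimal sketch of how the call site could look after the move, assuming PeerManager now exposes the relocated helper (the method name AskPeersForLockedTx and the surrounding function are illustrative assumptions, not the PR's exact code):

// Illustrative sketch only; the enclosing function and the method name are assumptions.
// The point is that CInstantSendManager no longer walks connman.ForEachNode itself:
// it hands the txid to PeerManager and lets net_processing pick the peers to ask.
void CInstantSendManager::RetryLockedTx(const uint256& txid)
{
    if (m_peerman == nullptr) return; // m_peerman is the std::unique_ptr<PeerManager>& member
    m_peerman->AskPeersForLockedTx(txid);
}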

What was done?

This does change the logic! We no longer prioritize asking MNs. This is probably fine; I don't specifically recall why we wanted to ask MNs first, other than that they may be higher performing or better connected. We can potentially restore this logic once we bring masternode connection logic into Peer.

It also changes the logic by short-circuiting once peersToAsk is full.

This commit has the added benefit of reducing contention on m_nodes_mutex, since we no longer call connman.ForEachNode, which was previously called not once but twice.

This may slightly increase contention on m_peer_mutex, but that should be an acceptable tradeoff: we not only remove a dependency but also reduce contention on a much more heavily contested RecursiveMutex.
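A rough sketch of the relocated selection loop as it could look in net_processing, assembled from the snippet quoted in the review thread below (the enclosing method name and the final request dispatch are assumptions of this sketch, not the PR's literal code):

// Hedged sketch; the real PR code is quoted in the review discussion further down.
void PeerManagerImpl::AskPeersForLockedTx(const uint256& txid)
{
    std::vector<PeerRef> peersToAsk;
    peersToAsk.reserve(4);
    {
        LOCK(m_peer_mutex);
        // Single pass over m_peer_map, stopping as soon as four peers are collected.
        // The old CConnman-based code iterated over all nodes twice, masternodes first.
        for (const auto& [_, peer] : m_peer_map) {
            if (peersToAsk.size() >= 4) break;
            if (IsInvInFilter(peer, txid)) {
                peersToAsk.emplace_back(peer);
            }
        }
    }
    // ... then request the transaction (GETDATA) from each peer in peersToAsk ...
}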

How Has This Been Tested?

Built, local tests

Breaking Changes

None

Checklist:

Go over all the following points, and put an x in all the boxes that apply.

  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas
  • I have added or updated relevant unit/integration/functional/e2e tests
  • I have made corresponding changes to the documentation
  • I have assigned this pull request to a milestone (for repository code-owners and collaborators only)

UdjinM6 commented Nov 22, 2024

clang-format complains:

--- src/llmq/context.cpp	(before formatting)
+++ src/llmq/context.cpp	(after formatting)
@@ -46,9 +46,9 @@
     isman{[&]() -> llmq::CInstantSendManager* const {
         assert(llmq::quorumInstantSendManager == nullptr);
         llmq::quorumInstantSendManager = std::make_unique<llmq::CInstantSendManager>(*llmq::chainLocksHandler,
-                                                                                     chainman.ActiveChainstate(),
-                                                                                     *qman, *sigman, *shareman,
-                                                                                     sporkman, mempool, mn_sync, peerman,
+                                                                                     chainman.ActiveChainstate(), *qman,
+                                                                                     *sigman, *shareman, sporkman,
+                                                                                     mempool, mn_sync, peerman,
                                                                                      is_masternode, unit_tests, wipe);
         return llmq::quorumInstantSendManager.get();
     }()},
--- src/llmq/instantsend.h	(before formatting)
+++ src/llmq/instantsend.h	(after formatting)
@@ -253,13 +253,21 @@
     std::unordered_set<uint256, StaticSaltedHasher> pendingRetryTxs GUARDED_BY(cs_pendingRetry);
 
 public:
-    explicit CInstantSendManager(CChainLocksHandler& _clhandler, CChainState& chainstate,
-                                 CQuorumManager& _qman, CSigningManager& _sigman, CSigSharesManager& _shareman,
-                                 CSporkManager& sporkman, CTxMemPool& _mempool, const CMasternodeSync& mn_sync,
-                                 const std::unique_ptr<PeerManager>& peerman, bool is_masternode, bool unitTests, bool fWipe) :
+    explicit CInstantSendManager(CChainLocksHandler& _clhandler, CChainState& chainstate, CQuorumManager& _qman,
+                                 CSigningManager& _sigman, CSigSharesManager& _shareman, CSporkManager& sporkman,
+                                 CTxMemPool& _mempool, const CMasternodeSync& mn_sync,
+                                 const std::unique_ptr<PeerManager>& peerman, bool is_masternode, bool unitTests,
+                                 bool fWipe) :
         db(unitTests, fWipe),
-        clhandler(_clhandler), m_chainstate(chainstate), qman(_qman), sigman(_sigman),
-        shareman(_shareman), spork_manager(sporkman), mempool(_mempool), m_mn_sync(mn_sync), m_peerman(peerman),
+        clhandler(_clhandler),
+        m_chainstate(chainstate),
+        qman(_qman),
+        sigman(_sigman),
+        shareman(_shareman),
+        spork_manager(sporkman),
+        mempool(_mempool),
+        m_mn_sync(mn_sync),
+        m_peerman(peerman),
         m_is_masternode{is_masternode}
     {
         workInterrupt.reset();

This pull request has conflicts, please rebase.


UdjinM6 left a comment

utACK 090ae92

kwvg (Collaborator) left a comment

utACK 090ae92

Comment on lines +2255 to +2273
auto maybe_add_to_nodesToAskFor = [&](const PeerRef& peer) {
    if (peersToAsk.size() >= 4) {
        return false;
    }
    if (IsInvInFilter(peer, txid)) {
        peersToAsk.emplace_back(peer);
    }
    return true;
};

{
    LOCK(m_peer_mutex);
    // TODO consider prioritizing MNs again, once that flag is moved into Peer
    for (const auto& [_, peer] : m_peer_map) {
        if (!maybe_add_to_nodesToAskFor(peer)) {
            break;
        }
    }
}

nit: maybe simplify it?

How soon are you planning to implement this TODO? If it's not planned for the very near term, this can be simplified to:

Suggested change
-auto maybe_add_to_nodesToAskFor = [&](const PeerRef& peer) {
-    if (peersToAsk.size() >= 4) {
-        return false;
-    }
-    if (IsInvInFilter(peer, txid)) {
-        peersToAsk.emplace_back(peer);
-    }
-    return true;
-};
-{
-    LOCK(m_peer_mutex);
-    // TODO consider prioritizing MNs again, once that flag is moved into Peer
-    for (const auto& [_, peer] : m_peer_map) {
-        if (!maybe_add_to_nodesToAskFor(peer)) {
-            break;
-        }
-    }
-}
+{
+    LOCK(m_peer_mutex);
+    // TODO consider prioritizing MNs again, once that flag is moved into Peer
+    for (const auto& [_, peer] : m_peer_map) {
+        if (peersToAsk.size() >= 4) {
+            break;
+        }
+        if (IsInvInFilter(peer, txid)) {
+            peersToAsk.emplace_back(peer);
+        }
+    }
+}
