By applying the following patch, I got an anecdotal speed increase when testing helia-http-gateway locally, but was unable to prove it via benchmarks.
diff --git a/node_modules/libp2p/dist/src/connection-manager/dial-queue.js b/node_modules/libp2p/dist/src/connection-manager/dial-queue.js
index bcafe31..199f202 100644
--- a/node_modules/libp2p/dist/src/connection-manager/dial-queue.js
+++ b/node_modules/libp2p/dist/src/connection-manager/dial-queue.js
@@ -100,6 +100,8 @@ export class DialQueue {
stop() {
this.shutDownController.abort();
}
+
+    calculatedMultiaddrCache = new Map();
/**
* Connects to a given peer, multiaddr or list of multiaddrs.
*
@@ -123,11 +125,17 @@ export class DialQueue {
const signal = this.createDialAbortControllers(options.signal);
let addrsToDial;
try {
+            const multiaddrCacheKey = peerId.toString() + addrs.map(({ multiaddr }) => multiaddr.toString()).join()
+            if (this.calculatedMultiaddrCache.has(multiaddrCacheKey)) {
+                addrsToDial = this.calculatedMultiaddrCache.get(multiaddrCacheKey);
+            } else {
// load addresses from address book, resolve and dnsaddrs, filter undiallables, add peer IDs, etc
addrsToDial = await this.calculateMultiaddrs(peerId, addrs, {
...options,
signal
});
+                this.calculatedMultiaddrCache.set(multiaddrCacheKey, addrsToDial);
+            }
}
catch (err) {
signal.clear();
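To make the pattern in the patch easier to follow, here is a minimal, self-contained sketch of the same idea: memoize the result of an expensive async computation, keyed by the peer id plus all candidate multiaddrs. The class and method names mirror the patch, but the bodies are hypothetical stand-ins, not the real libp2p implementation.

```typescript
interface Addr { multiaddr: string }

class DialQueueSketch {
  // cache of previously calculated dial targets, keyed by peer + addrs
  private readonly calculatedMultiaddrCache = new Map<string, string[]>()

  // stand-in for the real, expensive calculateMultiaddrs()
  private async calculateMultiaddrs (peerId: string, addrs: Addr[]): Promise<string[]> {
    return addrs.map(a => `${a.multiaddr}/p2p/${peerId}`)
  }

  async dial (peerId: string, addrs: Addr[]): Promise<string[]> {
    // the key combines the peer id and every candidate multiaddr, so a
    // change in either produces a different cache entry
    const key = peerId + addrs.map(({ multiaddr }) => multiaddr).join()
    let addrsToDial = this.calculatedMultiaddrCache.get(key)
    if (addrsToDial == null) {
      addrsToDial = await this.calculateMultiaddrs(peerId, addrs)
      this.calculatedMultiaddrCache.set(key, addrsToDial)
    }
    return addrsToDial
  }
}
```

One caveat with this approach (and the patch above): the cache is never invalidated, so addresses resolved once (e.g. via dnsaddr) are reused even if they later change; that is part of why an LRU bound, as suggested below, matters.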
$ hyperfine --parameter-list branch fix/fastify-e2e,testing/libp2p-patch --setup "git switch {branch} && npm i && npm run build" --runs 10 -w 1 "npm run test:e2e"
Benchmark 1: npm run test:e2e (branch = fix/fastify-e2e)
Time (mean ± σ): 55.747 s ± 11.514 s [User: 2.729 s, System: 0.662 s]
Range (min … max): 36.724 s … 68.362 s 10 runs
Benchmark 2: npm run test:e2e (branch = testing/libp2p-patch)
Time (mean ± σ): 60.814 s ± 9.454 s [User: 2.785 s, System: 0.716 s]
Range (min … max): 40.612 s … 71.642 s 10 runs
Summary
npm run test:e2e (branch = fix/fastify-e2e) ran
1.09 ± 0.28 times faster than npm run test:e2e (branch = testing/libp2p-patch)
However, I believe there are some performance improvements we can make in libp2p:
Instead of methods constantly asking peerStore for content (fetching raw bytes and re-parsing them on every dial request), can we have a peerStoreParsed of sorts that is updated whenever peerStore.add is called, and that contains the actual types we need? (i.e. avoid the repeated parsing in dcutr/dcutr.ts and connection-manager/dial-queue.ts)
We could clear the parsed data using an LRU strategy, and provide libp2p consumers a config option that controls the size of that cache.
AFAIK, we only need to store network-level things in the peerStore, and we should still be able to access the raw data when we need it; but when we don't, we usually process that data the same way every time. Not having to repeat this work on every dial should be an improvement.
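As a sketch of the LRU idea above: JavaScript's Map iterates in insertion order, so a bounded LRU cache can be built by re-inserting entries on access and evicting the first key once a configurable maxSize is exceeded. This is an illustrative standalone class, not an existing libp2p API; the name LRUCache and the maxSize option are assumptions.

```typescript
// Minimal LRU cache sketch: Map preserves insertion order, so the first
// key in iteration order is always the least recently used entry.
class LRUCache<K, V> {
  private readonly map = new Map<K, V>()

  constructor (private readonly maxSize: number) {}

  get (key: K): V | undefined {
    const value = this.map.get(key)
    if (value !== undefined) {
      // refresh recency by deleting and re-inserting at the end
      this.map.delete(key)
      this.map.set(key, value)
    }
    return value
  }

  set (key: K, value: V): void {
    this.map.delete(key)
    this.map.set(key, value)
    if (this.map.size > this.maxSize) {
      // evict the least recently used entry (first in iteration order)
      const oldest = this.map.keys().next().value as K
      this.map.delete(oldest)
    }
  }
}
```

The size bound is what the proposed config option would control, so consumers with many peers can trade memory for fewer re-parses.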
For context: while testing helia-http-gateway locally, I have been running npm run test:e2e-flame and noticed js-libp2p/packages/libp2p/src/connection-manager/dial-queue.ts, line 284 in c6db210.