Infrastructure needs #1
Comments
Do you/someone have an idea of the space requirements? We could start small, e.g. just x86_64-linux, etc. I think I might set up a server in CZ, at least a day-only one for initial experimentation.
We had a server running and the test results are documented here: We were lucky to have 256 GB RAM (yes... no typo) and 48 cores (yes... again no typo), so we had no bottlenecks™. You need roughly 4 GB for a nixos-small release and ~60 GB of disk space for a normal release. The delta depends on how many new/changed .nars have been built. Double everything, since the current version of IPFS keeps a second copy of the data in its own datastore. The scripts (NixIPFS/nixipfs-scripts) still need a garbage collector for old releases. The documented shortcomings of IPFS will be addressed in ipfs/kubo/pull/3867 - I suggest you hold off on your experiment until this is merged, or create a custom IPFS package containing the patches. Please also have a look at NixIPFS/notes#2.
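For a rough sense of those numbers, here is a small back-of-the-envelope sketch in Python. The per-release sizes and the "double everything" factor are taken from this comment; the number of releases kept around is only an assumed example.

```python
# Rough storage estimate for mirroring NixOS releases over IPFS.
# Sizes and the 2x factor come from the comment above; the release
# counts below are illustrative assumptions, not recommendations.

SMALL_RELEASE_GB = 4   # one nixos-small release
FULL_RELEASE_GB = 60   # one normal release
IPFS_OVERHEAD = 2      # "double everything" (a second copy lives in the IPFS datastore)

def estimate_disk_gb(full_releases: int, small_releases: int) -> float:
    """Return a rough disk requirement in GB for the given number of releases."""
    raw = full_releases * FULL_RELEASE_GB + small_releases * SMALL_RELEASE_GB
    return raw * IPFS_OVERHEAD

if __name__ == "__main__":
    # e.g. keeping 3 full releases and 6 small ones around
    print(f"~{estimate_disk_gb(3, 6):.0f} GB")  # ~408 GB
```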
Thanks, I had read the last link and now also the other two. So the experiment has ended and nothing is running ATM? Do I understand correctly that the high CPU requirements only happened during the import of new paths and should improve once IPFS bitswap is improved? For a future, more stable server I'm thinking of a dedicated Raspberry Pi (4-core) with an external rotating drive.
Nope, nothing is running currently since the server went away (I did not investigate further since the bitswap stuff was a showstopper). You need quite a bit of processing power to hash all the content. The RPi seems like a good idea; however, the import could be beyond the limits of this platform, so it will take quite some time. On the CPU question: the bitswap "load explosion" is caused by too much DHT chatter.
Is there a chance that the same IPFS node used for pinning downstream packages can also be used for upstream packages? By downstream I mean the binary cache, and by upstream I mean all the fetchurls, etc.
Yes, I certainly counted on that.
I don't see a reason why this shouldn't be possible. However, there is currently no way of caching the "upstream" (I like the expression) packages. Once this is figured out (NixIPFS/notes/issues/1), this can be implemented in e.g. nixipfs-scripts.
@mguentner That's what we're working on atm. During our work on Forge Package Archiving (the name I gave to IPFS-ing upstream Nix packages), we discovered we had to get a deeper integration into IPFS. Right now we have a Haskell multiaddr implementation (forked and extended from basile-henry's version), https://github.com/MatrixAI/haskell-multiaddr (which I intend to make the official Haskell implementation of Multiaddr), and @plintx is working on integrating multistream and the multistream muxer.
Nixpkgs already has
@vcunat FYI: the bitswap session PR has been merged.
Bitswap sessions should improve the transfer speed. Moreover, since the exporter / initial seeder will be known, IPFS can be run with a direct connection to that seeder instead of relying on DHT discovery. The process of a full pin would then be roughly as sketched below.
This would speed up full pins/syncs while still allowing partial or even full syncs via the DHT.
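As a rough illustration of such a full pin against a known seeder (a hedged sketch, not the actual nixipfs-scripts procedure), the following Python snippet drives the stock ipfs CLI. The seeder multiaddr and the release root hash are placeholders.

```python
# Sketch of a "full pin" against a known initial seeder via the ipfs CLI.
# The multiaddr and root hash below are placeholders, not values from this thread.
import subprocess

SEEDER_ADDR = "/ip4/192.0.2.10/tcp/4001/ipfs/QmSeederPeerIdPlaceholder"  # placeholder
RELEASE_ROOT = "QmReleaseRootHashPlaceholder"                            # placeholder

def run(*args: str) -> None:
    """Run an ipfs CLI command and fail loudly if it errors."""
    subprocess.run(["ipfs", *args], check=True)

# 1. Connect directly to the known exporter / initial seeder, so content can be
#    fetched from it without waiting on DHT lookups.
run("swarm", "connect", SEEDER_ADDR)

# 2. Recursively pin the release root; bitswap then pulls all referenced blocks
#    from the connected peer (and from any other peers it happens to find).
run("pin", "add", RELEASE_ROOT)
```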
This is a collection of things we might need to get binary cache distribution up and running using IPFS.
- 3+ initial pin servers that can also be used as gateways (all content is pinned, so no delay like on ipfs.io)
The publish and the pin servers could be linked together using cjdns to improve routing within the "core" distribution infrastructure.
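To illustrate the pin-server-as-gateway idea above, here is a minimal sketch of fetching a pinned release path over HTTP. The host name and file path are placeholder assumptions; 8080 is merely go-ipfs' default gateway port.

```python
# Sketch: using a fully-pinned pin server as an HTTP gateway.
# Host, port and path are placeholders for illustration only.
import urllib.request

GATEWAY = "http://pin1.example.org:8080"       # placeholder pin server
RELEASE_ROOT = "QmReleaseRootHashPlaceholder"  # placeholder release root hash

# Because everything is pinned locally, the gateway can answer immediately
# instead of first resolving the content over the network like ipfs.io does.
url = f"{GATEWAY}/ipfs/{RELEASE_ROOT}/nixexprs.tar.xz"
with urllib.request.urlopen(url) as resp:
    data = resp.read()
print(f"fetched {len(data)} bytes from {url}")
```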
IPFS infrastructure repo:
https://github.com/ipfs/infrastructure