Replies: 7 comments 9 replies
-
I am sorry, but the VPP architecture is not very amenable to applying anything other than a policer; it hurts my head to think about it, actually. PIE did appear recently for DPDK, but I have not poked into it. I gave up when XDP started giving us DPDK-like read performance, and went all in on that approach. We are currently handling 10k subscribers at 10 Gbit total on 16 cores at 22% CPU use, so we are certain we scale to at least 25 Gbit today on a $1500 box - is that not enough? :) All the devs can easily be found in the #libreqos:matrix.org chatroom, if you would like faster comms.
-
I think the per-site/AP limit (which is per core) is the more pressing issue. 4 Gbps is probably enough for WISP deployments at this point, but FTTH or other bitstream-based access models will be limited by this. The workaround is to create "virtual" or partitioned sites/APs to be able to use more cores, but this is very ugly and error-prone.
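For background on why the limit is per core: broadly, the XDP side steers all of a subscriber's traffic to one CPU, and that CPU then runs the HTB/CAKE tree for the site the subscriber belongs to, so a single site can never use more than one core's worth of qdisc throughput. A very rough sketch of that steering mechanism is below (this is not LibreQoS's actual xdp-cpumap-tc code; the map names and layout are invented purely for illustration):

```c
// Simplified sketch of per-subscriber CPU steering with XDP + cpumap.
// Illustrative only, not LibreQoS's program; map names/layout are made up.
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

struct {
    __uint(type, BPF_MAP_TYPE_CPUMAP);
    __uint(max_entries, 64);              /* one slot per worker CPU      */
    __type(key, __u32);
    __type(value, struct bpf_cpumap_val);
} cpu_map SEC(".maps");

struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 65536);
    __type(key, __u32);                   /* subscriber IPv4 address      */
    __type(value, __u32);                 /* CPU assigned to its site/AP  */
} ip_to_cpu SEC(".maps");

SEC("xdp")
int steer_to_cpu(struct xdp_md *ctx)
{
    void *data     = (void *)(long)ctx->data;
    void *data_end = (void *)(long)ctx->data_end;

    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > data_end)
        return XDP_PASS;
    if (eth->h_proto != bpf_htons(ETH_P_IP))
        return XDP_PASS;

    struct iphdr *ip = (void *)(eth + 1);
    if ((void *)(ip + 1) > data_end)
        return XDP_PASS;

    /* All traffic toward a given subscriber lands on one CPU, which is
     * why one site/AP is bounded by what a single core's qdisc tree
     * can push. */
    __u32 *cpu = bpf_map_lookup_elem(&ip_to_cpu, &ip->daddr);
    if (!cpu)
        return XDP_PASS;

    return bpf_redirect_map(&cpu_map, *cpu, 0);
}

char _license[] SEC("license") = "GPL";
```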
-
We've had individual cores pushing just shy of 10 Gbps on recent builds, so that problem is slowly going away.
-
We have a testbed set up here: https://payne.taht.net/ Click "Run Bandwidth Test", then click on any of the plans. Please note that the tests we are running day to day vary; right now, for example, I am looking harder at the RTT estimator than at the actual throughput, but here we are pushing 8 Gbit in each direction, with headroom to spare.
-
"I guess what I would like now is some guidance on the current issue(s) that LibreQoS faces that prevent it from being more performant per-core, or if there's any ideas around what might need to be looked at to start potentially offloading the work into SmartNICs (eg. Netronome, or Xilinx/AMD with OnLoad). I believe the former in that list is capable of offloading eBPF policies entirely, but it's been difficult finding any real-world data on them. They're affordable enough that it's still much more preferable to big-vendor-box-here." We talked about our issues with moving forward we have on this recent podcast: https://packetpushers.net/podcast/heavy-networking-666-improving-quality-of-experience-with-libreqos/
The onload card you referenced is a tcp offload engine, not helpful. Netronome was barely in business when last I checked. The tremendous expense of these cards in general lead me towards prefering to adopt a whitebox strategy - if you need more than X GB, just rack up another box. That said, if someone dropped a bunch on us and gave us enough development money to go somewhere with it (in a DPU multi-tenant scenario, perhaps), we'd probably pursue it. As it is, please give v1.4 alpha a shot (in monitor mode, at least) see our github sponsors page for how we presently keep the project alive. |
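For completeness, if someone did want to experiment with full eBPF offload on a NIC whose driver supports it (the Netronome/Corigine NFP parts are the usual example), the userspace request looks roughly like the sketch below. This is a hedged illustration with libbpf: the interface name and object file are placeholders, and whether a given program is accepted depends entirely on the driver's offload restrictions.

```c
// Sketch: loading and attaching an XDP program in hardware-offload mode.
// "eth0" and "steer.bpf.o" are placeholders; offload only succeeds if the
// NIC driver supports it and the program fits its verifier/map limits.
#include <net/if.h>
#include <stdio.h>
#include <linux/if_link.h>      /* XDP_FLAGS_HW_MODE */
#include <bpf/libbpf.h>
#include <bpf/bpf.h>

int main(void)
{
    int ifindex = if_nametoindex("eth0");          /* placeholder NIC    */
    if (!ifindex) { perror("if_nametoindex"); return 1; }

    struct bpf_object *obj = bpf_object__open_file("steer.bpf.o", NULL);
    if (!obj) { fprintf(stderr, "open failed\n"); return 1; }

    /* Offloaded programs must be loaded for the target device; any maps
     * the program uses would also need their ifindex set (omitted here). */
    struct bpf_program *prog = bpf_object__next_program(obj, NULL);
    bpf_program__set_ifindex(prog, ifindex);

    if (bpf_object__load(obj)) { fprintf(stderr, "load failed\n"); return 1; }

    /* Attach with XDP_FLAGS_HW_MODE so the driver runs it on the NIC. */
    if (bpf_xdp_attach(ifindex, bpf_program__fd(prog),
                       XDP_FLAGS_HW_MODE, NULL)) {
        fprintf(stderr, "hw offload attach failed\n");
        return 1;
    }
    return 0;
}
```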
-
We are also tracking and cooperating closely with the Linux XDP and eBPF communities. There are some major improvements in the next Linux release that we will adopt in the next six months or so, after the next stable release of Ubuntu has had time to stabilize.
-
Ok, this is excellent, lots of information to take in 😁 I've just woken up, so I'll dig into the details of everything shared here soon. I realised I forgot to explain one thing: the motivation behind wanting to scale the performance! We are facing the reality of needing to service up to 10 Gbps FTTH/FTTP tails on our government-backed national broadband network in the not-too-distant future. Currently, 1 Gbps tails are the fastest we get, and the rapidly increasing popularity of these circuits is already becoming problematic, so you can imagine the impact of multiplying that workload by 10!
-
Hi everyone
Bit of a preamble:
I've actually stumbled across LibreQoS by accident during my own foray into VPP- and p4-dpdk-target-based dataplanes - because what all soft-dataplane solutions sorely lack right now is almost any decent HQoS or queue-management features. My work is currently in building a custom BNG that isn't bloated with unnecessary features, nor expects you to fill it with many tens of thousands of customers to make it scale financially. VPP fits the bill for this quite nicely on all fronts other than QoS.
I have done a lot of reading up on `XDP`, `eBPF`, and functions around offloading `tc-bpf` using `cls_bpf`, etc. The software development for such things is well over my head at my current skill level, but I am actively learning - I can at least demonstrate that I understand why we can't just use `XDP` to do `TC` with the current setup, for example (we need data from an `sk_buff` to do QoS, which just isn't available in an `xdp_buff`).
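To make that last point concrete, here is a rough sketch of the kind of thing that is only possible at the `tc` layer: a `cls_bpf` program in direct-action mode that picks an HTB class per subscriber IP by writing `skb->tc_classid`. This is illustrative only (it is not LibreQoS's actual program, and the map name/layout are made up); the point is that `__sk_buff` exposes qdisc-level fields like `tc_classid` that simply do not exist in `xdp_md`:

```c
// Sketch of a direct-action cls_bpf classifier that selects an HTB class
// per subscriber IP. Illustrative only; map name and layout are invented.
// Would be attached with something like: tc filter add dev <if> egress bpf da obj ...
#include <linux/bpf.h>
#include <linux/if_ether.h>     /* ETH_HLEN  */
#include <linux/pkt_cls.h>      /* TC_ACT_OK */
#include <bpf/bpf_helpers.h>

struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 65536);
    __type(key, __u32);          /* subscriber IPv4 address              */
    __type(value, __u32);        /* HTB classid (major:minor packed)     */
} ip_to_classid SEC(".maps");

SEC("tc")
int classify(struct __sk_buff *skb)
{
    /* __sk_buff carries qdisc-level context (priority, tc_classid,
     * queue_mapping, ...) that has no equivalent in xdp_md. */
    __u32 daddr = 0;

    /* Destination address sits 16 bytes into the IPv4 header; assumes a
     * plain Ethernet + IPv4 frame for brevity. */
    if (bpf_skb_load_bytes(skb, ETH_HLEN + 16, &daddr, sizeof(daddr)) < 0)
        return TC_ACT_OK;

    __u32 *classid = bpf_map_lookup_elem(&ip_to_classid, &daddr);
    if (classid)
        skb->tc_classid = *classid;   /* steer the packet into that HTB class */

    return TC_ACT_OK;
}

char _license[] SEC("license") = "GPL";
```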
I guess what I would like now is some guidance on the current issue(s) that LibreQoS faces that prevent it from being more performant per-core, or if there are any ideas around what might need to be looked at to start potentially offloading the work into SmartNICs (e.g. Netronome, or Xilinx/AMD with OnLoad). I believe the former in that list is capable of offloading `eBPF` policies entirely, but it's been difficult finding any real-world data on them. They're affordable enough that they're still much preferable to big-vendor-box-here.
I did find an email from @dtaht on the Spinics list recently saying he was looking for ways forward to 100 Gbps. I am running an ISP of a similar size here in Australia (15-20k subs), and in general I'm quite keen to work together to figure that out too :).