Replies: 12 comments 4 replies
-
My preference would be to use Openfire clustering. See this discussion
-
I'll read this, but to me, Openfire clustering (i.e. XMPP clustering) is a different thing: it provides HA and load balancing at the XMPP server level. To my knowledge, "Octo/JVB/Jicofo" clustering goes beyond this. Of course it uses XMPP as transport, and a failure or overload of the XMPP server will make conferences unusable, too. But in the experience of the big German free Jitsi platform "Freifunk München" (ffmuc.net), there is much more need to scale out the number of bridges than the XMPP server. To my knowledge, they ran about 20 JVBs over the last year, but only upgraded to an HA setup of their XMPP server these days, more to stay available during maintenance windows than because of load.
-
I just skimmed igniterealtime/openfire-ofmeet-plugin#67; it's comparatively old. It seems to deal mostly with the HA of Jicofo, not with the JVB and the "Octo" load balancing provided by Jicofo itself. Sorry, but I don't see much there that matches my vision with respect to the current infrastructure used by recent Jitsi components.
-
Jitsi Meet is an SFU. The network bandwidth of a single server (1 Gbit/s) will max out at just over 1000 send/receive HD media streams. Those 20 JVBs were running in 20 containers/servers with their own network interfaces. You can run 20 JVBs from a single Openfire server, but you will run out of network bandwidth first. Plus, your 20 JVBs are useless if your Jicofo is offline. Clustering in Openfire is much more than just HA and load balancing at the XMPP server level: plugins can dynamically allocate resources and share workload across multiple Openfire instances running the same set of plugins. We can dynamically allocate focus, JVB and gateway roles to instances depending on the "Octo" load balancing.
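As a rough back-of-the-envelope check of that "just over 1000 streams" figure (the per-stream bitrate below is an assumption for illustration, not a measured value):

```shell
# Rough capacity estimate for one 1 Gbit/s interface.
# Assumption: one HD stream averages ~0.9 Mbit/s.
link_kbits=1000000        # 1 Gbit/s expressed in kbit/s
stream_kbits=900          # ~0.9 Mbit/s per stream (assumed)
echo $(( link_kbits / stream_kbits ))   # ~1111 concurrent streams
```

Real-world headroom for simulcast layers, RTCP and retransmissions pushes the practical number lower, which is consistent with the "just over 1000" estimate.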
-
Any solution we propose must be platform agnostic, run especially on Windows, and be easy to maintain from an admin web console by someone with basic O/S skills and experience. The use of separate JVMs for JVB, Jicofo and Jigasi is temporary. When I can find time to work on it, I will revert to calling the main Java classes all from the same JVM in different class loaders.
-
Why do you equate a single server with a 1 Gbit/s network interface?!? These days most enterprise hardware has at least one 10 Gbit/s interface, and it's quite normal to have more cores than one has fingers on both hands and feet together! In fact, I don't need a whole bunch of JVB instances; I need exactly two, to keep the "visible" service up and running while I do maintenance such as updates or config changes in an alternating way.
Exactly.
Yes, I want to start it via ssh on N different "servers". I see no reason to run out of network bandwidth with this.
I would strongly recommend, and vote, not to do this re-integration, because the processor load, memory and thread management of these completely different payloads (XMPP server with other modules vs. Jicofo vs. JVB) work seriously better with separate JVMs. The JVB, and with it the end-user experience, is extremely sensitive to stop-the-world phases of the JVM, but the JVB itself does not deal with big memory objects. Instead, there are many identical threads, and it's very important to serve them promptly. To my knowledge, there is no thread priority management inside the JVM, yet this set of threads is much more important than the threads of the other "payloads" inside the Openfire server. With a modern garbage collector like G1, we will therefore probably never see a full GC here, i.e. no stop-the-world times. The Openfire server, on the other hand, deals with comparatively big objects and an "unknown" workload; a "quite normal" stop-the-world pause of one or two seconds will not hurt UI interaction or XMPP messaging. For the sake of priority scheduling, I raised the process priority of just the JVM running the JVB to the maximum to reduce stuttering. Before, when everything ran inside a single JVM, I could only raise the priority of "Openfire" as a whole, and the effect was smaller.
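To illustrate the separate-JVM point, a dedicated JVB launch could look roughly like this. All flag values, the jar path and the main class are assumptions for illustration, not the actual Pàdé launch command:

```shell
# Hypothetical dedicated JVB JVM: G1 with a tight pause goal, a fixed heap
# (so it never resizes mid-conference), and a raised OS process priority.
# A negative niceness usually requires root or CAP_SYS_NICE.
nice -n -10 java \
  -XX:+UseG1GC -XX:MaxGCPauseMillis=50 \
  -Xms3g -Xmx3g \
  -cp /opt/jvb/jitsi-videobridge.jar org.jitsi.videobridge.MainKt
```

Keeping Openfire in its own JVM with its own GC settings means its occasional longer pauses never stall the media threads.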
-
There are a lot of such documents, of varying quality. I recently had a first-hand discussion with the admin staff of FFMuc, and I have my own monitoring data.
That is all in comparison to other well-known services somebody might need to run, like web services, mail services and so on; it is not in comparison to A/V conference systems that use something other than an SFU. And all of it rests on the premise that the network is not a bottleneck at all, because if the traffic flow is restricted there, you also don't need the CPU resources to process it. And if users can't use the service for that reason, which keeps the number of users low, you won't need much user-scaled memory either. I'll evaluate a clustered Openfire for sure this year! As said, my main requirement is HA for the overall service, and as you wrote, this is more than the JVB.
-
Hi Guido, Dele, congrats to both of you for your efforts and the many improvements made to Pàdé, Jitsi and Openfire!
-
My most relevant need for "scale out" isn't a question of resources but of high availability: maintenance might be needed at any level, and the A/V platform has become "mission critical" over the course of a year. I can't interrupt service during business hours, but those are also my working hours; at the moment I still give my personal time in the evenings to manage these things. About the resources needed: at the moment I would recommend calculating about 8 cores per 100 participants, with a typical mix of a whole bunch of small conferences and some larger ones with up to 30 or even 40 participants. On the hardware used at my institution, it's no problem to scale a container out to a larger number of cores, but I've heard that "in the cloud" it's cheaper to use N standard hosts with typically 8 cores each than one host with N*8, or something in between.
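The sizing rule above reduces to simple arithmetic (the participant count here is an arbitrary example, not from the thread):

```shell
# ~8 cores per 100 participants, rounded up to whole cores.
participants=250
cores_per_100=8
echo $(( (participants * cores_per_100 + 99) / 100 ))   # 20 cores
```

For cloud deployments this suggests, per the comment above, splitting those cores across several standard 8-core hosts rather than one large instance.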
-
Two-node cluster:
- First node is the senior node and runs a videobridge JVM and the single focus user JVM
- Second node is a normal node with only a videobridge JVM
- The cluster has a single focus user and two JVB users
- The Jitsi load balancer monitors both JVBs using the ofmeet MUC room
-
Simply 🚀. I'll try to bring in a good business friend to help maintain my platform, so we can give this a try ASAP!
-
With Pàdé release v1.6.3, it seems we have a clustered Openfire & Pàdé setup working (with one Jicofo and one JVB per node).
-
To support a larger number of conferences and participants, it will become necessary to use more than one instance of the Jitsi Videobridge (JVB2).
The basics are already available and built into the Jitsi components, and with the recent evolution since JVB2, the "coupling" between Openfire and the Jitsi components is at most at the network level. On the Openfire side, for the first sprint we may "just" need to replace the local launch command (java ...) by an "ssh wrapper" (ssh jvbuser@remotehost -c "${cmd}"), or to offer the possibility to call a script. (BTW: there should be a second slot for a "stop" script.) The next sprints may include UI integration of the "Octo"-related load-balancing settings.
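Such an ssh wrapper could be sketched roughly as below. The user name, paths and remote script names are made up for illustration; the real slot/script interface proposed above is still to be defined:

```shell
#!/bin/sh
# Hypothetical start/stop wrapper Openfire could call for a remote JVB.
jvb_remote() {
  action="$1"; host="$2"
  case "$action" in
    start) ssh "jvbuser@$host" 'nohup /opt/jvb/jvb-start.sh >/dev/null 2>&1 &' ;;
    stop)  ssh "jvbuser@$host" '/opt/jvb/jvb-stop.sh' ;;
    *)     echo "usage: jvb_remote start|stop host" >&2; return 1 ;;
  esac
}
```

The separate "stop" branch mirrors the second script slot requested above, so maintenance can drain one bridge while the other keeps the service visible.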