[Bug] Topic subscription failure with multiple zenoh-bridge-ros2dds peers #86
Comments
I think I may have something similar, in a three-way system. I will check our logs for the UndeclareSubscriber pattern you mention above. I will include my scenario as a potential data point, although I have not yet been able to make a cut-down reproducible example as you have above.

I have three bare-metal Ubuntu 22.04 x86_64 Humble systems: currently two development "base stations" connected via a LAN switch, and one "bot" reached over a WiFi link. We similarly use the allow feature of the config, tailored for each system's function (base station or bot), albeit with more options, and we also use the CycloneDDS configuration XML file.

At the moment we are developing a lifecycle node on the "bot" that has publishers (vehicle state) and a service (to change vehicle state: mode, lights, park brake, etc.). A simple Python GUI running on one of the base stations subscribes to the vehicle state and can issue service client requests. The other base station is typically running rviz2 and/or plotjuggler.

When we run, change, compile, or restart the particular lifecycle node on the "bot" while the GUI on one of the base stations is running, we usually get a re-connection of the subscription to the vehicle state, but very often do not get a re-established connection from the base station GUI's service client to the bot's service server. There are no errors when we make service client requests from the base station; they are just not received by the service server on the bot. We work around this at the moment by restarting the zenoh-bridge-ros2dds service on the "bot". This is not a nice crutch to rely on. Some potential ideas -
This does sound very similar: topic subscription is apparently successful with no errors, but no data arrives. I now have a reasonable workaround for my purposes: running only one bridge in peer mode, with the others operating in client mode pointing to the IP of the peer, e.g.:
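A minimal sketch of what such a client-mode bridge config could look like (the address is the server IP mentioned later in this thread, and 7447 is zenoh's default listen port; the exact keys should be verified against your zenoh-bridge-ros2dds version):

```json5
{
  // Connect to the single peer-mode bridge instead of discovering peers
  mode: "client",
  connect: {
    endpoints: ["tcp/192.168.0.70:7447"],
  },
}
```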
Routing of topics works correctly between both robots and the server. This does require that the server is always running, but that is not a limitation for us right now, and I have been running fairly intensive traffic today without obvious failures. Perhaps worth a try.
We are also experiencing this issue. We worked around it by setting our command and control computer as a client, with all robots listed as endpoints.
I ran into this issue using zenoh-bridge-ros2dds:0.11.0. However, using zenoh-bridge-ros2dds:nightly (0.11.0-dev-124-ga742b36, 2 July) I do not. Going back through the Docker images, the 0.11.0-dev-123-ga36b951 image (21 June) also has the issue.
Describe the bug
This may be related to this and this.
We have a setup that involves a central server connected to multiple robots, all running ROS 2 Iron in Docker containers. The central server provides some topics to all robots, and we also need some robot-to-robot ROS 2 communication.
We initially used zenoh-bridge-ros2dds in peer mode at the server and all robots, but experienced non-obvious failures of data transmission on topics between server and robots.
A simplified setup that exhibits the problem is:
The server containers are started, then the zenoh containers on the robots. The robot1 listener is started, correctly shows received data, and is then stopped. The robot2 listener is started, may show data, and is then stopped. When the robot2 listener is started again, it does not show any data.
If the robot zenoh containers are changed to clients, connecting to the server ip address, the failure does not occur.
If the listener on the server is not started, the failure seems to occur very rarely.
To reproduce
We have not managed to reproduce this with composed containers on a single host. The server is running Ubuntu 20.04; robot1 and robot2 are running Ubuntu 22.04 and are connected over WiFi. The container `simonj23/dots_core:iron` is a ROS 2 Iron distribution with CycloneDDS installed. Run in all cases with the config files in the current directory.
cyclonedds.xml:
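For reference, a minimal CycloneDDS config of the kind typically used in such setups (a sketch only; the interface name and the specific settings of the original file are assumptions):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<CycloneDDS xmlns="https://cdds.io/config">
  <Domain Id="any">
    <General>
      <Interfaces>
        <!-- Assumed: bind DDS to a specific interface -->
        <NetworkInterface name="eth0"/>
      </Interfaces>
      <!-- Multicast is often disabled when a zenoh bridge handles inter-host traffic -->
      <AllowMulticast>false</AllowMulticast>
    </General>
  </Domain>
</CycloneDDS>
```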
minimal.json5:
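A sketch of what a minimal bridge config could look like, assuming only the `/chatter` topic used in the reproduction is allowed through (the `allow` schema has changed between zenoh-bridge-ros2dds releases, so check it against your version):

```json5
{
  plugins: {
    ros2dds: {
      // Assumed: restrict bridging to the demo topic only
      allow: {
        publishers: ["/chatter"],
        subscribers: ["/chatter"],
      },
    },
  },
  mode: "peer",
}
```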
compose.zenoh_peer.yaml
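A sketch of a compose file along these lines (`eclipse/zenoh-bridge-ros2dds` is the published image name; the `network_mode`, mounts, and command are assumptions about the original file):

```yaml
services:
  zenoh:
    image: eclipse/zenoh-bridge-ros2dds:0.10.1-rc.2
    network_mode: host          # assumed: share the host network for DDS discovery
    volumes:
      - ./minimal.json5:/minimal.json5
      - ./cyclonedds.xml:/cyclonedds.xml
    environment:
      - CYCLONEDDS_URI=file:///cyclonedds.xml
    command: -c /minimal.json5
```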
On server:
start talker
start listener
start zenoh
On robot1 start zenoh
On robot2 start zenoh
On robot1 start then stop listener:
On robot2 start then stop listener:
On robot2 start listener:
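The exact commands for the steps above are not shown; a sketch, assuming the standard `demo_nodes_cpp` talker/listener and the compose file from this report:

```shell
# On the server: start talker, listener, and the zenoh bridge
docker run --rm -it --network host simonj23/dots_core:iron ros2 run demo_nodes_cpp talker
docker run --rm -it --network host simonj23/dots_core:iron ros2 run demo_nodes_cpp listener
docker compose -f compose.zenoh_peer.yaml up -d

# On robot1 and robot2: start the zenoh bridge
docker compose -f compose.zenoh_peer.yaml up -d

# On robot1 / robot2: start a listener (Ctrl-C to stop it)
docker run --rm -it --network host simonj23/dots_core:iron ros2 run demo_nodes_cpp listener
```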
At this point, robot2 no longer gets any data on the `chatter` topic. The situation can be recovered by restarting the zenoh container on the server.

Log files attached. Server IP address is 192.168.0.70, robot1: 192.168.0.101, robot2: 192.168.0.105.
It appears from the server logfile that something may be going wrong with topic unsubscribe. When the robot1 listener is stopped, at 2024-03-04T11:26:40Z, there are two `UndeclareSubscriber` messages, but when the robot2 listener is stopped, at 2024-03-04T11:26:58Z, there is only one, and the next subscribe does not succeed correctly.

server_log.txt
robot1_log.txt
robot2_log.txt
System info
Server: Ubuntu 20.04 arm64
Robots: Ubuntu 22.04 arm64
zenoh-bridge-ros2dds: 0.10.1-rc.2