Hi, in the paper you simulate a VLP-16 lidar using the CARLA simulator, and I have a question about that.
I tried to simulate a Velodyne HDL-64E with CARLA and obtained the corresponding point clouds. Visually they look similar to those from a real lidar (e.g. the SemanticKITTI dataset), but when I run inference on them with some of the best-known semantic segmentation networks (SqueezeSegV3, SalsaNext, 3D-MiniNet, etc.) I get almost completely wrong labels; the networks cannot even recognize the road correctly. I suspect I am doing something wrong when simulating the lidar in CARLA, although I believe I have entered the same parameters as the lidar in question.
Here are the parameters I use to create my point clouds with CARLA (running asynchronously with -fps=10):
lidar.set_position(0, 0.0, 1.73)  # 1.73 m should be the lidar height above the ground in SemanticKITTI; I get the same results even with height = 0
lidar.set_rotation(0, 0, 0)
lidar.set(
    Channels=64,
    Range=120,
    PointsPerSecond=1250000,
    RotationFrequency=10,
    UpperFovLimit=2,      # also tested with 3
    LowerFovLimit=-24.9)  # also tested with -25
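For reference, in case you are on the newer blueprint-based API (CARLA 0.9.x), this is roughly how I would express the same configuration. This is only my sketch: the world/vehicle handles and the exact attribute units depend on the CARLA release.

import carla

client = carla.Client('localhost', 2000)   # assumes a CARLA server is already running
world = client.get_world()

lidar_bp = world.get_blueprint_library().find('sensor.lidar.ray_cast')
lidar_bp.set_attribute('channels', '64')
lidar_bp.set_attribute('range', '120')                 # meters in recent releases
lidar_bp.set_attribute('points_per_second', '1250000')
lidar_bp.set_attribute('rotation_frequency', '10')
lidar_bp.set_attribute('upper_fov', '2')
lidar_bp.set_attribute('lower_fov', '-24.9')

# same mounting as above: 1.73 m above the vehicle origin, no rotation
lidar_transform = carla.Transform(carla.Location(x=0.0, y=0.0, z=1.73))
lidar = world.spawn_actor(lidar_bp, lidar_transform, attach_to=vehicle)  # 'vehicle' is assumed to exist already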
So I would like to know how you simulated your lidar; maybe there is some parameter I am missing. I have also attached the datasheet from which I took the parameters. What I suspect may be wrong is the range, or the ratio between the server's fps and the RotationFrequency parameter.
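To make the fps/RotationFrequency concern concrete, this is the back-of-the-envelope check I am doing (my own arithmetic, nothing from the paper):

points_per_second = 1250000
rotation_frequency = 10   # Hz
server_fps = 10           # the -fps=10 argument
channels = 64

points_per_revolution = points_per_second / rotation_frequency   # 125,000 points per full 360° sweep
points_per_frame = points_per_second / server_fps                 # 125,000 points delivered per simulation step
points_per_channel = points_per_revolution / channels             # ~1953 firings per laser per revolution
azimuth_step_deg = 360.0 / points_per_channel                     # ~0.18° between consecutive firings of one laser

# With fps equal to RotationFrequency, each frame should contain exactly one full sweep;
# if the two differed, every saved point cloud would cover only a fraction of a rotation.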