
Semantic segmentation: Result of 4D input is worse than 3D input #61

Open
MCLYang opened this issue May 12, 2020 · 1 comment

Comments

MCLYang commented May 12, 2020

@WangYueFt @antao97
I am using DGCNN for a semantic segmentation task on LiDAR point clouds. Based on my results, I found that if I take only XYZ as input, the result is significantly better than with XYZ + intensity (intensity is a scalar describing the ray reflection). I would like to discuss why 3D input works better than 4D input for DGCNN. Here are my thoughts, and I hope you can give me some advice.

1. For kNN, I pass only the normalized XYZ to compute the distances. Should I pass XYZ + intensity when building the kNN graph instead? (A sketch of my current XYZ-only graph construction is at the end of this comment.)
2. For the K value, I use the code's default of 20. Should I adjust K?
3. Any other suggestions would be very much appreciated.

Note that I do not believe the LiDAR intensity is a dummy variable. Based on my observations, I also tried other models such as PointNet and PointNet++, and all of them perform better when I pass the 4D input.
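
For reference, the way I currently build the graph looks roughly like this. It is a simplified sketch in the style of the repo's `knn()` / `get_graph_feature()` helpers; the function names and the 4-channel layout are just illustrative, not the repository's exact code. The neighbourhood is computed from XYZ only, while the intensity channel is still carried through as a feature.

```python
import torch

def knn_xyz(points, k=20):
    # points: (batch, dims, num_points), XYZ as the first 3 channels,
    # intensity (or other extras) after. Distances use XYZ only.
    xyz = points[:, :3, :]                                # (B, 3, N)
    inner = -2 * torch.matmul(xyz.transpose(2, 1), xyz)   # (B, N, N)
    sq = torch.sum(xyz ** 2, dim=1, keepdim=True)         # (B, 1, N)
    neg_dist = -sq.transpose(2, 1) - inner - sq            # -||xi - xj||^2
    return neg_dist.topk(k=k, dim=-1)[1]                   # (B, N, k) neighbour indices

def get_graph_feature_xyzi(points, k=20):
    # Edge features concatenate (x_j - x_i, x_i) over all channels (XYZ + intensity),
    # but the neighbourhood itself comes from knn_xyz above.
    batch, dims, n = points.shape
    idx = knn_xyz(points, k)                                          # (B, N, k)
    idx_base = torch.arange(batch, device=points.device).view(-1, 1, 1) * n
    idx = (idx + idx_base).view(-1)
    x = points.transpose(2, 1).contiguous()                           # (B, N, dims)
    neighbours = x.view(batch * n, dims)[idx].view(batch, n, k, dims)
    centers = x.view(batch, n, 1, dims).expand(-1, -1, k, -1)
    edge = torch.cat((neighbours - centers, centers), dim=3)          # (B, N, k, 2*dims)
    return edge.permute(0, 3, 1, 2).contiguous()                      # (B, 2*dims, N, k)
```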

@vitakk98

Excuse me, how much does XYZI improve semantic segmentation? Did you try it? And how should I modify the source code?
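
For context, this is the kind of change I was planning to try for the first EdgeConv layer. It is only a sketch with my own class name and defaults, not the repository's actual code. Since DGCNN's edge features stack (x_j - x_i, x_i), a 4-channel XYZI input means the first conv sees 2 * 4 = 8 channels instead of 2 * 3 = 6. Is this the right place to change?

```python
import torch.nn as nn

class FirstEdgeConv(nn.Module):
    # Hypothetical first EdgeConv block widened for 4-D (XYZ + intensity) input.
    def __init__(self, in_dims=4, out_dims=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(2 * in_dims, out_dims, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_dims),
            nn.LeakyReLU(negative_slope=0.2),
        )

    def forward(self, edge_features):
        # edge_features: (batch, 2 * in_dims, num_points, k)
        # Max-pool over the k neighbours, as in the original EdgeConv.
        return self.conv(edge_features).max(dim=-1)[0]
```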
