
Question about usage with SLAM #25

Open
mysterybc opened this issue Sep 23, 2021 · 13 comments

Comments

@mysterybc

Hi! Thanks for your great work!

I've just read your paper and have a few questions about it.
The paper directly uses the poses estimated by the SLAM algorithm and transforms the point cloud sequence into the current frame before computing the residual images, right? So the poses need to be accurate, or the residual images will be wrong. Am I right?
If so, when a moving object causes large drift in the odometry, your algorithm might not improve the odometry accuracy, since the moving object could not be identified accurately.
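For context, the residual-image idea can be sketched roughly as follows: project the current scan and a pose-aligned past scan into range images, then take the normalized range difference. This is a toy NumPy sketch with made-up projection parameters, not the paper's actual implementation:

```python
import numpy as np

def spherical_projection(points, h=64, w=900, fov_up=3.0, fov_down=-25.0):
    """Project an (N, 3) point cloud into an (h, w) range image (simplified)."""
    fov_up, fov_down = np.radians(fov_up), np.radians(fov_down)
    r = np.linalg.norm(points, axis=1)
    yaw = -np.arctan2(points[:, 1], points[:, 0])
    pitch = np.arcsin(points[:, 2] / np.maximum(r, 1e-8))
    u = ((yaw / np.pi + 1.0) * 0.5 * w).astype(int) % w
    v = ((fov_up - pitch) / (fov_up - fov_down) * h).clip(0, h - 1).astype(int)
    image = np.full((h, w), -1.0)          # -1 marks empty pixels
    image[v, u] = r
    return image

def residual_image(curr_points, past_points, T_curr_past):
    """Transform the past scan into the current frame via the estimated
    relative pose, then take the normalized range difference per pixel."""
    past_h = np.hstack([past_points, np.ones((len(past_points), 1))])
    past_in_curr = (T_curr_past @ past_h.T).T[:, :3]
    curr_range = spherical_projection(curr_points)
    past_range = spherical_projection(past_in_curr)
    valid = (curr_range > 0) & (past_range > 0)
    res = np.zeros_like(curr_range)
    res[valid] = np.abs(curr_range[valid] - past_range[valid]) / curr_range[valid]
    return res
```

Note how a pose error shifts `past_in_curr`, which directly corrupts the residual — exactly the sensitivity the question is about.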

Thanks for your reply in advance!!

@Chen-Xieyuanli
Member

Hey @mysterybc, thanks for following our work. Yes, you are right: if the odometry is not accurate, it will influence the MOS results. We also showed an ablation study on noisy poses in Figure 4. You can see that the MOS performance drops as the poses get noisier, until the residual images become so noisy that they look nothing like the training data and are effectively ignored by the network.

However, in a real application, the proposed method should be run together with pose estimation, which means we can estimate the pose and conduct MOS iteratively. This may help both pose estimation and MOS, and may avoid large drift in the local pose estimates.
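A minimal skeleton of such an alternation might look like this; `estimate_odometry` and `segment_moving` are hypothetical placeholders (stubbed out below), not part of any released code:

```python
import numpy as np

def estimate_odometry(prev_scan, curr_scan):
    """Placeholder: any LiDAR odometry (e.g. ICP) returning a 4x4 pose increment."""
    return np.eye(4)

def segment_moving(curr_scan, past_scans, poses):
    """Placeholder: MOS returning a boolean mask of moving points."""
    return np.zeros(len(curr_scan), dtype=bool)

def slam_with_mos(scans, refinements=2):
    """Alternate odometry and MOS: segment moving points with the current
    pose estimate, drop them, and re-estimate the pose on static points."""
    poses, past = [np.eye(4)], []
    for prev_scan, curr_scan in zip(scans, scans[1:]):
        static_prev, static_curr = prev_scan, curr_scan
        for _ in range(refinements):
            delta = estimate_odometry(static_prev, static_curr)
            pose = poses[-1] @ delta
            moving = segment_moving(curr_scan, past, poses + [pose])
            static_curr = curr_scan[~moving]   # refine on static points only
        poses.append(pose)
        past.append(curr_scan)
    return poses
```

Each refinement pass costs one odometry step plus one MOS inference, which is why the runtime question below matters.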

@mysterybc
Author

Thanks!
One more question: if I estimate the pose and conduct MOS iteratively, can it still run in real time? I think iterating 2-3 times would take most LiDAR odometry pipelines beyond 100 ms, which means they can't reach 10 Hz. I didn't take MOS into account, since I don't know its running time.

@Chen-Xieyuanli
Member

Yes, runtime could be a problem. The best-performing MOS model runs at around 20 Hz. There are also faster models, but their MOS performance is worse. I have never tried this idea before; it's interesting and worth trying.

@A1-one

A1-one commented Oct 6, 2021

How can I estimate poses using SLAM?
Can you provide a link?
Thanks in advance!

@Chen-Xieyuanli
Member

Hey @A1-one, one easy way is to use our SuMa with the cleaned scans. You can compare the results before and after cleaning to see the influence of the moving objects.

@A1-one

A1-one commented Oct 7, 2021

Thank you for the reply @Chen-Xieyuanli. I have the point cloud data in the form of .bin files. How can I get the poses corresponding to those .bin files?

@Chen-Xieyuanli
Member

> I have the point cloud data in the form of .bin files. How can I get the poses corresponding to those .bin files?

You can use any LiDAR odometry/SLAM method to estimate the poses of your scans. SuMa is rather easy to use, and you can find the documentation here. You could also use ICP-like algorithms to obtain the poses.
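As a rough illustration of the ICP route, here is a toy NumPy sketch that loads a KITTI-style .bin scan and aligns two scans with point-to-point ICP. A real pipeline would use SuMa, LOAM, or a KD-tree-backed ICP library instead of this brute-force version:

```python
import numpy as np

def load_kitti_bin(path):
    """KITTI-style .bin scans are flat float32 arrays of (x, y, z, intensity)."""
    return np.fromfile(path, dtype=np.float32).reshape(-1, 4)[:, :3]

def icp(source, target, iters=20):
    """Minimal point-to-point ICP: returns a 4x4 transform mapping source onto target."""
    T = np.eye(4)
    src = source.copy()
    for _ in range(iters):
        # Brute-force nearest neighbours (fine for toy sizes; use a KD-tree in practice).
        d = np.linalg.norm(src[:, None, :] - target[None, :, :], axis=2)
        matched = target[d.argmin(axis=1)]
        # Kabsch: best rigid transform between the matched point sets.
        mu_s, mu_t = src.mean(0), matched.mean(0)
        U, _, Vt = np.linalg.svd((src - mu_s).T @ (matched - mu_t))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        step = np.eye(4)
        step[:3, :3], step[:3, 3] = R, t
        src = src @ R.T + t
        T = step @ T                      # accumulate the pose increment
    return T
```

Chaining the per-pair transforms from consecutive scans gives the odometry poses; the drift caused by moving objects in this naive version is exactly what MOS-based cleaning is meant to reduce.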

@A1-one

A1-one commented Oct 11, 2021

Hey @Chen-Xieyuanli, I am getting this runtime error while running the visualizer in SuMa:

```
OpenGL Context Version 3.3 core profile
GLEW initialized.
OpenGL context version: 3.3
OpenGL vendor string : VMware, Inc.
OpenGL renderer string: llvmpipe (LLVM 9.0, 256 bits)
Segmentation fault (core dumped)
```

@Chen-Xieyuanli
Member

> Hey @Chen-Xieyuanli, I am getting this runtime error while running the visualizer in SuMa: OpenGL Context Version 3.3 core profile ... Segmentation fault (core dumped)

I have never met such a problem before. Could you please open an issue in the SuMa repo? You may get a solution there.

@mysterybc
Author

> Yes, runtime could be a problem. The best-performing MOS model runs at around 20 Hz. There are also faster models, but their MOS performance is worse. I have never tried this idea before; it's interesting and worth trying.

I recently got caught up in some projects and forgot to reply. I'll try it when I finish these projects. Thanks a lot!

@mysterybc
Author

> Hey @Chen-Xieyuanli, I am getting this runtime error while running the visualizer in SuMa: OpenGL Context Version 3.3 core profile ... Segmentation fault (core dumped)

Hi, if you haven't solved this problem and still want to use LiDAR SLAM to estimate poses, I suggest trying LOAM, since it is easy to set up and its performance is satisfactory.

@A1-one

A1-one commented Oct 21, 2021

> Hi, if you haven't solved this problem and still want to use LiDAR SLAM to estimate poses, I suggest trying LOAM, since it is easy to set up and its performance is satisfactory.

Thank you @mysterybc

@ybyzy

ybyzy commented Nov 2, 2022

Does this mean that you first need to estimate the pose with the SLAM system, and then remove the dynamic objects?
Thanks in advance!
