the prediction of scared dataset #77

Open
xwx0924 opened this issue Oct 26, 2022 · 6 comments

xwx0924 commented Oct 26, 2022

Hello! When I use the STTR-light pretrained model and convert the disparity map to a depth map and then to a point cloud, the result is bad on the SCARED dataset. However, on another endoscopy dataset, the result is very good. So, does the SCARED dataset need some extra settings?
SCARED dataset: [images 1–4]
Other endoscope: [images 6, 7, 8, 5]

mli0603 (Owner) commented Oct 26, 2022

Hi @xwx0924

I can't clearly tell the quality of the depth map from the images above. It looks reasonable?

  • Have you rectified the images?
  • Even if you have rectified the images, I know there are calibration errors in the SCARED dataset, which may lead to poor performance.
  • Even if everything is perfect, STTR still has failure cases, especially in reflective regions. Artifacts should be expected.

I hope these help!

xwx0924 (Author) commented Oct 26, 2022

Hi Li~
I selected the video of dataset8-keyframe1 from the SCARED dataset, took 20 pictures, cropped them into left and right halves, and did not rectify the images. Then, using inference_example.ipynb and the sttr-light pretrained model, I input one image pair at a time and got the disparity map and occlusion map as output.
I am particularly confused: the output disparity map looks okay (artifacts in the reflective parts also show up in the output occlusion map), but when it is transformed to a depth map (using the preprocessing parameters of the SCARED dataset, baseline = 4.14339, focal = 1035), and especially when the depth map is converted into a 3D point cloud (fourth image above), the result is completely inconsistent with the ground truth. However, when I input image pairs from other datasets, the converted point cloud is similar to the ground truth (last image). So, for the SCARED dataset, I don't know whether any extra settings are needed when predicting the disparity map (or whether there is an error in my process), and what to pay attention to when converting to a depth map.
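For reference, the standard pinhole conversion is depth = focal × baseline / disparity, and the point cloud is the per-pixel back-projection through the camera intrinsics. Below is a minimal NumPy sketch using the baseline and focal values quoted above (with the SCARED calibration the baseline is in mm and the focal length in pixels, so depth comes out in mm); fx, fy, cx, cy are placeholder intrinsics, not values taken from this thread:

```python
import numpy as np

# Values quoted in the comment above (assumed: baseline in mm, focal in px).
BASELINE = 4.14339
FOCAL = 1035.0

def disparity_to_depth(disparity, baseline=BASELINE, focal=FOCAL):
    """Pinhole stereo: depth = focal * baseline / disparity (0 where invalid)."""
    depth = np.zeros_like(disparity, dtype=np.float32)
    valid = disparity > 0
    depth[valid] = focal * baseline / disparity[valid]
    return depth

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Back-project a depth map into an N x 3 point cloud using intrinsics."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]          # per-pixel row (v) and column (u) indices
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]    # keep only valid-depth pixels
```

If the point cloud is distorted only on SCARED, the usual suspects are unrectified inputs (disparity is only meaningful along horizontal epipolar lines) and a unit mismatch between the baseline and the ground-truth depth.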

xwx0924 (Author) commented Oct 26, 2022

I noticed the color of the left and right images is different. Do you know how to rectify these images? Thank you~
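For rectification, OpenCV's stereoRectify / initUndistortRectifyMap / remap pipeline is the usual route. A minimal sketch, assuming the per-keyframe SCARED calibration (camera matrices, distortion coefficients, and the left-to-right rotation R and translation T) is available; all numeric values below are placeholders, not the real calibration:

```python
import cv2
import numpy as np

# Placeholder calibration; in practice read K1, d1, K2, d2, R, T from the
# per-keyframe SCARED calibration files.
K1 = K2 = np.array([[1035.0, 0.0, 512.0],
                    [0.0, 1035.0, 512.0],
                    [0.0, 0.0, 1.0]])
d1 = d2 = np.zeros(5)
R = np.eye(3)                       # rotation: left camera frame -> right camera frame
T = np.array([-4.14339, 0.0, 0.0])  # translation (baseline) in mm

left = cv2.imread("left.png")
right = cv2.imread("right.png")
h, w = left.shape[:2]

# Rectifying rotations and new projection matrices for both cameras.
R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(K1, d1, K2, d2, (w, h), R, T)

# Build remap tables and warp both images so epipolar lines become horizontal.
map1x, map1y = cv2.initUndistortRectifyMap(K1, d1, R1, P1, (w, h), cv2.CV_32FC1)
map2x, map2y = cv2.initUndistortRectifyMap(K2, d2, R2, P2, (w, h), cv2.CV_32FC1)
left_rect = cv2.remap(left, map1x, map1y, cv2.INTER_LINEAR)
right_rect = cv2.remap(right, map2x, map2y, cv2.INTER_LINEAR)
```

Rectification also equalizes the geometry but not the color; a photometric difference between the two views is normal and usually does not need correction for disparity estimation.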

MichaelWangGo commented

Hi bro, can I ask how you got the occ_left data from the SCARED dataset? Thanks in advance~

Runshi-Zhang commented
Hello, do you know how to get the occ_left data from the SCARED dataset? I mainly don't know what this data is.

MichaelWangGo commented

Hi, yes, I got the occ_left data. You can refer to https://github.com/mli0603/stereo-transformer/blob/main/utilities/misc.py#L69
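The linked utility derives the occlusion mask from disparity. As an illustration only (my own sketch of a standard left-right consistency check, not necessarily the exact logic in misc.py), occ_left can be computed from the two disparity maps like this:

```python
import numpy as np

def occ_left_from_disparity(disp_left, disp_right, threshold=1.0):
    """Left-right consistency check: a left-image pixel is marked occluded if
    its match in the right image does not carry (nearly) the same disparity."""
    h, w = disp_left.shape
    u = np.tile(np.arange(w, dtype=np.float32), (h, 1))
    u_right = u - disp_left                       # matched column in the right image
    inside = (u_right >= 0) & (u_right < w)       # match falls inside the image
    cols = np.clip(np.round(u_right).astype(np.int64), 0, w - 1)
    disp_right_at_match = np.take_along_axis(disp_right, cols, axis=1)
    # Occluded: match is out of view, or the two disparities disagree.
    return ~inside | (np.abs(disp_left - disp_right_at_match) > threshold)
```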
