
about the real scans dataset #1

Open · yzp12 opened this issue Mar 21, 2018 · 9 comments

yzp12 commented Mar 21, 2018

Hi! Thanks for sharing the code and data. I looked at the real scans dataset, and it seems that the real depth scans all come from identical camera configurations. Do you first synthesize the shapes from multiple real depth scans and then project the shapes along some fixed directions to obtain each depth scan? If not, could you explain a bit how you obtain those real depth scans? Thank you!

Yang7879 (Owner) commented:

@yzp12 The 3D shape is generated by fusing a long sequence (thousands) of depth images from a Kinect V2. A number of depth images are randomly selected from that sequence, then transformed and voxelized to form the 2.5D partial views.

All depth views indeed share the same camera parameters because we used the same Kinect V2 to collect all the data.
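For readers wondering what this voxelization step looks like in practice, here is a minimal NumPy sketch of back-projecting one depth image through a pinhole model and marking occupied voxels. The intrinsics `fx, fy, cx, cy`, the grid resolution, and the normalization are illustrative assumptions, not the exact values or code used in this repository:

```python
import numpy as np

def depth_to_voxels(depth, fx, fy, cx, cy, res=64):
    """Back-project a depth image (meters) to 3D points with the
    pinhole model, then mark occupied cells of a res^3 grid."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0                     # Kinect reports 0 where depth is missing
    z = depth[valid]
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fy
    pts = np.stack([x, y, z], axis=1)

    # Normalize the points into [0, 1], then scale to voxel indices.
    pts -= pts.min(axis=0)
    pts /= pts.max() + 1e-8
    idx = np.clip((pts * (res - 1)).astype(int), 0, res - 1)

    grid = np.zeros((res, res, res), dtype=np.uint8)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1
    return grid
```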


yzp12 commented Mar 22, 2018

@Yang7879 Thank you for the explanation! If I understand correctly, the depth scans are voxelized in such a way that if s[i,j,k] == 1 (where s is the depth scan voxelization), then the shape voxelization m also has m[i,j,k] == 1, i.e. the depth scan voxelization is aligned with the shape voxelization?

Yang7879 (Owner) commented:

@yzp12 A pair consisting of the partial 2.5D view and the ground-truth 3D shape can be voxelized and aligned as you described, but in all of our experiments both the partial 2.5D view and the true 3D shape are centered in a 256^3 voxel grid for training/testing, which means they are no longer aligned. In earlier experiments we found no difference between using aligned and centered data pairs.
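As a concrete illustration of this centering step, applied independently to the 2.5D view and the 3D shape (a sketch only; the function name and bounding-box cropping are assumptions, not the repository's actual preprocessing code):

```python
import numpy as np

def center_in_grid(occ, size=256):
    """Crop an occupancy volume to its tight bounding box and place
    the crop at the center of a size^3 grid. Centering the view and
    the shape independently discards their original alignment."""
    coords = np.argwhere(occ > 0)
    lo = coords.min(axis=0)
    hi = coords.max(axis=0) + 1
    crop = occ[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]

    out = np.zeros((size, size, size), dtype=occ.dtype)
    off = (size - np.array(crop.shape)) // 2
    out[off[0]:off[0] + crop.shape[0],
        off[1]:off[1] + crop.shape[1],
        off[2]:off[2] + crop.shape[2]] = crop
    return out
```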


yzp12 commented Mar 22, 2018

@Yang7879 Thank you!!

yzp12 closed this as completed Mar 22, 2018

yzp12 commented Mar 30, 2018

Hi @Yang7879, may I know how you prune the depth scans obtained from the Kinect sensor? I notice that the depth scans you used contain only the object (no background). Do you do this manually? Thank you!

yzp12 reopened this Mar 30, 2018

Yang7879 (Owner) commented:

@yzp12 Yes, we segmented the object manually, but there are existing algorithms for the task.
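One common automatic option is RANSAC plane segmentation, which strips the dominant floor plane from a scan. A sketch using the Open3D library (an illustrative choice, not a tool the authors mention; `scan.ply` is a placeholder filename, not a file from this dataset):

```python
import open3d as o3d

# Fit the dominant plane (typically the floor) with RANSAC and drop it.
# "scan.ply" is a placeholder path, not part of this dataset.
pcd = o3d.io.read_point_cloud("scan.ply")
plane_model, inliers = pcd.segment_plane(distance_threshold=0.01,
                                         ransac_n=3,
                                         num_iterations=1000)
object_only = pcd.select_by_index(inliers, invert=True)  # keep non-plane points
o3d.io.write_point_cloud("object_only.ply", object_only)
```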

PranjalBiswas commented:

@Yang7879, what software/interface did you use for manually segmenting the raw depth images (depth scans)?

Yang7879 (Owner) commented Aug 4, 2018

@PranjaLBiswas27 We use MeshLab. It's pretty easy to remove floors or backgrounds.

PranjalBiswas commented:

Thanks a lot for the information.
