about the real scans dataset #1

Hi! Thanks for sharing the code & data. I looked at the real scans dataset, and it seems the real depth scans all come from an identical camera configuration. I wonder: do you first synthesize the shapes using multiple real depth scans, and then project the shapes in some fixed directions to obtain the depth scans? If not, could you explain a bit how you obtain those real depth scans? Thank you!

Comments
@yzp12 The 3D shape is generated by fusing a long sequence (thousands) of depth images from a Kinect V2. A number of depth images are randomly selected from that long sequence, and the selected depth images are transformed and voxelized to become the 2.5D partial views. All depth views indeed share the same camera parameters because we use the same Kinect V2 to collect all data.
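A minimal sketch of what the transform-and-voxelize step could look like for a single depth image, assuming a pinhole camera model; the function name `depth_to_voxels`, the intrinsics, and the grid resolution below are illustrative placeholders, not values from this repo or the paper.

```python
import numpy as np

def depth_to_voxels(depth, fx, fy, cx, cy, res=64):
    """Back-project a depth image (meters) into 3D points, then voxelize.

    fx, fy, cx, cy are pinhole intrinsics; res is the grid resolution.
    All parameters here are illustrative, not the ones used in the paper.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0                    # Kinect reports 0 for missing depth
    z = depth[valid]
    x = (u[valid] - cx) * z / fx         # pinhole back-projection
    y = (v[valid] - cy) * z / fy
    pts = np.stack([x, y, z], axis=1)

    # Scale the points into [0, res) and mark the occupied cells.
    mins, maxs = pts.min(0), pts.max(0)
    scale = (res - 1) / (maxs - mins).max()
    idx = np.floor((pts - mins) * scale).astype(int)
    grid = np.zeros((res, res, res), dtype=np.uint8)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1
    return grid
```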
@Yang7879 Thank you for the explanation! If I understand correctly, the depth scans are voxelized in such a way that if s[i,j,k] == 1 (where s is the depth scan voxelization), then for the shape voxelization m we also have m[i,j,k] == 1, i.e. the depth scan voxelization is aligned with the shape voxelization?
@yzp12 A pair of the partial 2.5D view and the ground-truth 3D shape can be voxelized and aligned as you described, but in all of our experiments both the partial 2.5D view and the true 3D shape are centered in a 256^3 voxel grid for training/testing, which means they are no longer aligned. We ran experiments previously and found no difference between using aligned and centered data pairs.
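For concreteness, a minimal sketch of the centering described above, assuming binary occupancy grids whose occupied region fits inside the target resolution; `center_in_grid` is a hypothetical helper, not code from this repo.

```python
import numpy as np

def center_in_grid(vox, res=256):
    """Re-center the occupied region of a binary voxel grid in a res^3 grid.

    The partial 2.5D view and the full 3D shape would each be centered
    independently, so their cells no longer align one-to-one. Assumes the
    occupied bounding box fits within res^3.
    """
    occ = np.argwhere(vox > 0)           # (N, 3) indices of occupied cells
    lo, hi = occ.min(0), occ.max(0)      # bounding box of the occupied region
    extent = hi - lo + 1
    offset = (res - extent) // 2         # corner that centers the bounding box
    out = np.zeros((res, res, res), dtype=vox.dtype)
    out[offset[0]:offset[0] + extent[0],
        offset[1]:offset[1] + extent[1],
        offset[2]:offset[2] + extent[2]] = vox[lo[0]:hi[0] + 1,
                                               lo[1]:hi[1] + 1,
                                               lo[2]:hi[2] + 1]
    return out
```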
@Yang7879 Thank you!!
Hi @Yang7879, may I ask how you prune the depth scans obtained from the Kinect sensor? I notice that the depth scans you used contain only the object (no background). Do you do this manually? Thank you!
@yzp12 Yes, we segmented the object manually, but there are existing algorithms for the task. |
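The thread doesn't name those existing algorithms, but one common automatic baseline is to crop the point cloud by depth range and then discard the dominant plane (floor or tabletop) found by RANSAC. A minimal sketch, assuming an (N, 3) point array; the function name and all thresholds are illustrative.

```python
import numpy as np

def segment_foreground(pts, z_near=0.5, z_far=1.5, plane_tol=0.01, iters=200):
    """Crop a point cloud to a depth range, then drop the dominant plane
    (e.g. the floor) found with a tiny RANSAC. Thresholds are illustrative.
    """
    pts = pts[(pts[:, 2] > z_near) & (pts[:, 2] < z_far)]   # depth crop

    rng = np.random.default_rng(0)
    best_inliers = np.zeros(len(pts), dtype=bool)
    for _ in range(iters):
        p0, p1, p2 = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)   # normal of the candidate plane
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                     # degenerate (collinear) sample
        n /= norm
        dist = np.abs((pts - p0) @ n)    # point-to-plane distances
        inliers = dist < plane_tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return pts[~best_inliers]            # keep everything off the plane
```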
@Yang7879, what software/interface did you use to manually segment the raw depth images (depth scans)?
@PranjaLBiswas27 We use MeshLab. It's pretty easy to remove floors or backgrounds.
Thanks a lot for the information. |