How do I get the point cloud data from Realsense and convert it to the NPY files needed for the project demo? #7
Comments
"Smoothed pc comes from averaging the depth for 10 frames and removing the pixels with jittery depth between those 10 frames." How exactly is this step implemented? Is there a reference example?
I understand it's 10 frames of point cloud data, right?
Yes, you are correct, it is 10 frames of point-cloud data.
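A minimal NumPy sketch of that smoothing step, as I understand it from the description above (the function name and the 5 mm jitter threshold are my own assumptions, not the project's actual values):

```python
import numpy as np

def smooth_depth(frames, jitter_thresh=0.005):
    """Average a stack of depth frames and drop jittery pixels.

    frames: (N, H, W) array of depth in meters.
    Pixels whose per-frame std exceeds jitter_thresh, or that contain
    zeros (missing depth) in any frame, are set to 0.
    """
    stack = np.asarray(frames, dtype=np.float32)
    mean = stack.mean(axis=0)
    std = stack.std(axis=0)
    valid = (std < jitter_thresh) & np.all(stack > 0, axis=0)
    return np.where(valid, mean, 0.0)
```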
What you need to do is read in the data from the Intel RealSense, convert it to PyTorch tensors, and feed those to the network.
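A rough sketch of preparing a captured cloud for the network; the expected point count and the (1, N, 3) batch layout are assumptions on my part, and the final PyTorch step is shown only as a comment:

```python
import numpy as np

def cloud_to_network_input(xyz, num_points=20000, seed=0):
    """Prepare an (N, 3) point cloud for inference: sample a fixed number
    of points and add a batch dimension, giving a (1, num_points, 3)
    float32 array. (The exact input size the network expects is an
    assumption; check the demo's test NPY files.)"""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(xyz), num_points, replace=len(xyz) < num_points)
    batch = xyz[idx].astype(np.float32)[None, ...]
    # With PyTorch installed, the final step would be:
    #   tensor = torch.from_numpy(batch)  # then tensor.to(device)
    return batch
```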
Thank you very much, I will try it in the next few days!
What does the network's output value xyz stand for?
xyz holds the 3D coordinates of the point cloud.
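For reference, such xyz coordinates can be recovered from a depth image with standard pinhole back-projection. This sketch takes hypothetical intrinsics (fx, fy, cx, cy) as plain parameters rather than reading them from the RealSense:

```python
import numpy as np

def depth_to_xyz(depth, fx, fy, cx, cy):
    """Back-project a depth image (H, W, in meters) to an (M, 3) point
    cloud with the pinhole model: X=(u-cx)*z/fx, Y=(v-cy)*z/fy, Z=z.
    Pixels with no depth (z == 0) are dropped."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    xyz = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return xyz[xyz[:, 2] > 0]
```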
I wrote my own code to gather data from the RealSense and run inference, so here's the code. Link: https://pan.baidu.com/s/1Q8Xbk9sW2sZ50Wk_aUz4LA password: coo3
Your link is dead |
@imdoublecats |
I am using your repository; I tried the demo and the results look very good.
I see that the demo code takes test data in NPY format as input, and outputs the grasp scores and grasp locations.
Now I want to convert the data from the Intel RealSense into a similar input format, and then pass the results to the robot controller. How do I do that?
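An end-to-end sketch under the assumption that the demo's NPY files simply hold an (N, 3) float32 point cloud; in a real setup the depth frames would come from pyrealsense2 and the intrinsics from the camera, whereas here they are plain array/parameter stand-ins:

```python
import numpy as np

def realsense_frames_to_npy(depth_frames, fx, fy, cx, cy, path):
    """Average the captured depth frames, back-project them to 3D with the
    pinhole model, and save the cloud as a .npy file for the demo to load.
    The (N, 3) float32 file layout is an assumption, not a confirmed spec."""
    depth = np.asarray(depth_frames, np.float32).mean(axis=0)  # (H, W) meters
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    xyz = np.stack([(u - cx) * depth / fx,
                    (v - cy) * depth / fy,
                    depth], axis=-1).reshape(-1, 3)
    xyz = xyz[xyz[:, 2] > 0]                                   # drop missing depth
    np.save(path, xyz.astype(np.float32))
    return xyz
```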