Hi @DegardinBruno,
Thank you for your amazing work! I've managed to run some of the code using your documentation and examples. I was hoping to extract the skeletons from Human3.6M myself, and maybe also from another popular dataset like Charades. Could you suggest how to obtain the two files (the .npy file and the accompanying label.pkl file)? What input files are required to generate them? I'm looking to create these two files from both the original Human3.6M videos and the Charades dataset, to test with your model!
Thank you :)
Hey @Meg-R, sorry for the late reply! We already use Human3.6M, so the preprocessed files can be downloaded from the links in the README.md.
Regarding how to obtain the .npy and .pkl files from a new dataset: the .npy file is composed of samples with shape (N, C, T, V) or (N, C, T, V, M), where N is the number of samples, C the coordinates, T the frames, V the joints, and M the number of skeletons performing the action (for actions with two or more people; not yet supported by Kinetic-GAN).
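If it helps, here is a minimal sketch of how the .npy file could be assembled. The sizes, the filename, and the random data standing in for a real pose-estimation step are all just placeholders:

```python
import numpy as np

# Illustrative sizes only: N samples, C coordinates (e.g., x, y, z),
# T frames, V joints (Human3.6M skeletons are often reduced to 17 joints).
N, C, T, V = 4, 3, 300, 17

# Stand-in for a real skeleton extractor: in practice each (C, T, V)
# slice would come from running pose estimation on one video.
data = np.random.randn(N, C, T, V).astype(np.float32)

np.save('h36m_data.npy', data)         # hypothetical filename
print(np.load('h36m_data.npy').shape)  # (4, 3, 300, 17)
```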
The .pkl file is composed of two lists with shape (2, N), where the first list holds the sample names and the second the action labels (e.g., with 60 classes the labels go from 0 to 59, inclusive).
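And a matching sketch for the label file, with made-up sample names and labels; the only structural requirement is that both lists are in the same order as the rows of the .npy array:

```python
import pickle

# One name and one integer label per sample, aligned with the .npy rows;
# labels run from 0 to num_classes - 1.
sample_names = ['S1_Walking_1', 'S1_Eating_1', 'S5_Walking_1', 'S5_Eating_1']  # hypothetical
labels = [0, 1, 0, 1]

with open('h36m_label.pkl', 'wb') as f:
    pickle.dump((sample_names, labels), f)

# Reading it back gives the (2, N) structure described above.
with open('h36m_label.pkl', 'rb') as f:
    names, actions = pickle.load(f)
```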