inp = crop(img_trans, center, scale, [256, 256])
trans_pts3d = transform_preds(torch.tensor(pts3d), center, scale, [256, 256])
I want to crop all data to [256, 256], and I transform the ground-truth landmarks with your transform_preds function. But in the end the landmarks do not match the cropped image. Can you help me correct this? Thank you!
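A likely cause (assuming this repo follows the common pose/landmark convention where `transform_preds` maps network outputs *back* from heatmap space to the original image): applying it to ground-truth points moves them in the wrong direction. To label the crop, the landmarks need the *forward* transform instead. A minimal sketch under the usual "scale relative to a 200 px reference" convention; `get_transform` and `transform_points` are hypothetical helpers for illustration, not the repo's own code:

```python
import numpy as np

def get_transform(center, scale, res, reference=200.0):
    """Affine matrix mapping original-image coords into a res[0] x res[1] crop.

    Assumes `scale` = subject size / 200 px (verify against the repo's crop code).
    """
    h = reference * scale  # side length of the source box in original pixels
    t = np.eye(3)
    t[0, 0] = res[1] / h   # x scale
    t[1, 1] = res[0] / h   # y scale
    t[0, 2] = res[1] * (-center[0] / h + 0.5)  # x translation
    t[1, 2] = res[0] * (-center[1] / h + 0.5)  # y translation
    return t

def transform_points(pts, center, scale, res):
    """Map (N, 2) ground-truth landmarks into crop coordinates (forward transform)."""
    pts = np.asarray(pts, dtype=np.float64)
    pts_h = np.concatenate([pts, np.ones((len(pts), 1))], axis=1)  # homogeneous coords
    return (get_transform(center, scale, res) @ pts_h.T).T[:, :2]

# A landmark at the crop center should land at the middle of the 256x256 crop.
gt = transform_points([[100.0, 100.0]], center=(100, 100), scale=1.0, res=(256, 256))
```

If the repo exposes the inverse flag some implementations of `transform_preds` have, passing the forward direction there should give the same result.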
@wqz960
Hi, I also want to train this network on my own dataset. My input picture size is 720x1280, but I don't know how to set the scale and center. Can you give me some advice?
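If this repo uses the common person/face-box convention (center = box midpoint, scale = longer side divided by a 200 px reference, often padded by ~25%), a sketch for a 720x1280 frame could look like the following. The `reference` and `padding` values are assumptions to check against the repo's own dataset loader:

```python
import numpy as np

def center_scale_from_bbox(x1, y1, x2, y2, reference=200.0, padding=1.25):
    """Center/scale for a bounding box under the 200 px reference convention.

    `padding` loosens the crop around the subject; both defaults are
    assumptions, not values taken from this repository.
    """
    center = np.array([(x1 + x2) / 2.0, (y1 + y2) / 2.0])
    scale = max(x2 - x1, y2 - y1) / reference * padding
    return center, scale

# Using the whole 720 (H) x 1280 (W) frame as the box:
center, scale = center_scale_from_bbox(0, 0, 1280, 720)
```

If you have per-subject bounding boxes (e.g. from a detector), pass those instead of the full frame; cropping the whole 720x1280 image squashes its aspect ratio into 256x256.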