Training Loss explodes when training on NTU-Dataset 120 #4
Comments
Hello @shubhMaheshwari! Thanks! GANs are very sensitive to hyperparameters, even batch size! Can you try a smaller batch size like 32, 64, or 128, and then come back to us with your results?
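As an illustration only, such a rerun amounts to changing the --batch_size flag; the flag names and values below are copied from the full command quoted later in this thread, only --batch_size differs, and the data paths are placeholders:

python kinetic-gan.py --b1 0.5 --b2 0.999 --batch_size 64 --channels 3 --checkpoint_interval 10000 --data_path /path/to/NTU-120/xsub/train_data.npy --dataset ntu --label_path /path/to/NTU-120/xsub/train_label.pkl --lambda_gp 10 --latent_dim 512 --lr 0.0002 --mlp_dim 8 --n_classes 120 --n_cpu 8 --n_critic 5 --n_epochs 1200 --sample_interval 5000 --t_size 64 --v_size 25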
Hey @DegardinBruno,
We only made the following changes to the code.
@shubhMaheshwari I will repeat the experiments with the information you provided and come back to you with an answer.
By the way, Kinetic-GAN's loss on NTU-120 for the cross-setup benchmark should show behaviour similar to the attached plot, although values may vary due to random initialization. [loss plot attachment]
Can you provide a single command to train on NTU-120, similar to the one provided in the README?
Thanks
Just one small thing: can you show me your loss evolution?
python visualization/plot_loss.py --batches 1970 --runs kinetic-gan --exp -1
Here is the entire command that I am running:
python kinetic-gan.py --b1 0.5 --b2 0.999 --batch_size 32 --channels 3 --checkpoint_interval 10000 --data_path /home/degardin/DATASETS/st-gcn/NTU-120/xsub/train_data.npy --dataset ntu --label_path /home/degardin/DATASETS/st-gcn/NTU-120/xsub/train_label.pkl --lambda_gp 10 --latent_dim 512 --lr 0.0002 --mlp_dim 8 --n_classes 120 --n_cpu 8 --n_critic 5 --n_epochs 1200 --sample_interval 5000 --t_size 64 --v_size 25
@shubhMaheshwari This is my loss at this moment. It is normal for it to be high at the beginning; the model rapidly learns to generate the human structure before learning to synthesise human motion: [loss plot attachment]
@shubhMaheshwari Did you download the data from our server? Try NTU-60 xsub (the code base's default settings) to see if the same thing happens, or even fewer classes like 5 or 10 (the feeder is ready for that as well), for example with a command along the lines of the sketch below. Feel free to reach out to me.
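A minimal sketch of such a run, assuming the flag names from the command above, placeholder data paths, and that lowering --n_classes is how the feeder is told to use fewer classes:

python kinetic-gan.py --dataset ntu --n_classes 10 --batch_size 32 --data_path /path/to/NTU-60/xsub/train_data.npy --label_path /path/to/NTU-60/xsub/train_label.pkl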
Hey @DegardinBruno
Great work. Thanks for sharing your code!
While training on NTU-120, the generator loss explodes. The only change we made was increasing the batch size from 32 to 380.
Do you know why this could be happening?