Thanks for the work on supporting RNN-T training on CPU (models/language_modeling/pytorch/rnnt/training/cpu). I quickly evaluated the training code and found that the WER stays at 1.00 even after training for 10+ epochs.
I also found a related issue about the loss function used in training: HawkAaron/warp-transducer#93
The gradient on CPU is incorrect. Is this a known issue? Have you ever actually reached the final WER of 0.058 rather than 1.0?
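One way to confirm a suspect CPU gradient like this is a finite-difference gradient check: compare the loss function's analytic gradient against a central-difference numerical estimate. Below is a minimal, self-contained sketch of that technique using a log-softmax NLL as a stand-in loss (it is not the warp-transducer loss; the function names are illustrative only). The same check could be pointed at the transducer loss's logits tensor.

```python
import numpy as np

def log_softmax_nll(logits, target):
    # Negative log-likelihood of `target` under softmax(logits).
    shifted = logits - logits.max()
    log_probs = shifted - np.log(np.exp(shifted).sum())
    return -log_probs[target]

def analytic_grad(logits, target):
    # Closed-form gradient: d(NLL)/d(logits) = softmax(logits) - one_hot(target).
    shifted = logits - logits.max()
    probs = np.exp(shifted) / np.exp(shifted).sum()
    probs[target] -= 1.0
    return probs

def numeric_grad(f, x, eps=1e-6):
    # Central finite differences, one coordinate at a time.
    g = np.zeros_like(x)
    for i in range(x.size):
        xp = x.copy(); xp[i] += eps
        xm = x.copy(); xm[i] -= eps
        g[i] = (f(xp) - f(xm)) / (2.0 * eps)
    return g

rng = np.random.default_rng(0)
logits = rng.normal(size=5)
target = 2
ga = analytic_grad(logits, target)
gn = numeric_grad(lambda x: log_softmax_nll(x, target), logits)
# If the analytic gradient were buggy (as suspected for the CPU
# transducer kernel), this comparison would fail.
print(np.allclose(ga, gn, atol=1e-5))
```

For the real transducer loss, the same pattern applies: perturb individual logits, recompute the loss on CPU, and compare against the gradient the CPU kernel returns; a mismatch would confirm the warp-transducer#93 report.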