Can transformer model reproduce WMT14 English-German BLEU score? #9
Comments
Sorry for missing this issue. Surely there are other ways we can prove that our implementation is correct, for instance by winning the WMT2018 shared task on news translation for English-German:
Well, I have read your paper and here are my questions:
Thanks a lot.
Hi, thank you for the great work and the excellent documentation. I have a question after reading your transformer example, which uses the WMT2017 English-German corpus: have you tested Marian's performance with this example on the WMT2014 English-German corpus, and does it achieve a BLEU score equivalent to the one reported in the original Transformer paper? I think this point is very important, because it is a direct way to demonstrate that your transformer implementation is correct, and it also matters for research use.
Thanks!
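For anyone wanting to run this comparison themselves, the BLEU metric being discussed can be sketched in plain Python. This is an illustrative, self-contained implementation of corpus-level BLEU (Papineni et al., 2002), not the official evaluation: for a real comparison against the Transformer paper you should use a standard tool such as sacreBLEU or multi-bleu.perl, since tokenization and detokenization choices change the score.

```python
# Minimal corpus-level BLEU sketch for sanity-checking translation output.
# Assumes whitespace-tokenized hypothesis/reference strings and a single
# reference per sentence; real WMT scoring supports multiple references
# and standardized tokenization.
import math
from collections import Counter

def ngrams(tokens, n):
    """Count all n-grams of length n in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def corpus_bleu(hypotheses, references, max_n=4):
    """Corpus BLEU in [0, 100] over parallel lists of strings."""
    clipped = [0] * max_n  # clipped n-gram matches per order
    totals = [0] * max_n   # total hypothesis n-grams per order
    hyp_len = ref_len = 0
    for hyp, ref in zip(hypotheses, references):
        h, r = hyp.split(), ref.split()
        hyp_len += len(h)
        ref_len += len(r)
        for n in range(1, max_n + 1):
            h_counts, r_counts = ngrams(h, n), ngrams(r, n)
            clipped[n - 1] += sum(min(c, r_counts[g]) for g, c in h_counts.items())
            totals[n - 1] += max(len(h) - n + 1, 0)
    if min(clipped) == 0:
        return 0.0  # no smoothing: any empty n-gram order zeroes the score
    log_precision = sum(math.log(c / t) for c, t in zip(clipped, totals)) / max_n
    log_brevity = min(0.0, 1.0 - ref_len / hyp_len)  # log of brevity penalty
    return 100.0 * math.exp(log_brevity + log_precision)

# Example: a perfect hypothesis scores 100; a partial match scores lower.
perfect = corpus_bleu(["the cat sat on the mat"], ["the cat sat on the mat"])
partial = corpus_bleu(["the cat sat on the mat"], ["the cat sat on a mat"])
```

Note that even with a correct implementation, matching the paper's 27.3 (base) / 28.4 (big) BLEU on WMT14 En-De also depends on replicating its preprocessing, checkpoint averaging, and beam-search settings.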