I'm running the first computation in Transformer_Captioning.ipynb, the one that tests MultiHeadAttention's relative error. I even copied a senior classmate's code verbatim, but the run still prints:

```
self_attn_output error: 0.449382070034207
masked_self_attn_output error: 1.0
attn_output error: 1.0
```

This doesn't match the senior's result, i.e., the assignment's requirement: "The relative error should be less than e-3." Where could the problem be?
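For reference, the check in the notebook is roughly of the following shape (a minimal sketch: `rel_error` is the standard CS231n helper, while the seed value, tensor shape, and the names `attn` and `expected_self_attn_output` are illustrative assumptions, not copied from the notebook):

```python
import numpy as np
import torch

def rel_error(x, y):
    # CS231n-style relative error between two numpy arrays.
    return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))

torch.manual_seed(231)        # illustrative seed; the notebook fixes its own
data = torch.randn(1, 4, 8)   # illustrative shape: (batch, seq_len, embed_dim)
# self_attn_output = attn(query=data, key=data, value=data)
# print('self_attn_output error:',
#       rel_error(self_attn_output.detach().numpy(), expected_self_attn_output))
```

Note that an error of exactly 1.0 means at least one pair of entries is maximally mismatched (opposite signs, or one side zero), so the output is not merely off by a small numerical factor.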
Could you paste your code here?
Same question lol... Here's what happened: I first wrote my own MultiHeadAttention for Transformer_Captioning.ipynb, and my errors were always 1. Then I copied your implementation directly into my .py file, and the result was still wrong. Finally I opened your Transformer_Captioning.ipynb and called your .py directly, and the result was still incorrect. So I'd like to ask what could be going on here; could it be that Colab and the local environment differ in how the random input data is generated? The errors reported in my local notebook are:

```
self_attn_output error: 0.449382070034207
masked_self_attn_output error: 1.0
attn_output error: 1.0
```
My suspicion is that Colab and the local environment end up generating different data from torch.manual_seed() for some opaque reason. In Transformer_Captioning.ipynb I've already compared many GitHub implementations against my own; run locally they all agree with each other, but they all differ from the expected values baked into the notebook, so it's rather mysterious...
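One quick way to test this hypothesis (a small sketch; the seed 0 is arbitrary) is to run the identical snippet in Colab and locally and compare both the printed tensor and the PyTorch version. PyTorch does not guarantee reproducible RNG streams across releases or platforms, so a version mismatch alone can produce different seeded inputs and hence errors of 1.0 against hard-coded expected values:

```python
import torch

print(torch.__version__)   # compare versions between Colab and local first

torch.manual_seed(0)       # arbitrary fixed seed for the comparison
x = torch.randn(2, 3)
print(x)                   # if these values differ between the two
                           # environments, the seeded RNG streams differ too
```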