
Transformer_Captioning.ipynb produces incorrect results #6

Open
ge1mina023 opened this issue Apr 15, 2023 · 3 comments

Comments

@ge1mina023

I was running the first computation in Transformer_Captioning.ipynb, i.e., the relative-error test for MultiHeadAttention. I even copied your code directly, but the output is still:

```
self_attn_output error:  0.449382070034207
masked_self_attn_output error:  1.0
attn_output error:  1.0
```

This is not the same result as yours, nor what the assignment requires: "The relative error should be less than e-3."
Could you tell me where the problem might be?
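For context, the notebook's check compares the layer's output against stored expected values with a relative-error helper. Below is a minimal sketch of that kind of check; it assumes the standard CS231n-style `rel_error`, and the sample arrays are purely illustrative:

```python
import numpy as np

# CS231n-style relative error: the max over elements of
# |x - y| / max(1e-8, |x| + |y|). Values near 1.0 mean the two
# arrays disagree almost everywhere, not just by rounding noise.
def rel_error(x, y):
    return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))

a = np.array([1.0, 2.0, 3.0])
print(rel_error(a, a))         # 0.0   -> identical outputs
print(rel_error(a, a + 1e-4))  # ~5e-5 -> the "less than e-3" range expected
print(rel_error(a, -a))        # 1.0   -> completely different, as reported above
```

An error of exactly 1.0 therefore usually means the computed output and the stored expected tensor barely overlap at all, which points at wrong computation or wrong input data rather than a small numerical discrepancy.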

@yjb6
Owner

yjb6 commented Apr 23, 2023

Could you paste your code here?

@frank-thou

Same question lol...
Here's what happened: I first wrote my own MultiHeadAttention for Transformer_Captioning.ipynb, and the error from my version was always 1. Then I copied your implementation straight into my .py file, and the result was still wrong. Finally I opened your Transformer_Captioning.ipynb and called your .py directly, and the result was still wrong. So I'd like to ask what might be going on here. Could there be some difference between Colab and the local environment in how the random input data is generated?
The errors reported in my local notebook are:

```
self_attn_output error: 0.449382070034207
masked_self_attn_output error: 1.0
attn_output error: 1.0
```

@frank-thou

I suspect that Colab and the local environment end up generating different data from torch.manual_seed() for some reason I can't pin down. In Transformer_Captioning.ipynb I have already compared many implementations from GitHub against my own; run locally, they all agree with each other, but they all differ from the expected result preset in the notebook. So it's pretty mysterious...
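One way to test this suspicion: run the same seeded generation in both environments and compare the printouts. This is a sketch under the assumption that the notebook seeds with torch.manual_seed; the seed value and tensor shape below are illustrative, not the notebook's actual ones.

```python
import torch

# Run this snippet in Colab and locally, then diff the output.
# If the version or the checksum differs, the seeded input fed to
# MultiHeadAttention differs too, and the expected outputs baked into
# the notebook will fail to match even a correct implementation.
print('torch version:', torch.__version__)

torch.manual_seed(231)        # illustrative seed; use the notebook's actual seed
x = torch.randn(2, 3, 8)      # illustrative shape for the attention input
print('first values:', x.flatten()[:5])
print('checksum:', x.double().sum().item())
```

Worth noting: PyTorch's reproducibility documentation only promises identical results for the same PyTorch release on the same platform, so a version mismatch between Colab and the machine that generated the notebook's expected outputs could by itself explain the discrepancy.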
