【PaddlePaddle Hackathon 4】No.63: add embedding fp16 test #51321
Conversation
Your PR was submitted successfully. Thank you for contributing to this open-source project!

✅ This PR's description meets the template requirements!
@ZzSean @zhangting2020 Could you advise how to resolve this error?
```diff
@@ -15,6 +15,7 @@
 #pragma once

 #include "paddle/phi/core/dense_tensor.h"
+#include "paddle/phi/core/device_context.h"
```
Please confirm whether this include actually needs to be added; same for the ones below.
```python
def test_check_output(self):
    place = core.CUDAPlace(0)
    self.check_output_with_place(place, check_eager=True, atol=1e-2)
```
No need to set atol; the default value is fine.
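A minimal sketch of the check after applying this suggestion, assuming OpTest's default FP16 tolerance is appropriate here:

```python
def test_check_output(self):
    place = core.CUDAPlace(0)
    # No explicit atol: rely on OpTest's default tolerance for FP16.
    self.check_output_with_place(place, check_eager=True)
```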
```python
    ['X', 'Y'],
    'Out',
    check_eager=True,
    max_relative_error=1e-2,
```
Same as above.
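Likewise for the gradient check with the override dropped; a sketch assuming the arguments above belong to a check_grad_with_place call:

```python
def test_check_grad(self):
    place = core.CUDAPlace(0)
    # No explicit max_relative_error: use the framework default.
    self.check_grad_with_place(place, ['X', 'Y'], 'Out', check_eager=True)
```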
```diff
@@ -294,6 +294,40 @@ def test_param_dtype():
     )


+@unittest.skipIf(
```
FP16 does not need the skip decorator.
```diff
+    or not core.is_float16_supported(core.CUDAPlace(0)),
+    "core is not complied with CUDA and not support the float16",
+)
+class TestEmbeddingFP16OP(OpTest):
```
As suggested earlier, inherit directly from TestLookupTableOp.
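A minimal sketch combining both suggestions (drop the skipIf guard for FP16 and reuse TestLookupTableOp); init_dtype is an assumed hook name and may differ in the actual test file:

```python
class TestEmbeddingFP16OP(TestLookupTableOp):
    def init_dtype(self):
        # Only the dtype differs from the base FP32/FP64 test.
        self.dtype = np.float16
```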
@ZzSean Could you advise how to resolve the CI-Coverage failure?

@ZzSean Please review when you have time~
```diff
+    or not core.is_bfloat16_supported(core.CUDAPlace(0)),
+    "core is not complied with CUDA and not support the bfloat16",
+)
+class TestLerpBF16(OpTest):
```
There are too few cases; the number of BF16 cases should match the FP16 tests.
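One hedged way to align the counts is to give each FP16 case a BF16 sibling by subclassing; the class name and init_shape hook below are illustrative, not the PR's actual code:

```python
class TestLerpBF16Case2(TestLerpBF16):
    def init_shape(self):
        # Same shape as the matching FP16 case.
        self.shape = [2, 50]
```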
```diff
-        table = np.random.random((17, 31)).astype("float64")
+        self.init_dtype()
+
+        table = np.random.random((17, 32)).astype(self.dtype)
```
You can't change the test case just because the original one doesn't pass.

No unit test was added for broadcast_tensor.
Sorry to inform you that f4cdd47's CIs have passed for more than 7 days. To prevent PR conflicts, you need to re-run all CIs manually.
@zhangting2020 Please help review~ I'm not sure how to resolve the PR-CI-Coverage failure.
```diff
-        table = np.random.random((17, 31)).astype("float64")
+        self.init_dtype()
+
+        table = np.random.random((17, 31)).astype(self.dtype)
```
This operator's low-precision implementation has a bug: it mishandles inputs with an odd number of elements. For now, switch the low-precision cases to a shape with an even element count; we will fix the operator itself later. The BF16 unit test needs the same change.
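A sketch of the requested workaround: keep the original (17, 31) table in the full-precision test and use an even element count only in the low-precision cases until the operator is fixed. Class and hook names are assumptions:

```python
class TestLookupTableFP16(TestLookupTableOp):
    def init_dtype(self):
        self.dtype = np.float16

    def init_shape(self):
        # Even element count (17 * 32) sidesteps the known low-precision
        # bug with odd element counts; the BF16 test would override
        # init_shape the same way.
        self.table_shape = (17, 32)
```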
@zhangting2020 The changes are done; please review~
PR types
Others

PR changes
APIs

Description
Related links:
#51281
#54871