[lower_to_mlir] Lower reduce op from tt-forge to mlir #78
Conversation
co_out = compiled_model(*inputs)

co_out = [co.to("cpu") for co in co_out]
assert compare_with_golden_pcc(golden=fw_out, calculated=co_out[0], pcc=0.99)
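For reference, a PCC check computes the Pearson correlation coefficient between the golden and calculated tensors and compares it against a threshold. A minimal sketch of what such a helper might look like, assuming a torch-based implementation; the actual compare_with_golden_pcc added in this PR may differ:

import torch

def compare_with_golden_pcc(golden, calculated, pcc=0.99):
    # Flatten both tensors and compute the 2x2 correlation matrix;
    # the off-diagonal entry is the Pearson correlation coefficient.
    stacked = torch.stack([golden.flatten(), calculated.flatten()])
    computed_pcc = torch.corrcoef(stacked)[0, 1].item()
    return computed_pcc >= pcc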
Should we use this new function for verifying the output in the other tests as well?
I agree, can we update the rest of test/mlir?
Also, please update pybuda/test/mlir/mnist/test_inference.py. At the moment, it has similar issues to the other ops. However, the current implementation doesn't detect them, because asserting [False] is treated as a passing check (a non-empty list is truthy). In sum, the code should look something like this:
import torch
import pybuda

def test_mnist_inference():
    inputs = [torch.rand(1, 784)]

    framework_model = MNISTLinear()
    fw_out = framework_model(*inputs)

    compiled_model = pybuda.compile(framework_model, sample_inputs=inputs)
    co_out = compiled_model(*[i.to("tt") for i in inputs])

    co_out = co_out[0].to("cpu")
    # This will fail properly on a mismatch. Should be replaced with
    # compare_with_golden_pcc for valid verification.
    assert torch.allclose(fw_out, co_out)
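To illustrate the [False] pitfall mentioned above (the variable name here is for illustration only):

results = [False]
assert results          # passes silently: a non-empty list is truthy
# assert all(results)   # this form would actually catch the False entry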
I tried running this test, but it fails due to a data type error in ttnn: ttnn.linear: Tensor must be of type {DataType::BFLOAT16, DataType::BFLOAT8_B, DataType::BFLOAT4_B}, but got DataType::FLOAT32
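Given that bfloat16 matched the golden well in the metal tests (see the PR description below), one possible workaround is to cast the inputs and model to bfloat16 before compiling. Whether pybuda.compile handles bfloat16 end to end here is an assumption:

# Cast both the input tensors and the model weights to bfloat16 so
# ttnn.linear receives a supported data type (assumption, untested).
inputs = [torch.rand(1, 784, dtype=torch.bfloat16)]
framework_model = MNISTLinear().to(torch.bfloat16)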
Lowering reduce sum and mean ops from tt-forge to MLIR. Adding end-to-end tests for those ops. Adding a method for PCC comparison against the torch golden instead of using torch.allclose, since allclose fails on reduce ops when the data format is float32.
I checked the metal reduce tests (tests/ttnn/unit_tests/operations/test_mean.py::test_mean): using bfloat16 gives output that matches the golden to the fourth decimal, while float32 differs in the third and fourth decimals.
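A minimal sketch of what one of the added end-to-end reduce tests might look like, following the verification pattern in the diff above. The wrapper module, input shape, and dim/keepdim arguments are assumptions, and compare_with_golden_pcc is assumed to be importable from the test utilities; the actual tests in this PR may differ:

import torch
import pybuda

class ReduceSum(torch.nn.Module):
    def forward(self, x):
        # Reduce over the last dimension, keeping it so the output stays 3D.
        return torch.sum(x, dim=-1, keepdim=True)

def test_reduce_sum():
    inputs = [torch.rand(1, 32, 32)]

    framework_model = ReduceSum()
    fw_out = framework_model(*inputs)

    compiled_model = pybuda.compile(framework_model, sample_inputs=inputs)
    co_out = compiled_model(*inputs)

    co_out = [co.to("cpu") for co in co_out]
    assert compare_with_golden_pcc(golden=fw_out, calculated=co_out[0], pcc=0.99)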