Remove `as_tensor` argument of `set_tensors_from_ndarray_1d` #2615

Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

```
@@            Coverage Diff             @@
##             main    #2615      +/-   ##
==========================================
- Coverage   99.98%   99.98%    -0.01%
==========================================
  Files         196      196
  Lines       17372    17358       -14
==========================================
- Hits        17370    17356       -14
  Misses          2        2
```

☔ View full report in Codecov by Sentry.
@saitcakmak has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
Thanks for making this change!
botorch/optim/utils/numpy_utils.py (Outdated)

```python
torch.as_tensor(vals, device=tnsr.device, dtype=tnsr.dtype)
.to(tnsr)
.view(tnsr.shape)
.to(tnsr)
```
It seems that these `.to()` calls are no-ops in this setup, given that the tensor is initially constructed on the right device with the right dtype?
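A quick standalone check of the no-op claim (a minimal sketch with made-up `tnsr`/`vals` values, not code from the PR):

```python
import numpy as np
import torch

tnsr = torch.zeros(2, 3, dtype=torch.float32)
vals = np.arange(6, dtype=np.float64)

# Constructing the tensor with the target device and dtype up front...
out = torch.as_tensor(vals, device=tnsr.device, dtype=tnsr.dtype)

# ...makes the subsequent .to(tnsr) a no-op: Tensor.to returns the
# tensor itself when its dtype and device already match the target.
assert out.to(tnsr) is out
```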
Suggested change:

```diff
  torch.as_tensor(vals, device=tnsr.device, dtype=tnsr.dtype)
- .to(tnsr)
  .view(tnsr.shape)
- .to(tnsr)
```
Actually, let's do the reshaping first and then use `from_numpy`: `torch.from_numpy(vals.reshape(tnsr.shape)).to(tnsr)`. I did some lightweight benchmarking and this generally is a tad faster.
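A sketch of what that variant looks like in isolation (same `vals`/`tnsr` names as in the diff, with made-up values; whether it is actually faster will depend on machine and array size):

```python
import numpy as np
import torch

tnsr = torch.zeros(2, 3, dtype=torch.float32)
vals = np.arange(6, dtype=np.float64)

# reshape on the NumPy side is a view (no copy); from_numpy then wraps
# the buffer without copying, and a single .to(tnsr) handles both the
# dtype and the device conversion in one step.
out = torch.from_numpy(vals.reshape(tnsr.shape)).to(tnsr)

assert out.shape == tnsr.shape and out.dtype == tnsr.dtype
```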
The benchmarking explains the 22 minute gap between the two comments :D. I updated the diff directly rather than waiting for back & forth.
Looks like this doesn't work if `vals` is a scalar. Reverting to the previous solution.
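A plausible repro of that failure, assuming `vals` can arrive as a NumPy scalar rather than an ndarray (hypothetical value for illustration):

```python
import numpy as np
import torch

vals = np.float64(1.0)  # a NumPy scalar, not an np.ndarray

# torch.from_numpy only accepts np.ndarray, so a scalar raises
# TypeError ("expected np.ndarray (got numpy.float64)").
try:
    torch.from_numpy(vals)
except TypeError as err:
    print(err)

# torch.as_tensor accepts scalars and arrays alike, which is why
# reverting to the as_tensor-based version is the robust choice.
print(torch.as_tensor(vals, dtype=torch.float32))
```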
Since there have been changes after this comment, is there anything I can still do to assist? In any case, thanks for already implementing the necessary changes :)
I think the PR is ready as is. I'll land it after the internal checks pass. Thanks for the contribution!
Glad that I could help :)
@saitcakmak has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
@saitcakmak merged this pull request in 81c16ff.
Motivation

This PR removes the `as_tensor` argument of the `set_tensors_from_ndarray_1d` function in `numpy_utils.py`. The reason for this change is that the previous implementation broke `float32` precision: it always transformed floats into `float64` precision. This change was discussed previously with @Balandat and @esantorella in #2596.
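For context, a minimal illustration of the precision pitfall (a standalone sketch, not the PR's code): NumPy arrays default to `float64`, so a conversion without an explicit dtype silently promotes `float32` state.

```python
import numpy as np
import torch

tnsr = torch.zeros(3, dtype=torch.float32)
vals = np.ones(3)  # NumPy defaults to float64

# Without a dtype hint, the NumPy dtype leaks through...
print(torch.as_tensor(vals).dtype)                    # torch.float64

# ...whereas pinning dtype (and device) to the target tensor keeps float32.
print(torch.as_tensor(vals, dtype=tnsr.dtype).dtype)  # torch.float32
```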
Have you read the Contributing Guidelines on pull requests?
Yes, I have read them.
Test Plan

I checked the formatting using `ufmt format .` and verified that all tests still pass using `pytest -ra`.

Related PRs
This PR is related to #2597 and #2596, fixing a similar issue (`float32` being broken by some internal calculations).