change to_array routines to fix pointer issues #175
Conversation
Thanks for digging @TomMelt

When @jwallwork23 was setting this up, the argument was that if the tensor has only been created on the Torch side (as will happen during backprop etc.) then a user may want to map it back over to a Fortran array that has not yet been allocated - hence the logic required to check this and allocate if required, and the size being potentially 'unknown'. I agree we can discuss this, but we may also want @jwallwork23's input, as I am not sure how likely this situation is to arise (or not) with the autograd applications.
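To make that use case concrete, here is a minimal sketch, assuming a `torch_tensor_to_array`-style routine; the call signature and imports below are assumptions based on this discussion, not the library's documented API:

```fortran
! Sketch only: the tensor exists solely on the Torch side (e.g. a gradient
! produced during backprop) and the Fortran pointer has never been allocated.
program recover_torch_data
  use ftorch, only : torch_tensor, torch_tensor_to_array   ! assumed imports
  implicit none

  type(torch_tensor) :: grad_tensor        ! created/filled on the Torch side
  real, pointer :: data_out(:,:) => null() ! not associated before the call

  ! ... grad_tensor is produced by Torch here ...

  ! With no sizes supplied, the routine itself must discover the shape from
  ! the tensor and associate/allocate data_out accordingly - this is the
  ! "unknown size" situation described above.
  call torch_tensor_to_array(grad_tensor, data_out)

  print *, "recovered shape:", shape(data_out)
end program recover_torch_data
```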
On 1) can you clarify what you mean by "don't have much meaning", for my own understanding please? As far as I understood, `intent` has no enforcement but can be monitored by compilers.
On 2) I agree that this is an issue, and if the lines were not removed then we would still need to add an assignment of the appropriate value to `sizes`.

If you look at #174 and the corresponding logs from the CI, the shape of `out_data` is being read as 95 at ingress, even when it is explicitly passed in as rank-2 to match `sizes=2`, and this occurs before we hit this point. So my thought was that there was perhaps an issue with how `data_out` was coming in to the subroutine or being declared. I realise I didn't send you a link to this before and only linked it from the bottom of #166, apologies.

Screenshots and detail are also in this comment: #166 (comment)
After a fresh look I think the issue may arise in the line [...] that for some reason causes the [...]. I should also note that I could not generate the error on my local machine.
Yeah, I didn't do the best job of explaining...
Discussion on 14/10/2024

We discussed and agreed the following route forwards: [...]

This means that people can send a pointer and tensor to a function and get the data back to Fortran without prior knowledge/allocation of the array, as was intended by @jwallwork23, but it is done in a safer way with a bit more C++ (which may well be useful anyway).

I will now close #174 #176 #177 #178 as this has been discussed and agreed.
Before merging this PR, we need to merge: [...]
The tensor derived type now supports two methods:
- `get_rank`
- `get_shape`

These methods return the rank and shape of the tensor.
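A hypothetical usage sketch of the two methods - the method names come from this PR, but whether they are type-bound and what their exact return kinds are remain assumptions here:

```fortran
program query_tensor
  use ftorch, only : torch_tensor   ! assumed import
  implicit none

  type(torch_tensor) :: my_tensor
  integer :: tensor_rank
  integer, allocatable :: tensor_shape(:)

  ! ... my_tensor is created elsewhere (e.g. from a Fortran array) ...

  tensor_rank  = my_tensor%get_rank()
  tensor_shape = my_tensor%get_shape()   ! automatic allocation on assignment

  print *, "rank  =", tensor_rank
  print *, "shape =", tensor_shape
end program query_tensor
```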
@jatkinson1000, given comments in the related PRs (#180, #181) I plan to add a test of the new functionality to this PR to check the shape and size work as expected 👌
Thanks @TomMelt
This looks great - nice to include testing of the new routines in the integration tests for now, and I like the "tests passed successfully" workaround for the regex ;)
I agree with @jwallwork23 that we would need to be careful with a matching condition in some instances if we were manipulating data, but given that everything here is pointing at the same memory I think we are OK in this instance.
Of course, as @jwallwork23 develops autograd further this may need to be revisited, or the components of this current example may become unit tests.
This fixes an issue seen in PRs #173 and #166 where the test fails with the error: [...]
Whilst it looks like I have fixed the problem by simply removing the lines... I can assure the reviewer that I didn't do that deliberately 😉
I think I have an idea why it doesn't work, and why those lines need to be removed. It essentially stems from 2 points:
1. The `data_out` pointer is declared as `intent(out)` (`FTorch/src/ftorch.f90`, line 2415 in e0d8269). This means that the following lines don't have much meaning (`FTorch/src/ftorch.f90`, lines 2426 to 2442 in e0d8269).
2. `sizes` is marked `optional` (`FTorch/src/ftorch.f90`, line 2416 in e0d8269). But that would make the following line undefined, because if `sizes` is missing on input it could have any value or just crash (`FTorch/src/ftorch.f90`, line 2446 in e0d8269).

Both points are illustrated in the self-contained sketch below.
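A minimal, self-contained sketch (plain Fortran, not FTorch code) illustrating both points; the `to_array_demo` routine and its arguments are stand-ins for the real `to_array` routines:

```fortran
! Standalone demo (no FTorch required) of the two points above.
module to_array_demo_mod
  implicit none
contains
  subroutine to_array_demo(data_out, sizes)
    ! Point 1: with intent(out) the association status of data_out is
    ! undefined on entry, so any `associated(data_out)` guard placed here
    ! cannot be relied upon.
    real, pointer, intent(out) :: data_out(:)
    ! Point 2: sizes is optional, so it must only be referenced when
    ! present(sizes) is true; touching a missing optional is undefined.
    integer, intent(in), optional :: sizes(:)

    if (present(sizes)) then
      allocate(data_out(product(sizes)))
      data_out = 0.0
    else
      ! Without sizes (or a way to query the tensor itself, e.g. get_shape)
      ! there is nothing safe to do here.
      data_out => null()
    end if
  end subroutine to_array_demo
end module to_array_demo_mod

program demo
  use to_array_demo_mod
  implicit none
  real, pointer :: p(:) => null()

  call to_array_demo(p, sizes=[2, 3])
  print *, "associated:", associated(p), " size:", size(p)   ! T, 6
end program demo
```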
I don't see a use case (other than convenience) where you wouldn't need to know the size of your output data. It can be passed in rather easily using `shape()` etc.

Happy to discuss in our next meeting. I might be overlooking something, but I think this works (at least on my machine).
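For instance, forwarding the shape of an existing array is a one-liner for the caller; this again assumes a `torch_tensor_to_array`-style routine with an explicit `sizes` argument, which is not a verbatim copy of the FTorch interface:

```fortran
! Hypothetical caller-side wrapper: the caller already has an array of the
! expected shape, so sizes can be forwarded via the shape() intrinsic.
subroutine fetch_into(my_tensor, reference, data_out)
  use ftorch, only : torch_tensor, torch_tensor_to_array   ! assumed imports
  implicit none
  type(torch_tensor), intent(in) :: my_tensor
  real, intent(in)   :: reference(:,:)   ! array whose shape the output should match
  real, pointer      :: data_out(:,:)

  call torch_tensor_to_array(my_tensor, data_out, sizes=shape(reference))
end subroutine fetch_into
```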
Update

I have merged in 2 PRs. These resolve the issues mentioned above.

I plan to add a test to the integration test `examples/6_Autograd/autograd.f90` to check the new `rank`/`shape` functionality I have added:

- add `get_shape()` and `get_rank()` to `examples/6_Autograd/autograd.f90`
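A sketch of what that check might look like inside the example program; the tensor name `Q` and the expected rank/shape values are placeholders, not the actual values used in `autograd.f90`:

```fortran
! Hypothetical lines to add to examples/6_Autograd/autograd.f90 after the
! tensor Q has been computed.
if (Q%get_rank() /= 2) then
  write (*,*) "Error: expected a rank-2 tensor"
  stop 1
end if
if (any(Q%get_shape() /= [2, 3])) then
  write (*,*) "Error: unexpected tensor shape"
  stop 1
end if
! Printed so the CI regex can pick up a successful run.
write (*,*) "get_rank/get_shape tests passed successfully"
```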