Deletion Candidate polar_transform_utils #1015
Comments
If performance and functionality remain the same I don't care, to be honest. The main purpose of the polar transform utility functions is to map images and templates expressed in the cartesian coordinate system to the polar domain as efficiently as possible, so that a polar template can be slid across the polar image easily and quickly to match the in-plane Euler angle. At least, as efficiently as I could think of at the time; I'm convinced I could squeeze out quite a bit more performance now :). One can then sum the transformed image over the azimuthal direction, which I think is what you mean by azimuthal integration (?), but this is not the main purpose of the utilities.
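For context, a minimal sketch of this kind of interpolation-based polar unwrapping (the function name, signature, and sampling choices here are illustrative, not pyxem's actual API):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def polar_unwrap(image, center, n_theta=360, n_r=None):
    """Resample a cartesian image onto a (theta, r) grid."""
    cy, cx = center
    if n_r is None:
        # largest radius that stays fully inside the image
        n_r = int(min(cy, cx, image.shape[0] - cy, image.shape[1] - cx))
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    r = np.arange(n_r)
    # cartesian coordinates of every (theta, r) sample point
    yy = cy + r[None, :] * np.sin(theta[:, None])
    xx = cx + r[None, :] * np.cos(theta[:, None])
    return map_coordinates(image, [yy, xx], order=1)  # bilinear interpolation
```

With the image in this form, matching the in-plane angle reduces to correlating the template against cyclic shifts of the polar image along the theta axis.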
Just for reference, the new method takes ~0.3 ms to transform a 256x256 image into a polar image, but around ~70 ms for a 2kx2k image. The radial method in …
Could you point me to the function/method that replaces the …
@din14970 Sorry, azimuthal integration is a very weird way to describe these things. We are actually doing the polar unwrapping, very similar to what you are doing here, but instead of interpolating we sum the intensities from each cartesian pixel and then divide by the fraction of that cartesian pixel in the azimuthal pixel. The azimuthal integration comes in because we can also account for the Ewald sphere by stretching the azimuthal pixels. We do this quickly by pre-computing things like the fraction of each cartesian pixel in the azimuthal pixel and which cartesian pixels intersect each azimuthal pixel. As far as speed goes, I'm actually pretty confident in this approach. There are some histogramming approaches which might be slightly faster (a factor of 2?), at the added cost of being less accurate. For the most part I just want to rewrite this so it can use dask-distributed/dask-cuda, reach some degree of parity with the other method for speed, and then leave any additional performance for later.
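A toy sketch of the precompute/apply split described above. Note this stand-in uses whole-pixel (histogram-style) bin assignment, i.e. the less accurate shortcut mentioned above; the real scheme would replace the unit weights with fractional pixel overlaps and apply the Ewald-sphere stretching. All names here are hypothetical:

```python
import numpy as np

def precompute_bins(shape, center, n_theta, n_r, r_max):
    """Assign each cartesian pixel to one (theta, r) bin."""
    cy, cx = center
    yy, xx = np.indices(shape)
    r = np.hypot(yy - cy, xx - cx)
    t = np.arctan2(yy - cy, xx - cx) % (2.0 * np.pi)
    r_idx = np.clip((r / r_max * n_r).astype(np.intp), 0, n_r - 1)
    t_idx = (t / (2.0 * np.pi) * n_theta).astype(np.intp) % n_theta
    bins = (t_idx * n_r + r_idx).ravel()
    weights = np.ones(bins.size)  # stand-in for fractional pixel overlaps
    return bins, weights

def apply_bins(image, bins, weights, n_theta, n_r):
    """Weighted intensity sum per bin, normalised by the summed weights."""
    n = n_theta * n_r
    sums = np.bincount(bins, weights=weights * image.ravel(), minlength=n)
    norm = np.bincount(bins, weights=weights, minlength=n)
    return (sums / np.maximum(norm, 1e-12)).reshape(n_theta, n_r)
```

The point of the split is that `precompute_bins` runs once per geometry, so only the cheap `apply_bins` step is paid per frame.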
This is the function: `pyxem/pyxem/utils/_azimuthal_utils.py`, lines 28 to 66 at da7c8f6.
It's a bit weird because it is split into two parts. The first part precomputes a bunch of stuff: `pyxem/pyxem/utils/calibration_utils.py`, lines 250 to 266 at da7c8f6.
Edit: there is also a bit more of a speedup when using multiple threads with Numba's `parallel=True`.
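An illustrative Numba kernel showing where `parallel=True` pays off: the loop over polar bins is embarrassingly parallel, so `prange` spreads it across threads. The CSR-style `starts`/`pixel_idx`/`fractions` layout is hypothetical, not pyxem's actual data structure:

```python
from numba import njit, prange

@njit(parallel=True)
def azimuthal_apply(flat_image, starts, pixel_idx, fractions, out):
    # starts[k]:starts[k+1] delimits the cartesian pixels feeding bin k,
    # so each polar bin accumulates independently on its own thread.
    for k in prange(out.size):
        acc = 0.0
        for j in range(starts[k], starts[k + 1]):
            acc += flat_image[pixel_idx[j]] * fractions[j]
        out[k] = acc
```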
It's difficult for me to figure out exactly what this is doing or how it works just from a quick read, so as long as you can reproduce the calculations in the paper and it's faster, I have nothing against changing/replacing stuff.
@din14970 Sounds good. Once I get the entire workflow in place I can start to do some of the speed testing. I'm hopeful that we might see a little speed increase. It would also be good to run the performance metrics with %env OMP_NUM_THREADS=1, as discussed in pyxem/pyxem-demos#87.
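A sketch of what a single-threaded timing run could look like (the `bench` helper is illustrative; the commented call refers to the hypothetical `polar_unwrap` sketch from an earlier comment):

```python
import os
# Cap thread counts before importing numpy/numba so BLAS and Numba
# don't parallelise behind the benchmark's back.
os.environ["OMP_NUM_THREADS"] = "1"
os.environ["NUMBA_NUM_THREADS"] = "1"

import timeit
import numpy as np

def bench(fn, *args, repeat=20):
    """Average wall-clock time of fn(*args) in milliseconds."""
    return timeit.timeit(lambda: fn(*args), number=repeat) / repeat * 1e3

image = np.random.random((256, 256))
# e.g., with the polar_unwrap sketch in scope:
# print(f"{bench(polar_unwrap, image, (128, 128)):.2f} ms")
```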
Just noting here that I'm not planning to touch this during my sweep of the utils. |
@pc494 This is probably good. I'd prefer removing this after the 0.18.0 release and before the 1.0.0 release.
Description

I think that we should look into profiling `polar_transform_utils` against the new method for azimuthal integration, to see if there is a large enough speed gain to justify having multiple methods for the same thing. In any case, this module should be rewritten to use `center_direct_beam` and be a function of `get_azimuthal2d` rather than how it currently exists.

@din14970 any thoughts or reasons against this?