optimize computation and memory #43
Conversation
Looks good. Maybe @ebrondol wants to comment on this.
Did you measure if there is any improvement? I would guess not much, since the copies were in the outer parts of the for loops, unless there is a huge number of points (?)
Hi @jmcarcell, I am currently testing the PR. The preliminary computing-time performance looks much better than before, so I am confident it can be merged soon.
Well, if you copy all the points in a layer every time, the overhead of creating the tiles becomes bigger than the clustering itself, and you end up in the paradox where coarser tiling gives better performance.
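For illustration only, here is a minimal C++ sketch of the pattern being discussed; the type and function names are hypothetical and are not the actual code of this repository. If the tile-filling step copies every point of a layer, its cost grows with the number of points and can dominate the clustering step itself; storing indices instead avoids the copies.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Hypothetical point type, for illustration only.
struct Point { float x, y; };

// Hypothetical 1D tiling along x, just to keep the sketch self-contained.
constexpr int kNTiles = 10;
constexpr float kTileWidth = 1.0f;

int tileIndex(float x) {
  int bin = static_cast<int>(std::floor(x / kTileWidth));
  return std::min(std::max(bin, 0), kNTiles - 1);
}

// Costly variant: each tile stores full copies of its points, so filling the
// tiles duplicates the whole layer in memory and the overhead grows with the
// number of points; it can exceed the cost of the clustering loop itself.
std::vector<std::vector<Point>> fillTilesByCopy(const std::vector<Point>& layerPoints) {
  std::vector<std::vector<Point>> tiles(kNTiles);
  for (const Point& p : layerPoints)
    tiles[tileIndex(p.x)].push_back(p);  // copies the point
  return tiles;
}

// Cheaper variant: each tile stores only indices into the layer's point
// vector, so no point data is copied and the clustering loop can still reach
// the coordinates through layerPoints[idx].
std::vector<std::vector<int>> fillTilesByIndex(const std::vector<Point>& layerPoints) {
  std::vector<std::vector<int>> tiles(kNTiles);
  for (std::size_t i = 0; i < layerPoints.size(); ++i)
    tiles[tileIndex(layerPoints[i].x)].push_back(static_cast<int>(i));  // stores only an index
  return tiles;
}
```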
The changes align with the expected improvement of about a factor of 50-100 in computing time.
The tests I ran locally worked out fine.
Thank you, Erica. Sure, I'm not in a hurry to have it merged.
Just for reference, I have produced the timing plots again with the latest commits from @felicepantaleo:
I have tested this PR extensively with other detectors and I think it can be merged even before the CI is fixed (still in progress and being monitored in #44).
This PR should reduce the overhead of looping over the tiles by avoiding multiple copies.
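As a hedged illustration of the kind of change described here (not the repository's actual code): when iterating over the tiles inside the clustering loops, taking the tile contents by value copies them on every iteration, while taking them by const reference avoids the copies entirely.

```cpp
#include <vector>

// Hypothetical tile container: one vector of point indices per tile.
using Tile = std::vector<int>;
using TileGrid = std::vector<Tile>;

// Copying variant: `auto tile` makes a full copy of every tile's index
// vector on each pass over the grid; repeated for every query point, these
// copies can dominate the run time.
int countEntriesWithCopies(const TileGrid& grid) {
  int n = 0;
  for (auto tile : grid)         // copies each tile
    n += static_cast<int>(tile.size());
  return n;
}

// Copy-free variant: `const auto&` binds a reference to the existing tile,
// so looping over the grid only reads the data already in memory.
int countEntriesWithoutCopies(const TileGrid& grid) {
  int n = 0;
  for (const auto& tile : grid)  // no copy
    n += static_cast<int>(tile.size());
  return n;
}
```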