
Speed of "Precomputing fused input images" #115

Open
SternTomerGit opened this issue May 26, 2022 · 0 comments

Hi!

I'm a regular user of this fantastic toolbox.

I'm running on a powerful GPU node with four GPUs, each with 40 GB of memory (160 GB total).
Each deconvolved image has either four or six views of roughly 2k × 1k × 200 voxels stored as uint16, which are fused into a single image of ~4 GB.
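For reference, a quick back-of-the-envelope check of the raw input volume (assuming ~2000 × 1000 × 200 voxels per view; the exact dimensions in my data vary slightly):

```python
# Approximate memory footprint of the raw views before fusion.
# Assumed dimensions: ~2000 x 1000 x 200 voxels per view, uint16 (2 bytes).
voxels_per_view = 2000 * 1000 * 200
bytes_per_view = voxels_per_view * 2  # uint16

# Worst case here: six views of raw input data.
total_input_gb = 6 * bytes_per_view / 1e9
print(f"per view: {bytes_per_view / 1e9:.1f} GB, six views: {total_input_gb:.1f} GB")
# -> per view: 0.8 GB, six views: 4.8 GB
```

So even the six-view case fits comfortably in a single 40 GB GPU, which is why the slowdown surprises me.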

When the "Precomputing fused input images" step runs, it is either very fast (~20 seconds) or very slow (~30 minutes). The fast case occurs only for the first image in the time-lapse; all remaining images (i.e. time points 2..end) are invariably slow.

Since the deconvolutions are computed independently per time point, my suspicion is that GPU memory is not being flushed between them.
Is there a way to check this, or could you suggest a simple fix?
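To check the flush hypothesis from outside the toolbox, one option is to poll per-GPU memory usage between time points. A minimal sketch, assuming NVIDIA's `nvidia-smi` CLI is on the PATH (the function name and approach are my own, not part of the toolbox):

```python
# Hypothetical diagnostic: sample per-GPU used memory between time points.
# If the reported usage only grows from one time point to the next, memory
# from earlier deconvolutions is likely not being released.
import subprocess

def gpu_memory_used_mib():
    """Return the used memory (MiB) of each visible GPU, or [] if
    nvidia-smi is unavailable."""
    try:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=memory.used",
             "--format=csv,noheader,nounits"],
            capture_output=True, text=True, check=True,
        ).stdout
    except (FileNotFoundError, subprocess.CalledProcessError):
        return []  # no NVIDIA driver or nvidia-smi not installed
    return [int(line) for line in out.splitlines() if line.strip()]

# Call once before and once after each time point and compare:
print(gpu_memory_used_mib())
```

If the numbers climb monotonically across time points, that would support the memory-flush theory.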

Many thanks!
Tomer
