SkeletonMergeTask occupies too much memory #126
Hi Jackie! The sharded format is a little harder to run because you need to consider how the large files are downloaded, filtered, and aggregated. Here are some tips for making this easier:
```python
from cloudvolume import CloudVolume

cv = CloudVolume(path)
# Dump the skeleton spatial index into a local sqlite database for inspection.
cv.skeleton.spatial_index.to_sql("spatial_index.db")
```

It might give me some more insight if you can share a representative list of the fragment files with their sizes in bytes and the number of merge tasks generated.
Sorry, I just realized you inserted and executed the tasks in the same script. Usually I do it in two steps, so I confused myself. Getting an idea of the number of tasks and the sizes of the fragments will help a lot. I might be able to recommend some tweaks to the shard parameters.
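For example, something along these lines (a sketch only; the keyword names vary between igneous versions, so check the signature of `create_sharded_skeleton_merge_tasks` in yours):

```python
# Sketch: sharded merge task creation with the knobs most relevant to memory.
# Parameter values are placeholders, and keyword names are assumptions
# based on recent igneous versions.
import igneous.task_creation as tc

tasks = tc.create_sharded_skeleton_merge_tasks(
    "gs://bucket/dataset",    # placeholder cloudpath
    dust_threshold=1000,      # discard tiny skeleton fragments
    max_cable_length=50000,   # clip pathologically long skeletons
)
```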
I just added
Thanks for your reply! I am looking into the spatial index database.
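Since the exact table layout written by `to_sql` can vary with the cloud-volume version, a quick way to explore it is to dump the schema first:

```python
import sqlite3

# Inspect the database produced by cv.skeleton.spatial_index.to_sql().
# Table names and columns depend on the cloud-volume version, so list them first.
con = sqlite3.connect("spatial_index.db")
cur = con.cursor()

for (name,) in cur.execute("SELECT name FROM sqlite_master WHERE type='table'"):
    print(name)
    for row in cur.execute(f"PRAGMA table_info({name})"):
        print("  ", row)

con.close()
```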
Besides, I have tried a few more things and found that even when I set:

memory usage still spiked suddenly at some point in ShardedSkeletonMergeTask.process_skeletons().
Interesting. That says to me that the skeletons themselves are very large... but you clipped them to <50000. You can try changing the settings for that. This is still really weird though. If this were on my machine, what I would do is start tracing the merge task to find out where all the memory usage was going using a memory profiler.
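One low-effort option is Python's built-in tracemalloc; a minimal sketch (the commented call is a stand-in for however you actually invoke the merge step):

```python
import tracemalloc

tracemalloc.start()

# ... run the memory-hungry step here, e.g. the merge task's
# process_skeletons() call (stand-in; invoke it however you normally do) ...

snapshot = tracemalloc.take_snapshot()
for stat in snapshot.statistics("lineno")[:10]:
    print(stat)  # top 10 allocation sites by total size
```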
This makes a lot more sense to me. My skeletons have been (mostly) well behaved, and I was able to screen out extremely large mergers while sparing the rest. If you can send me your fragment files, I might be able to do some debugging to figure out where that memory spike is coming from (I won't share them or use them for any other purpose). It might take me a bit to get to it though. If you figure out which messy segments are causing the problem, you can try filtering them out specifically by editing the merge code, along the lines of the sketch below. That might be the best approach.
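Such a filter might look something like this (a sketch only; the threshold, the segment IDs, and where you hook it into the merge code are all up to you):

```python
# Hypothetical pre-merge filter: drop known-problematic segment IDs and
# pathologically large fragments before they get aggregated.
MAX_VERTICES = 1_000_000      # assumed cutoff; tune for your data
BAD_SEGIDS = {12345, 67890}   # placeholder IDs of known messy segments

def drop_problem_fragments(fragments):
    """Return only skeleton fragments safe to merge."""
    return [
        skel for skel in fragments
        if skel.id not in BAD_SEGIDS and len(skel.vertices) <= MAX_VERTICES
    ]
```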
OK, I will send you my fragment files. Next, I am going to optimize the images and segments. Anyway, it's a pleasure to talk to you!
Thank you! Looking forward to the frag files! |
Hi, @william-silversmith
I have tried skeletonizing a 100^3 um^3 EM stack with Igneous.
Unfortunately, the memory usage exceeds 350 GB during the second pass, the SkeletonMergeTask.
That is not feasible on our server ...
Our skeletonization code:
Our stack configuration (skeletonized at mip=2, [40, 40, 40]):
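For reference, a minimal two-pass Igneous run looks roughly like the sketch below; the cloudpath and parameter values are placeholders rather than our exact configuration:

```python
# A minimal two-pass Igneous skeletonization run. The cloudpath and
# parameter values are placeholders, not the actual configuration.
from taskqueue import LocalTaskQueue
import igneous.task_creation as tc

cloudpath = "file:///data/my_stack"  # placeholder
tq = LocalTaskQueue(parallel=4)

# Pass 1: skeletonize each chunk into per-chunk fragments.
tasks = tc.create_skeletonizing_tasks(
    cloudpath, mip=2,
    teasar_params={"scale": 4, "const": 500},  # example values only
)
tq.insert(tasks)
tq.execute()

# Pass 2: merge each segment's fragments (the memory-hungry step).
tasks = tc.create_unsharded_skeleton_merge_tasks(cloudpath)
tq.insert(tasks)
tq.execute()
```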
I don't think it should need this much memory.
Could you please tell me how to run it without using so much memory?
Many thanks!