Nucleus channel segmentation #6

Open
tischi opened this issue Apr 8, 2020 · 8 comments
Labels: enhancement

Comments

tischi commented Apr 8, 2020

@Mava04 did @k-dominik get back to you about the feasibility of segmenting the nuclei with ilastik?

I think other options could be CellPose or StarDist (for both we would still have to figure out how to launch them from ImageJ). But feel free to try them anyway if you have some time!
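For reference, a minimal sketch of how both could be tried from Python instead of ImageJ (this assumes the `cellpose` and `stardist` packages are installed; exact APIs may differ between versions, and the file name is a placeholder):

```python
from tifffile import imread
from csbdeep.utils import normalize
from stardist.models import StarDist2D
from cellpose import models

# Placeholder file name: a single 2D plane of the nucleus channel.
img = imread("nuclei_plane.tif")

# StarDist with a registered pretrained 2D model.
stardist_model = StarDist2D.from_pretrained("2D_versatile_fluo")
stardist_labels, _ = stardist_model.predict_instances(normalize(img, 1, 99.8))

# CellPose with its built-in "nuclei" model (grayscale input).
cellpose_model = models.Cellpose(model_type="nuclei")
cellpose_masks, flows, styles, diams = cellpose_model.eval(
    img, diameter=None, channels=[0, 0]
)

print("StarDist nuclei:", stardist_labels.max())
print("CellPose nuclei:", cellpose_masks.max())
```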

tischi added the enhancement label on Apr 8, 2020

Mava04 commented Apr 8, 2020

@tischi not yet. Thank you for the suggestion, I will try them and let you know

Mava04 commented May 14, 2020

Hi @tischi,
Just a quick update on nuclei segmentation:

  • CellPose gives very poor predictions
  • StarDist would be easier to use: it recognises each nucleus as a ROI both in 2D and in 3D and it separates the nuclei well, but the demo ground truth is not suitable for our nuclei (in 2D it performs better than in 3D, but we are still far from a good recognition). I am generating my own ground truth now and will see how it performs. StarDist with its more advanced options is also less intuitive than Ilastik and takes some time to learn, but at least there is a nice Jupyter notebook which can be readily adapted. For clearer nuclei, I think it may well be the best choice; moreover, for 2D images it is available as a Fiji plugin and is very intuitive.
  • Ilastik performs best at recognizing nuclei quickly in 2D, but the edges are not sharp and it might be hard to fill in the black spots in the middle without clear edges.

Finally, here are my next steps:

  1. annotate more images to generate the ground truth
  2. see how StarDist works with my ground truth and, if needed, expand the training set (a minimal training sketch is included after this list).
    If it does not work, I will try Ilastik again, possibly with the same ground truth (it is made slice by slice, so it can be used in 2D too).
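A sketch of what step 2 could look like in Python, following the structure of the StarDist example notebook (the folder layout, split, and parameters here are assumptions, not the actual setup):

```python
from glob import glob
from tifffile import imread
from csbdeep.utils import normalize
from stardist.models import Config2D, StarDist2D

# Hypothetical folder layout: matching raw/label pairs, one 2D image per file.
X = [normalize(imread(f), 1, 99.8) for f in sorted(glob("ground_truth/images/*.tif"))]
Y = [imread(f) for f in sorted(glob("ground_truth/masks/*.tif"))]

# Simple split: last two pairs kept for validation.
X_trn, Y_trn, X_val, Y_val = X[:-2], Y[:-2], X[-2:], Y[-2:]

conf = Config2D(n_rays=32, grid=(2, 2), use_gpu=False)  # a GPU is strongly recommended
model = StarDist2D(conf, name="nuclei_2d_custom", basedir="models")
model.train(X_trn, Y_trn, validation_data=(X_val, Y_val), epochs=100)
model.optimize_thresholds(X_val, Y_val)
```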

All ideas are welcome! I will let you know how it goes, it may take a while.

tischi changed the title from "Nucleus channel segmentation in ilastik" to "Nucleus channel segmentation" on May 18, 2020

tischi commented May 18, 2020

Great! I think this makes a lot of sense! I would also have recommended training StarDist with dedicated ground truth from your own data!

  1. Do I understand correctly that StarDist does not run in 3D in Fiji?
  2. Could you please post here a download link to an example image of your nuclei?
  3. Do your nuclei overlap in 3D? If you only have a single cell layer, you could just train on the central plane in 2D and then extend this along the z-axis (I could integrate this into our Fiji plugin; see the sketch after this list).
  4. Did you already watch this: https://www.youtube.com/watch?v=Amn_eHRGX5M ?
  5. On which computer will you run the training? You need a GPU... I think you could use a VM from the ALMF...
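Regarding point 3, a sketch of the "extend along z" idea (assuming numpy and a single cell layer; `labels_2d` and `n_z` are hypothetical names):

```python
import numpy as np

def extend_labels_along_z(labels_2d: np.ndarray, n_z: int) -> np.ndarray:
    """Replicate a 2D label image (e.g. from StarDist on the central plane)
    across all z-slices, giving a 3D label stack of shape (n_z, y, x)."""
    return np.repeat(labels_2d[np.newaxis, ...], n_z, axis=0)
```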

Mava04 commented May 20, 2020

Hi Tischi,

  1. Yes, you got it right.
  2. I am sharing a folder on ownCloud with the stacks. I suggest looking at "Series003_.._raw.tif" first.
    • Series003 is the whole image of the nuclei channel from embryo 3
    • Series001 and Series004 are selections from other embryos
    • "raw" is the original image
    • "label" is the segmentation
    • "LabkitStardist" means that I annotated the nuclei myself using the Labkit plugin (I started looking at one slice only after your last message)
    • "Stardist" means that I ran the standard StarDist from ImageJ with overlap=0, using the pretrained versatile (fluorescence) model. I did it on the whole image and then took a selection that looked good and saved it (a slice-by-slice prediction sketch is included after this list).
  3. Nuclei do not overlap, generally. Great idea, it would make things much faster!
  4. Thank you for the suggestion; I watched it and it helped me better understand some parts that were not so clear in the Jupyter notebook.
  5. Good question. My laptop could barely handle the demo... I think using a VM from the ALMF is the only way. Do you know if I need specific training to get access?
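Since the 3D variant is not available in Fiji, a slice-by-slice 2D prediction on a stack could look roughly like this (a sketch only; the file names are placeholders for the shared stack, and label IDs are not linked between slices):

```python
import numpy as np
from tifffile import imread, imwrite
from csbdeep.utils import normalize
from stardist.models import StarDist2D

stack = imread("Series003_raw.tif")                   # placeholder name, assumed shape (z, y, x)
model = StarDist2D.from_pretrained("2D_versatile_fluo")

labels = np.zeros(stack.shape, dtype=np.uint16)
for z, plane in enumerate(stack):
    lbl, _ = model.predict_instances(normalize(plane, 1, 99.8))
    labels[z] = lbl                                   # note: IDs are per-slice only

imwrite("Series003_stardist2d_labels.tif", labels)
```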

Mava04 commented May 20, 2020

Please let me know if you can access the files

tischi commented Jun 2, 2020

> Good question. My laptop could barely handle the demo... I think using a VM from the ALMF is the only way. Do you know if I need specific training to get access?

No, I can just give you booking rights. But you cannot install anything on the VM, so I am not sure it will be helpful, because I think you would need a couple of Python packages, right? I would have to discuss with my colleagues how to support you best in this regard!
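For reference, this is the kind of Python environment a StarDist training notebook would typically need (the package list below is an assumption, not a confirmed requirement); a quick way to check what is already available on a machine:

```python
import importlib

# Assumed package list for StarDist training from a Jupyter notebook.
for pkg in ["numpy", "tifffile", "csbdeep", "stardist", "tensorflow"]:
    try:
        importlib.import_module(pkg)
        print(f"{pkg}: available")
    except ImportError:
        print(f"{pkg}: missing")
```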

I also looked at the data.

  1. Biology: Should this be a cell monolayer or can there be cells growing on top of each other?
  2. Microscope: You are losing signal very quickly along the z-axis. Could this be improved? Which microscope are you using? (A quick way to quantify the attenuation is sketched after this list.)
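A minimal sketch to quantify the signal drop along z (assuming tifffile/numpy; the file name is a placeholder for the shared stack):

```python
import numpy as np
from tifffile import imread

stack = imread("Series003_raw.tif")   # placeholder name, assumed shape (z, y, x)
for z, plane in enumerate(stack):
    # Mean and 99th-percentile intensity per slice as a rough z-profile.
    print(f"z={z:3d}  mean={plane.mean():10.1f}  p99={np.percentile(plane, 99):10.1f}")
```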

Mava04 commented Jun 2, 2020

As for the Python packages, you are right.

For the data:

  1. It's the ventral side of a Drosophila embryo; the aim was to pick the outermost layer, but the cells do not lie on a single plane and are tilted. Beneath this layer there are more cells, which we do not want to take into account.

  2. I have to admit I have not imaged the embryos myself yet; I hope to start soon. The embryos were imaged on the Leica SP8 STED (used just for conventional fluorescence microscopy, because of the sensitivity of the detector; I heard it is now also installed on the Leica SP8 DLS). As for strategies to minimize signal loss along the z-axis, I will first ask Gabriel, who did the imaging, and then come back to you.

Mava04 commented Jun 4, 2020

Update on answer 2:
at least for the outermost layer of cells, the loss of signal seemed reasonable at first, so we did not try anything to improve it yet. We could try incubating with DAPI for longer or increasing the laser power (for the current images, 5-10% was used).

I was also wondering how we could make the signal more uniform within a nucleus, to make segmentation easier, as with the built-in pretrained model the bright spots tend to be recognised as single nuclei. Do you think that selecting a narrow emission range might help? (One image-side option is sketched below.)
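A minimal preprocessing sketch for this, independent of any acquisition changes, assuming scikit-image is available; the smoothing and CLAHE parameters are guesses that would need tuning:

```python
import numpy as np
from skimage import exposure, filters

def preprocess_plane(plane: np.ndarray) -> np.ndarray:
    """Blur sub-nuclear bright spots and equalize contrast locally so that
    nuclei appear more uniform to a pretrained model (sigma/clip_limit are guesses)."""
    smoothed = filters.gaussian(plane, sigma=2, preserve_range=True)
    rescaled = exposure.rescale_intensity(smoothed, out_range=(0.0, 1.0))
    return exposure.equalize_adapthist(rescaled, clip_limit=0.01)
```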
