Allow controlling intensity of RandomMotion transform #765
Comments
Well, controlling motion artefact severity is a difficult problem, because the exact same motion (say, a short translation of x mm) will induce very different artefacts depending on when the motion occurs. About the intensity of the motion transform: the angle and the translation are directly related to it. I am currently working on quantification of motion artefact severity, and having an estimation from the motion time course would be nice, but I have not found a simple way yet... Maybe a better alternative is to compute a difference metric (L1, L2, NCC, ...) between the image before and after the motion artefact, and use that metric to approximate the artefact severity.
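A minimal sketch of that difference-metric idea, using plain numpy. The function name `severity_metrics` is mine, not a torchio API; it just compares an image before and after corruption with L1, L2, and normalized cross-correlation as rough severity proxies:

```python
import numpy as np

def severity_metrics(original: np.ndarray, corrupted: np.ndarray) -> dict:
    """Simple difference metrics between an image before and after an
    artifact, as a rough proxy for artifact severity."""
    diff = corrupted.astype(np.float64) - original.astype(np.float64)
    l1 = np.abs(diff).mean()              # mean absolute difference
    l2 = np.sqrt((diff ** 2).mean())      # root-mean-square difference
    # Normalized cross-correlation: 1.0 means identical up to offset/scale
    a = original - original.mean()
    b = corrupted - corrupted.mean()
    ncc = (a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return {'L1': l1, 'L2': l2, 'NCC': ncc}

rng = np.random.default_rng(0)
clean = rng.random((32, 32))
noisy = clean + 0.1 * rng.standard_normal((32, 32))
print(severity_metrics(clean, clean))  # zero L1/L2, NCC ~= 1
print(severity_metrics(clean, noisy))  # nonzero L1/L2, NCC < 1
```

A mapping from such a metric to a 0-10 severity score would still need calibrating on real corrupted images.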
This answer itself is useful too. I guess my formula for converting corruption into a 0-10 scale is generally OK. It might need some fine-tuning.
I do not agree: the strongest artefact appears at the k-space center, so in the middle... and actually it is not the motion onset that is important, but the motion duration (so the difference time[i+1] - time[i]).
I thought I would only have one motion, instead of multiple. I think I am satisfied with how I handle artificial ghosting: ghosting = CustomGhosting(p=0.3, intensity=(0.2, 0.8)). What would be the most similar way to handle motion?
OK, I see. With only one motion (num_transforms=1) I would still take ... Unfortunately, motion cannot be made similar to the other artefacts...
What happens if there is only 1 motion? Does it implicitly end at time=1? So motion around t=0.5 has the greatest effect? How would the effect be quantified? For example, is motion with time=[0.1, 0.2] two times or five times less noticeable than motion with time=[0.45, 0.55] (assuming everything else is equal)? I cannot explore it well using the Slicer plugin like I can for Ghosting. Hence I ask for ...
1 motion means one change, so 2 positions, which are averaged over [0, t] and [t, 1] (2 motions means 3 positions: [0, t1], [t1, t2], [t2, 1], ...). For the Slicer plugin I don't know (but the ...
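As I understand the explanation above, the times split [0, 1] into the intervals over which each position is held. A tiny pure-Python sketch of that bookkeeping (`motion_intervals` is a hypothetical helper for illustration, not part of torchio):

```python
def motion_intervals(times):
    """Given sorted motion onset times in (0, 1), return the intervals
    over which each of the num_transforms + 1 positions is held.
    The first interval starts at t=0 and the last ends at t=1."""
    bounds = [0.0, *times, 1.0]
    return list(zip(bounds[:-1], bounds[1:]))

print(motion_intervals([0.5]))       # [(0.0, 0.5), (0.5, 1.0)]
print(motion_intervals([0.3, 0.7]))  # [(0.0, 0.3), (0.3, 0.7), (0.7, 1.0)]
```

The duration of each interval (time[i+1] - time[i]) is what drives the artefact strength, per the comment above, not the onset alone.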
I coded this transform a long time ago reading Richard Shaw's paper. My version is a bit simplified, but works. I am now away at a conference, but I'll try to add some explanations to the docs when I'm back. For now, maybe you can just use a convex combination of the original image and the transformed one:

```python
import torch
import torchio as tio


class MyRandomMotion(tio.RandomMotion):
    def __init__(self, *, intensity, **kwargs):
        self.intensity = intensity
        super().__init__(**kwargs)

    def apply_transform(self, subject):
        transformed = super().apply_transform(subject)
        for image_name in self.get_images_dict(subject):
            original = subject[image_name]
            new = transformed[image_name]
            alpha = self.intensity
            # Convex combination: alpha=0 keeps the original image,
            # alpha=1 keeps the fully motion-corrupted one
            composite_data = new.data * alpha + original.data * (1 - alpha)
            transformed[image_name].set_data(composite_data)
        return transformed


fpg = tio.datasets.FPG()
seed = 42

transform = MyRandomMotion(intensity=0)
torch.manual_seed(seed)
transform(fpg).t1.plot()

transform = MyRandomMotion(intensity=1)
torch.manual_seed(seed)
transform(fpg).t1.plot()
```
If you like this approach, we can add this behavior to ...
@fepegar would it be possible to add the ...
Adding alpha-blending is a simple and effective way of controlling intensity. Its place is in the Motion transform, so the user only needs to pass the right parameter, and the right range of parameters, to RandomMotion. Adding the non-random transforms to the Slicer plugin would be useful for exploring the effects of parameters.
Full results of my initial attempt to use ghosting and motion are now in: |
Awesome 💯 Happy to help if needed. I'll add the intensity kwarg soon. |
I am not a big fan of this intensity kwarg because it is not realistic with regard to the MRI acquisition process. @fepegar, would it be easy to add the Motion transform to the Slicer plugin? (This would answer the initial need for exploration.) More generally, it may be interesting for other transforms too (i.e., not the Random versions). About motion, @dzenanz, be aware that this transformation can also induce some misalignment with the original volume, so depending on your application it may or may not be a problem... (What is your application?)
It would be easy, yes. But it would take a bit of time, which I don't really have now. Feel free to open a PR! |
🚀 Feature
Motivation
The intensity of the RandomMotion transform seems to mostly depend on the "times" of the motion. While the RandomGhosting transform exposes an `intensity` parameter, RandomMotion does not expose the `times` parameter and does not have `intensity`.

Pitch

Either provide an `intensity` parameter, or allow setting the range of the `times` parameter used internally.

Alternatives
Save the image before the transform is applied, then "manually" blend the transformed one into the original one with custom weight, thus emulating intensity.
Additional context
I am trying to augment training by creating bad images for an image quality estimator, because most images in my training set are good. I would like to control the degree of corruption, e.g. to have control whether I produce an image with rating of 1/10 or 4/10.