Wrapper dev #77
base: main
Conversation
…orithms as well. Including slight modification to testing script
Generally looking good, though I haven't looked at it in much detail. Changing to a dict is going to break stuff, so that'll be painful but probably useful. It seems you have a conflict in OsipiBase.py that needs resolution. The test failures are from dipy and should be resolved now or soon: dipy/dipy#3299
#fit = list(self.ivim_fit(*args, **kwargs))
#results[ijk] = fit

for ijk in tqdm(np.ndindex(data.shape[:-1]), total=np.prod(data.shape[:-1])):
I've been thinking about parallelizing the fitting and have something similar running as a separate script. I'm thinking that this structure is fine for now but will be too rigid in the long run. I think we'd want each algorithm to determine what is input into each step; perhaps some combine all directions in one input, for example. Perhaps there's a generic generator that does essentially this but can be overridden by the algorithm to supply data as it sees fit.
I should just finish up my changes and push them; maybe it would make more sense to have it all in context.
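For what it's worth, a minimal sketch of the overridable-generator idea under discussion. The name `data_generator` and the exact `osipi_fit` signature here are assumptions for illustration, not the current OsipiBase API:

```python
import numpy as np
from tqdm import tqdm

class GeneratorSketch:
    """Hypothetical sketch: the base class supplies a default per-voxel
    generator, and an algorithm can override it to batch data differently
    (e.g. all directions of a voxel in one yield)."""

    def data_generator(self, data):
        # Default: yield one (index, signal) pair per voxel.
        for ijk in np.ndindex(data.shape[:-1]):
            yield ijk, data[ijk]

    def osipi_fit(self, data, bvalues):
        results = {}
        total = int(np.prod(data.shape[:-1]))
        for ijk, signal in tqdm(self.data_generator(data), total=total):
            # ivim_fit is supplied by the concrete algorithm subclass.
            results[ijk] = self.ivim_fit(signal, bvalues)
        return results
```

An algorithm that wants, say, all directions of a voxel in one input would then just override `data_generator`.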
I was working on fixing the notebook yesterday evening and ran into the volume being too large: fitting would take an hour if I didn't use Paulien and Merel's selective indexing (I tried to avoid the 1D data reformatting). I agree that a loop like this would be a good place for some parallelization, although I'd rather not be the one doing it, as it has been a great source of headaches in the past :)
But regarding multi-directional data, the only way I've seen and handled such data is as a long list of b-values with separate header info to discriminate the directions, and it is sort of the easiest way to do it. I think DIPY essentially forces you to do it that way to avoid these formatting issues.
I'm gonna push my fixes, and we could hold off on this merge until you've pushed yours?
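For illustration only (made-up numbers), the "long list of b-values plus header info" layout mentioned above might look like this:

```python
import numpy as np

# Three directions, four b-values each, flattened into one long list;
# a parallel array of gradient directions serves as the header info.
bvalues = np.array([0, 50, 100, 800] * 3)
bvecs = np.array([[1, 0, 0]] * 4 + [[0, 1, 0]] * 4 + [[0, 0, 1]] * 4)

# A direction-agnostic fit consumes (bvalues, signal) pairs directly;
# direction-aware code can group acquisitions by rows of bvecs.
```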
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Hey! My commits already do parallel fits. I'm not sure whether it is the most efficient implementation, but it is a simple one.
TF2.4_IVIM-MRI_CodeCollection/src/original/OGC_AmsterdamUMC/LSQ_fitting.py
Lines 87 to 110 in 44051e4
# Excerpt from LSQ_fitting.py; relies on the file's earlier imports
# (numpy as np, tqdm, and joblib's Parallel/delayed).
# Try parallel computing; if it fails, fall back to a single core.
single = False
if njobs > 2:
    try:
        # define the function to run in parallel over voxels
        def parfun(i):
            return fit_segmented(bvalues, dw_data[i, :], bounds=bounds, cutoff=cutoff, p0=p0)
        output = Parallel(n_jobs=njobs)(delayed(parfun)(i) for i in tqdm(range(len(dw_data)), position=0, leave=True))
        Dt, Fp, Dp = np.transpose(output)
    except Exception:
        # if the parallel run fails, retry on a single core
        single = True
else:
    # or, if njobs is too low, go straight to a single core
    single = True
if single:
    # initialize empty parameter arrays
    Dp = np.zeros(len(dw_data))
    Dt = np.zeros(len(dw_data))
    Fp = np.zeros(len(dw_data))
    for i in tqdm(range(len(dw_data)), position=0, leave=True):
        # fill arrays with fit results on a per-voxel basis
        Dt[i], Fp[i], Dp[i] = fit_segmented(bvalues, dw_data[i, :], bounds=bounds, cutoff=cutoff, p0=p0)
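For anyone unfamiliar with the pattern, here is the joblib part in isolation, as a self-contained toy rather than the repo's code; `toy_fit` is a hypothetical stand-in for `fit_segmented`:

```python
import numpy as np
from joblib import Parallel, delayed

def toy_fit(signal):
    # Stand-in for fit_segmented: returns three per-voxel scalars.
    return signal.mean(), signal.min(), signal.max()

dw_data = np.random.rand(1000, 10)  # 1000 voxels, 10 b-values
output = Parallel(n_jobs=4)(delayed(toy_fit)(dw_data[i, :]) for i in range(len(dw_data)))
a, b, c = np.transpose(output)      # unpack into three length-1000 arrays
```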
So I'd say this is ready to be merged once we've done two things:
I think the remaining changes are really small. See comments below.
I think that's here: https://github.com/OSIPI/TF2.4_IVIM-MRI_CodeCollection/blob/main/WrapImage/nifti_wrapper.py#L112
I think there's this: https://github.com/OSIPI/TF2.4_IVIM-MRI_CodeCollection/blob/main/tests/IVIMmodels/unit_tests/test_ivim_synthetic.py#L40
Thanks, Eric! I'd say this can be merged now. The basic tests work, the ivim_synthetic test works, Paulien's notebook works, and the nifti_wrapper seems to pass as well. Once this is in, all other PRs and currently active branches can pick up the changes and adapt any usage of the algorithms accordingly.
Looks good to me; I think we're good to merge. Some issues may come up at some point, but we can fix those later.
A couple of new things:
An osipi_fit_full_volume method has been added to OsipiBase. This method passes the entire array to the standardized version of the fitting algorithm, for those algorithms that require it. For it to work, the method ivim_fit_full_volume has to be defined in the standardized version; see LU_IAR_biexp as an example.
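As a rough sketch only (the parameter key names and exact signature are illustrative assumptions, not necessarily what OsipiBase expects):

```python
import numpy as np

class ExampleFullVolumeWrapper:
    """Hypothetical standardized wrapper exposing a full-volume fit,
    in the spirit of LU_IAR_biexp."""

    def ivim_fit_full_volume(self, data, bvalues):
        # data: 4D array (x, y, z, b); bvalues: 1D array of b-values.
        # A real wrapper would call the underlying algorithm on the whole
        # volume here; this placeholder just returns zero parameter maps.
        shape = data.shape[:-1]
        return {"f": np.zeros(shape), "Dp": np.zeros(shape), "D": np.zeros(shape)}
```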
I've changed the outputs of each algorithm to a dictionary instead of an array. This helps in cases where we try different IVIM models: a dictionary with defined keys is more robust than keeping track of array indices. I've made the relevant adjustments to every standardized algorithm to date, and the required adjustment to the testing function in test_ivim_fit.py.
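The upshot of the dictionary change, sketched with illustrative key names and made-up values:

```python
# Before: positional outputs; callers had to remember the ordering.
#   f, Dp, D = fit[0], fit[1], fit[2]
# After: keyed outputs; adding a parameter for a new IVIM model
# doesn't silently shift everyone's indices.
fit = {"f": 0.12, "Dp": 0.025, "D": 0.0009}
print(fit["D"])
```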
Finally, for some reason, this code doesn't pass the tests here on GitHub, and I don't really understand the errors; it seems to be my submissions specifically that fail, although it all works fine on my machine running Python 3.11.5.