
Wrapper dev #77

Open. IvanARashid wants to merge 23 commits into base: main
Conversation

@IvanARashid (Contributor) commented Aug 14, 2024

A couple of new things:

  1. An osipi_fit_full_volume method has been added to OsipiBase. This method passes the entire data array to the standardized version of the fitting algorithm, for those methods that require it. For it to work, the method ivim_fit_full_volume has to be defined in the standardized version. See LU_IAR_biexp as an example.

  2. The output of each algorithm is now a dictionary instead of an array. This helps in cases where we try different IVIM models: a dictionary with defined keys is more robust than keeping track of array indices. I've made the relevant adjustments to every standardized algorithm to date, and the required adjustment to the testing function in test_ivim_fit.py.

  3. Finally, for some reason this code doesn't pass the tests here on GitHub, and I don't really understand the errors. It seems to be my submissions specifically that fail, although everything works fine on my machine running Python 3.11.5.
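As a rough sketch of the two conventions described in points 1 and 2 (the class name, parameter keys f/Dp/D, and method signatures here are illustrative assumptions, not the repository's actual interface):

```python
import numpy as np

class ExampleStandardizedAlgorithm:
    """Hypothetical standardized algorithm following the new conventions."""

    def ivim_fit(self, signal, bvalues):
        """Fit a single voxel; return named parameters instead of an array."""
        # Dummy estimates standing in for a real biexponential fit.
        f, Dp, D = 0.1, 0.05, 1e-3
        return {"f": f, "Dp": Dp, "D": D}

    def ivim_fit_full_volume(self, data, bvalues):
        """Fit an entire volume at once; returns a dict of parameter maps."""
        shape = data.shape[:-1]
        results = {key: np.zeros(shape) for key in ("f", "Dp", "D")}
        for ijk in np.ndindex(shape):
            fit = self.ivim_fit(data[ijk], bvalues)
            for key in results:
                results[key][ijk] = fit[key]
        return results

algo = ExampleStandardizedAlgorithm()
data = np.ones((2, 2, 1, 4))  # tiny 4D volume: 2x2x1 voxels, 4 b-values
maps = algo.ivim_fit_full_volume(data, bvalues=np.array([0, 50, 100, 800]))
print(sorted(maps.keys()))  # named keys replace positional array indices
```

Downstream code then reads `maps["f"]` rather than `result[0]`, which is the robustness point 2 is after.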

Checklist

  • Self-review of changed code
  • Added automated tests where applicable
  • Update Docs & Guides

@etpeterson (Contributor)

Generally looking good, though I haven't looked at it in much detail. Changing to a dict is going to break things, so that'll be painful but probably useful.

It seems that you have a conflict in OsipiBase.py that needs resolution.

The test failures are from dipy and seem like they should be resolved now or soon: dipy/dipy#3299

#fit = list(self.ivim_fit(*args, **kwargs))
#results[ijk] = fit

for ijk in tqdm(np.ndindex(data.shape[:-1]), total=np.prod(data.shape[:-1])):
Contributor

I've been thinking about parallelizing the fitting and have something similar running as a separate script. I'm thinking that this structure is fine for now but will be too rigid. I think we'd want each algorithm to determine what is input into each step. Perhaps some algorithms combine all directions in one input, for example. Perhaps there's a generic generator that does essentially this, but can be overridden by the algorithm to supply data as it sees fit.

I should just finish up my changes and push them; maybe it would make more sense to have it all in context.
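The "generic generator, overridable by the algorithm" idea could look roughly like this (all class and method names are hypothetical stand-ins, not the repository's API):

```python
import numpy as np

class BaseFitter:
    """Hypothetical base class: by default, yields (index, signal) per voxel."""

    def data_chunks(self, data):
        for ijk in np.ndindex(data.shape[:-1]):
            yield ijk, data[ijk]

    def fit_volume(self, data):
        # Each algorithm sees exactly the chunks its generator produces.
        return {idx: self.fit_one(chunk) for idx, chunk in self.data_chunks(data)}

    def fit_one(self, signal):
        # Placeholder "fit": just the mean signal.
        return float(np.mean(signal))

class SliceWiseFitter(BaseFitter):
    """An algorithm that overrides the generator to consume whole slices."""

    def data_chunks(self, data):
        for z in range(data.shape[2]):
            yield ("slice", z), data[:, :, z, :]

vol = np.ones((2, 2, 3, 4))
print(len(BaseFitter().fit_volume(vol)))       # one result per voxel: 12
print(len(SliceWiseFitter().fit_volume(vol)))  # one result per slice: 3
```

The loop structure stays in the base class; only the chunking policy varies per algorithm.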

Contributor Author

I was working on fixing the notebook yesterday evening and ran into the volume being too large: fitting would have taken an hour if I hadn't used Paulien and Merel's selective indexing (I tried to avoid the 1D data reformatting). I agree that a loop like this would be a good place for some parallelization, although I would prefer not to be the one doing it, as it has been a great source of headaches in the past :)

But regarding multi-directional data, the only way I've seen and handled such data has been as a long list of b-values with separate header info to discriminate the directions. And it is sort of the easiest way to do it. I think DIPY essentially forces you to do it that way to avoid these formatting issues.

I'm gonna push my fixes, and we could hold off on this merge until you've pushed yours?
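A minimal illustration of the flat layout described above (the array contents are made up for illustration): each direction's b-values are concatenated into one long list, with a parallel array of direction vectors discriminating them.

```python
import numpy as np

# Three b-values acquired along two directions, stored as one flat list.
bvalues = np.array([0, 200, 800, 0, 200, 800])
bvecs = np.array([[1, 0, 0]] * 3 + [[0, 1, 0]] * 3)  # per-measurement direction

# Selecting one direction is then a simple boolean mask, no reshaping needed.
mask = np.all(bvecs == [1, 0, 0], axis=1)
print(bvalues[mask].tolist())  # b-values belonging to the x direction
```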

Collaborator

Hey! My commits already do parallel fits. I'm not sure whether it is the most efficient implementation, but it is a simple one.

# here we try parallel computing, but if it fails, fall back to a single core.
# (requires: from joblib import Parallel, delayed; from tqdm import tqdm; import numpy as np)
single = False
if njobs > 2:
    try:
        # define the parallel function
        def parfun(i):
            return fit_segmented(bvalues, dw_data[i, :], bounds=bounds, cutoff=cutoff, p0=p0)

        output = Parallel(n_jobs=njobs)(delayed(parfun)(i) for i in tqdm(range(len(dw_data)), position=0, leave=True))
        Dt, Fp, Dp = np.transpose(output)
    except Exception:
        # if the parallel fit fails, retry using a single core
        single = True
else:
    # or, if specified, immediately go to single core
    single = True
if single:
    # initialize empty arrays
    Dp = np.zeros(len(dw_data))
    Dt = np.zeros(len(dw_data))
    Fp = np.zeros(len(dw_data))
    for i in tqdm(range(len(dw_data)), position=0, leave=True):
        # fill arrays with fit results on a per-voxel basis:
        Dt[i], Fp[i], Dp[i] = fit_segmented(bvalues, dw_data[i, :], bounds=bounds, cutoff=cutoff, p0=p0)
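The try-parallel-then-fall-back structure of that snippet can be exercised in isolation with a toy fit function (joblib is swapped for the standard library's concurrent.futures here, and toy_fit and all data are stand-ins, not the repository's fit_segmented):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def toy_fit(signal):
    """Stand-in for fit_segmented: three dummy parameters per voxel."""
    return signal.mean(), signal.max(), signal.min()

dw_data = np.arange(12.0).reshape(4, 3)  # 4 "voxels", 3 measurements each
njobs = 4
single = False

if njobs > 2:
    try:
        with ThreadPoolExecutor(max_workers=njobs) as pool:
            output = list(pool.map(lambda i: toy_fit(dw_data[i]), range(len(dw_data))))
        Dt, Fp, Dp = np.transpose(output)
    except Exception:
        single = True  # parallel path failed, fall back
else:
    single = True

if single:
    Dt = np.zeros(len(dw_data))
    Fp = np.zeros(len(dw_data))
    Dp = np.zeros(len(dw_data))
    for i in range(len(dw_data)):
        Dt[i], Fp[i], Dp[i] = toy_fit(dw_data[i])

print(Dt.tolist())  # per-voxel means: [1.0, 4.0, 7.0, 10.0]
```

Either path leaves Dt, Fp, Dp as three length-4 arrays, which is what makes the fallback transparent to the caller.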

@IvanARashid (Contributor, author)

So I'd say this is ready to be merged once we've done two things:

  1. Resolve that one check that does not pass. I'm not sure what it is; maybe Eric can assist?
  2. Since we're moving to dictionary outputs, we need to make sure everything uses them now. I've changed it in Paulien's notebook, and I've made the dictionary outputs work with python -m pytest, but they also need to be adopted in the other testing functions. Eric, could you point me to which files I need to look into? E.g. for the recursive test.

@etpeterson (Contributor)

etpeterson commented Dec 10, 2024

I think the remaining changes are really small. See comments below.

So I'd say this is ready to be merged once we've done two things:

  1. Resolve that one check that does not pass. I'm not sure about what it is, maybe Eric can assist?

I think that's here: https://github.com/OSIPI/TF2.4_IVIM-MRI_CodeCollection/blob/main/WrapImage/nifti_wrapper.py#L112

  2. Since we're going to dictionary outputs, we need to make sure everything uses it now. I've changed it in Pauliens notebook, and I've made it so that the dictionary outputs work with python -m pytest, but we also need it to be changed in the other testing functions. Eric, could you point me to which files I need to look into? E.g. for the recursive test.

I think there's this: https://github.com/OSIPI/TF2.4_IVIM-MRI_CodeCollection/blob/main/tests/IVIMmodels/unit_tests/test_ivim_synthetic.py#L40

@IvanARashid (Contributor, author)

Thanks, Eric!

I'd say this can be merged now. The basic tests work, the ivim_synthetic test works, Paulien's notebook works, and the nifti_wrapper seems to pass as well.

Once this is in, all other PRs and currently active branches will need to pull in these changes and adapt any usage of the algorithms accordingly.

@etpeterson (Contributor) left a comment

Looks good to me, I think we're good to merge. There may be some issues that come up at some point but we can fix that later.
