Fusilli v1.2.0 Release Notes
Choose your own metrics!
If you want to evaluate your classification model with AUPRC and precision instead of AUROC and accuracy, there's now an easy way to do that:
- Create a list of the metrics you want to use: `["auprc", "precision"]`
- Pass this list to the `metrics_list` argument of `fusilli.train.train_and_save_models` (a sketch follows this list)
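As a rough sketch, the call might look like the following. Only `metrics_list` is the new argument from this release; `data_module` and `fusion_model` stand in for whatever you've already set up following the Fusilli examples, and the parameter names other than `metrics_list` are assumptions, so check the docs for your installed version.

```python
from fusilli.train import train_and_save_models

def train_with_custom_metrics(data_module, fusion_model):
    """Train and save Fusilli models, evaluated with AUPRC and precision.

    Sketch only: `data_module` and `fusion_model` are whatever you've
    built following the Fusilli examples; `metrics_list` is the new
    argument introduced in v1.2.0.
    """
    # The first entry ("auprc") becomes the main metric: it appears in
    # evaluation figure titles and ranks models in ModelComparison output.
    return train_and_save_models(
        data_module=data_module,
        fusion_model=fusion_model,
        metrics_list=["auprc", "precision"],
    )
```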
Before this release, you could already calculate your own metrics in your own experiment space by accessing the trained models' validation labels and predictions; this is much simpler.
If you want the default metrics (AUROC and accuracy for classification, R2 and MAE for regression), never fear! Leave `metrics_list` unspecified and nothing will change.
It's worth noting that the first metric in the list will be used as the main metric:
- It will appear in evaluation figure titles
- It will be used to rank the fusion models in the figure output from `fusilli.eval.ModelComparison` (see the sketch after this list)
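For completeness, here's a hedged sketch of the comparison step. The `from_final_val_data` entry point and the returned matplotlib-style figure follow the pattern in the Fusilli example notebooks and are assumptions here; consult the docs for your version if the call differs.

```python
from fusilli.eval import ModelComparison

def compare_models(trained_models):
    """Rank trained fusion models by the main metric ("auprc" above).

    Assumption: `trained_models` is the list returned by
    train_and_save_models, and ModelComparison builds its figure from
    the final validation data, as in the Fusilli example notebooks.
    """
    fig = ModelComparison.from_final_val_data(trained_models)
    fig.savefig("model_comparison.png")
    return fig
```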
Other changes:
- A new example notebook was added to show how to train and test a regression model with Fusilli
- Guidance on customising experimental configurations was moved from the examples to its own page in the documentation
Note: I've realised that the last release probably should have been Fusilli 2.0.0 because the API was changed by renaming a function and changing function inputs. Sorry for any confusion and I'll know for next time!