Home
The primary goal of this project is to implement a new fairness package that is well integrated into the mlr3 ecosystem. The new package can be used by many other mlr3 subsystems such as mlr3pipelines, mlr3tuning and mlr3proba, and will help researchers and industrial users detect dataset and algorithmic bias more easily, which benefits both the accuracy and the impartiality of the models they build with mlr3.
More specifically, the project has three milestones:
- Create or integrate fairness metrics.
  - Create a new R6 class called "FairnessMeasures" that can make use of the existing measures from "mlr3" and "mlr3measures". This class will be the foundation for computing fairness metrics (see the sketch after this list).
  - Create additional fairness measures that are needed but do not yet exist.
  - Create demos for the list of fairness metrics.
- Implement debiasing methods.
- Create appropriate visualizations.
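As a rough illustration of the first milestone (a sketch only, not the final design), such a measure could subclass mlr3's MeasureClassif and compare a performance metric across subgroups. In the snippet below the class name, the measure id and the protected-attribute column "pta" are hypothetical placeholders; only the mlr3, mlr3measures and R6 calls are existing API.

```r
# Sketch only: "MeasureSubgroupFPRDiff", "fairness.fpr_diff" and the column "pta"
# are illustrative names, not part of any released package.
library(mlr3)
library(R6)

MeasureSubgroupFPRDiff = R6::R6Class("MeasureSubgroupFPRDiff",
  inherit = mlr3::MeasureClassif,
  public = list(
    initialize = function() {
      super$initialize(
        id = "fairness.fpr_diff",
        range = c(0, 1),
        minimize = TRUE,
        predict_type = "response",
        properties = "requires_task"  # the task is needed to look up the subgroup column
      )
    }
  ),
  private = list(
    .score = function(prediction, task, ...) {
      # Subgroup membership of the predicted rows (binary protected attribute "pta")
      groups = task$data(rows = prediction$row_ids, cols = "pta")[[1]]
      fpr_by_group = vapply(split(seq_along(groups), droplevels(groups)), function(idx) {
        mlr3measures::fpr(
          truth    = prediction$truth[idx],
          response = prediction$response[idx],
          positive = task$positive
        )
      }, numeric(1))
      # Absolute difference in false positive rates between the two subgroups
      abs(fpr_by_group[[1]] - fpr_by_group[[2]])
    }
  )
)
```

Once a family of such measures is registered in the mlr_measures dictionary, fairness scores would be available through the usual msr(), $score() and $aggregate() interface.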
The mlr3 package and ecosystem provide a generic, object-oriented, and extensible framework for classification, regression, survival analysis, and other machine learning tasks for the R language. Instead of implementing any learners, mlr3 provides a unified interface to many existing learners in R. This unified interface provides functionality to extend and combine existing learners, intelligently select and tune the most appropriate technique for a task, and perform large-scale comparisons that enable meta-learning. Examples of this advanced functionality include hyperparameter tuning, feature selection, and ensemble construction. Parallelization of many operations is natively supported.
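To make this concrete, a typical workflow with the existing unified interface looks like the following; the task and learner are only placeholders for whatever a user would actually analyze (german_credit ships with mlr3).

```r
library(mlr3)

task    = tsk("german_credit")                        # built-in credit scoring task
learner = lrn("classif.rpart", predict_type = "prob") # any classification learner works here
rr      = resample(task, learner, rsmp("cv", folds = 3))
rr$aggregate(msr("classif.auc"))                      # performance aggregated over the folds
```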
During this project we want to extend this unified interface to fairness in machine learning, for example by investigating and correcting for differences in predictions between subgroups of the data. The aim is to offer capabilities to analyze and visualize differences in algorithm performance between subgroups, as well as bias mitigation strategies, through mlr3 interfaces. Some simple operations, such as the confusion matrix, are already supported, but we want to add further reporting metrics such as predictive rate parity, false positive parity, false negative parity, and ROC AUC metrics.
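The sketch below shows how such subgroup reporting can already be approximated with existing functionality: it splits a resampled prediction by a subgroup column and prints a per-group confusion matrix and positive predictive value (the quantity behind predictive rate parity). The use of "personal_status_sex" from the german_credit task as the protected attribute is purely illustrative.

```r
library(mlr3)

task    = tsk("german_credit")
learner = lrn("classif.rpart")
rr      = resample(task, learner, rsmp("holdout"))
pred    = rr$prediction()

# Subgroup membership of the predicted rows; the column choice is illustrative
groups = task$data(rows = pred$row_ids, cols = "personal_status_sex")[[1]]

for (g in levels(droplevels(groups))) {
  idx = which(groups == g)
  cat("Subgroup:", g, "\n")
  # Per-subgroup confusion matrix
  print(table(truth = pred$truth[idx], response = pred$response[idx]))
  # Positive predictive value per subgroup (predictive rate parity compares these)
  cat("PPV:", mlr3measures::ppv(pred$truth[idx], pred$response[idx],
                                positive = task$positive), "\n\n")
}
```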
Beyond reporting metrics, the algorithms developed by the research community to detect and mitigate bias in machine learning models are valuable for addressing potential fairness problems. This leads to the second goal of this project: include bias mitigation algorithms for datasets and models in the package.
The proposed final result of this project is that mlr3 users can combine the mlr3 fairness package with other mlr3 packages to assess fairness and mitigate bias in their datasets and models. To this end, the project will:
- Extend the Measure class to investigate and quantify performance at the subgroup level.
- Implement popular fairness metrics by connecting to the fairness package or AI Fairness 360.
- Define a clean API for fairness auditing in mlr3.
- Implement visualizations for auditing using ggplot2, or adapt them directly from existing fairness packages.
- Implement debiasing strategies as pre- and post-processing PipeOps in the style of the mlr3pipelines package. One possible approach is to adapt methods from AIF360, which supports bias mitigation algorithms such as reweighing, equalized odds post-processing and adversarial debiasing (a sketch of a reweighing PipeOp follows this list).
- Create an introductory vignette and demos for the debiasing algorithms to showcase the new package.
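To make the PipeOp deliverable more concrete, below is a hedged sketch (under assumptions, not the package's final API) of reweighing implemented as a pre-processing PipeOp in the style of mlr3pipelines: instance weights are chosen so that the protected attribute and the target become statistically independent in the weighted training data. The class name, the id and the "pta" hyperparameter naming the protected-attribute column are all hypothetical.

```r
library(mlr3)
library(mlr3pipelines)
library(R6)

# Sketch of a reweighing pre-processing PipeOp; all names are illustrative.
PipeOpReweighing = R6::R6Class("PipeOpReweighing",
  inherit = mlr3pipelines::PipeOpTaskPreproc,
  public = list(
    initialize = function(id = "reweighing", param_vals = list()) {
      ps = paradox::ps(pta = paradox::p_uty(tags = "train"))  # name of the protected attribute column
      super$initialize(id = id, param_set = ps, param_vals = param_vals)
    }
  ),
  private = list(
    .train_task = function(task) {
      pta    = self$param_set$values$pta
      target = task$target_names
      dt = task$data(cols = c(target, pta))
      # Observed joint counts of (group, label) vs. the counts expected under independence
      observed = table(dt[[pta]], dt[[target]])
      expected = outer(rowSums(observed), colSums(observed)) / sum(observed)
      w_cell   = expected / pmax(observed, 1)
      # Look up each row's weight from its (group, label) cell
      w = w_cell[cbind(as.character(dt[[pta]]), as.character(dt[[target]]))]
      task$cbind(data.table::data.table(reweighing_w = as.numeric(w)))
      task$set_col_roles("reweighing_w", roles = "weight")
      self$state = list()  # nothing needs to be carried over to prediction
      task
    },
    .predict_task = function(task) {
      task  # reweighing only changes training weights; prediction data is unchanged
    }
  )
)

# Usage sketch: prepend the PipeOp to a learner that supports observation weights
# graph = PipeOpReweighing$new(param_vals = list(pta = "personal_status_sex")) %>>%
#   lrn("classif.rpart")
```

Note that the downstream learner must support observation weights (classif.rpart does) for the reweighing to have any effect.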