Merge pull request #152 from PapaBravo/147-quality-explainability
Showing 4 changed files with 60 additions and 7 deletions.
This file was deleted.
@@ -0,0 +1,26 @@
---
title: Explainability
tags: safe suitable
related: accountability, analysability, clarity
permalink: /qualities/explainability
---
Explainability is a quality sometimes required in the context of Artificial Intelligence (AI). The European Union's proposed AI Act will probably contain obligations for some AI systems to provide explanations of decisions that impact user rights.

### Definitions

> A system S is explainable with respect to an aspect X of S relative to an addressee A in context C if and only if there is an entity E (the explainer) who, by giving a corpus of information I (the explanation of X), enables A to understand X of S in C.
> [L. Chazette, W. Brunotte and T. Speith, "Exploring Explainability: A Definition, a Model, and a Knowledge Catalogue," 2021 IEEE 29th International Requirements Engineering Conference (RE), Notre Dame, IN, USA, 2021, pp. 197-208, doi: 10.1109/RE51729.2021.00025.](https://ieeexplore.ieee.org/document/9604587)
<hr class="with-no-margin"/>

> Explainable AI (XAI) [..] either refers to an AI system over which it is possible for humans to retain intellectual oversight, or to the methods to achieve this.
> [Wikipedia](https://en.wikipedia.org/wiki/Explainable_artificial_intelligence)
<hr class="with-no-margin"/>

> [..] local explainability helps answer the question, “for this particular example, why did the model make this particular decision?”
>
> Cohort explainability is the process of understanding to what degree your model’s features contribute to its predictions over a subset of your data.
>
> We consider an ML engineer to have access to global model explainability if across all predictions they are able to attribute which features contributed the most to the model’s decisions.
> [Towards Data Science](https://towardsdatascience.com/a-look-into-global-cohort-and-local-model-explainability-973bd449969f)
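
The three scopes are easy to see in code. Below is a minimal sketch using a plain linear model, where each feature's contribution to a prediction is simply its coefficient times the feature value; the scikit-learn dataset, model, and aggregation choices are illustrative assumptions, not prescribed by these definitions.

```python
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression

# Illustrative data and model; any fitted linear model would do.
data = load_diabetes()
X, y = data.data, data.target
model = LinearRegression().fit(X, y)

# For a linear model, each feature's contribution to a prediction
# is simply coefficient * feature value.
contributions = X * model.coef_  # shape: (n_samples, n_features)

# Local explainability: why did the model score sample 0 as it did?
local = dict(zip(data.feature_names, contributions[0]))

# Cohort explainability: the same attribution, aggregated over a subset.
cohort = dict(zip(data.feature_names, contributions[:100].mean(axis=0)))

# Global explainability: average impact of each feature over all samples.
global_impact = dict(zip(data.feature_names, np.abs(contributions).mean(axis=0)))
```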
@@ -0,0 +1,16 @@
---
title: Global Explainability
tags: suitable safe reliable
related: explainability
permalink: /requirements/global-explainability
---

<div class="quality-requirement" markdown="1">

**Environment**: The system uses Artificial Intelligence (AI) to make decisions about humans, based on their request data and large amounts of statistical data from the domain.

**Response**: An administrator can request the average impact of specific features.

**Background**: Using AI to generate decisions involves finding patterns in large sets of data. Some of these patterns are the result of biases in the data or represent information or characteristics (like sexual orientation or religious beliefs) that most modern legal frameworks explicitly forbid from being used in analyses and decision making.

</div><br>
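
One way such a response could be realized is sketched below, using permutation importance from scikit-learn to measure the average impact of a feature; the model, dataset, and feature names are assumptions chosen for illustration, not part of this requirement.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Illustrative model and data standing in for the decision system.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Average impact of each feature, measured as the drop in score when
# that feature's values are randomly permuted.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
impact = dict(zip(X.columns, result.importances_mean))

# Answer an administrator's query for specific features.
for feature in ["mean radius", "worst concave points"]:
    print(f"{feature}: {impact[feature]:.4f}")
```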
@@ -0,0 +1,18 @@
---
title: Local Explainability
tags: suitable safe reliable
related: explainability
permalink: /requirements/local-explainability
---

<div class="quality-requirement" markdown="1">

**Stimulus**: The user was the subject of a decision made by the system.

**Environment**: The user has submitted a request, and the system used Artificial Intelligence (AI) to decide whether the request should be approved. The user is then informed of the decision.

**Response**: The user is informed about the three features that were most influential in the final decision.

**Background**: Using AI to generate decisions involves finding patterns in large sets of data. Individual results are subject to statistical noise and may be erroneous. Specifically for systems in high-risk scenarios, giving users an explanation is necessary to enable additional oversight and appeal processes.

</div><br>
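
A minimal sketch of how such a response might be produced, assuming a logistic-regression approval model whose per-feature contributions are directly readable; the dataset and feature names are illustrative, not part of this requirement.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Illustrative approval model; logistic regression keeps per-feature
# contributions directly readable.
data = load_breast_cancer()
X = StandardScaler().fit_transform(data.data)
model = LogisticRegression(max_iter=1000).fit(X, data.target)

def top_features(request, k=3):
    """Return the k features that influenced this decision the most."""
    # Per-feature contribution to the log-odds of approval.
    contributions = request * model.coef_[0]
    top = np.argsort(np.abs(contributions))[::-1][:k]
    return [(data.feature_names[i], contributions[i]) for i in top]

# Explain one user's decision.
for name, value in top_features(X[0]):
    print(f"{name}: {value:+.3f}")
```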