Merge pull request #152 from PapaBravo/147-quality-explainability
#147-quality-explainability
gernotstarke authored Jan 15, 2024
2 parents 8a27bbc + 87e7bfc commit 1f4760f
Showing 4 changed files with 60 additions and 7 deletions.
7 changes: 0 additions & 7 deletions _todo-qualities/2099-01-01-explainability.md

This file was deleted.

26 changes: 26 additions & 0 deletions qualities/E/_posts/2022-12-28-explainability.md
@@ -0,0 +1,26 @@
---
title: Explainability
tags: safe suitable
related: accountability, analysability, clarity
permalink: /qualities/explainability
---
Explainability is a quality sometimes required in the context of Artificial Intelligence (AI). The European Union's proposed AI Act will probably contain obligations for some AI systems to provide explanations of decisions that impact user rights.

### Definitions

> A system S is explainable with respect to an aspect X of S relative to an addressee A in context C if and only if there is an entity E (the explainer) who, by giving a corpus of information I (the explanation of X), enables A to understand X of S in C.
> [L. Chazette, W. Brunotte and T. Speith, "Exploring Explainability: A Definition, a Model, and a Knowledge Catalogue," 2021 IEEE 29th International Requirements Engineering Conference (RE), Notre Dame, IN, USA, 2021, pp. 197-208, doi: 10.1109/RE51729.2021.00025.](https://ieeexplore.ieee.org/document/9604587)
<hr class="with-no-margin"/>

> Explainable AI (XAI) [..] either refers to an AI system over which it is possible for humans to retain intellectual oversight, or to the methods to achieve this.
> [Wikipedia](https://en.wikipedia.org/wiki/Explainable_artificial_intelligence)
<hr class="with-no-margin"/>

> [..] local explainability helps answer the question, “for this particular example, why did the model make this particular decision?”
>
>Cohort explainability is the process of understanding to what degree your model’s features contribute to its predictions over a subset of your data.
>
> We consider an ML engineer to have access to global model explainability if across all predictions they are able to attribute which features contributed the most to the model’s decisions.
> [Towards Data Science](https://towardsdatascience.com/a-look-into-global-cohort-and-local-model-explainability-973bd449969f)
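
The three scopes above differ only in how many predictions one explanation covers. As a minimal sketch, assume a plain linear model, where each feature's contribution to a prediction is exactly its coefficient times its value; all names and numbers below are illustrative:

```python
import numpy as np

# Hypothetical linear decision model: prediction = weights . x + bias.
# For a linear model, per-feature contributions are exact.
weights = np.array([0.8, -0.5, 0.3])        # illustrative coefficients
feature_names = ["income", "debt", "tenure"]

def local_explanation(x):
    """Why did the model decide this way for ONE request?"""
    return dict(zip(feature_names, weights * x))

def global_explanation(X):
    """Across ALL requests, which features matter most on average?"""
    return dict(zip(feature_names, np.mean(np.abs(weights * X), axis=0)))

X = np.array([[1.2, 0.4, 2.0],
              [0.3, 1.5, 0.5]])             # two past requests
print(local_explanation(X[0]))              # local: one decision
print(global_explanation(X))                # global: average impact
```

Cohort explainability is the same aggregation restricted to a subset of the data, e.g. `global_explanation(X[mask])` for some boolean `mask` selecting the cohort.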
16 changes: 16 additions & 0 deletions requirements/G/_posts/2024-01-14-global-explainability.md
@@ -0,0 +1,16 @@
---
title: Global Explainability
tags: suitable safe reliable
related: explainability
permalink: /requirements/global-explainability
---

<div class="quality-requirement" markdown="1">

**Environment**: The system uses Artificial Intelligence (AI) to make decisions about humans, based on their request data and large amounts of statistical data from the domain.

**Response**: An administrator can request the average impact of specific features on the system's decisions.

**Background**: Using AI to generate decisions involves finding patterns in large sets of data. Some of these patterns result from biases in the data, or they represent information or characteristics (such as sexual orientation or religious beliefs) that most modern legal frameworks explicitly prohibit from being used in analyses and decision-making.

</div><br>
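
A sketch of how such a Response might be served, assuming per-decision attributions (e.g. SHAP-style values) are logged by the inference service; the file name, feature names, and `average_impact` helper are illustrative assumptions, not part of this scenario:

```python
import numpy as np

# Assumption: the inference service logs one attribution vector per
# decision, shape (n_decisions, n_features); names are illustrative.
attributions = np.load("attributions.npy")
feature_index = {"age": 0, "income": 1, "postcode": 2}

def average_impact(features):
    """Mean absolute attribution of the requested features."""
    return {f: float(np.mean(np.abs(attributions[:, feature_index[f]])))
            for f in features}

print(average_impact(["postcode", "age"]))
```

Querying a feature suspected of acting as a proxy (here, illustratively, `postcode`) would let the administrator check for exactly the kind of indirect bias the Background describes.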
18 changes: 18 additions & 0 deletions requirements/L/_posts/2024-01-14-local-explainability.md
@@ -0,0 +1,18 @@
---
title: Local Explainability
tags: suitable safe reliable
related: explainability
permalink: /requirements/local-explainability
---

<div class="quality-requirement" markdown="1">

**Stimulus**: The user was the subject of a decision made by the system.

**Environment**: The user has submitted a request, and the system used Artificial Intelligence (AI) to decide whether the request should be approved. The user is then informed of the decision.

**Response**: The user is informed about the three features that were most influential in the final decision.

**Background**: Using AI to generate decisions involves finding patterns in large sets of data. Individual results are subject to statistical noise and may be erroneous. For systems in high-risk scenarios in particular, giving users an explanation is necessary to enable additional oversight and appeal processes.

</div><br>
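
One way to produce that Response, sketched with illustrative attribution values; the `top_influential_features` helper and feature names are assumptions, not part of this scenario:

```python
def top_influential_features(attribution, names, k=3):
    """Return the k features with the largest absolute contribution
    to a single decision, most influential first."""
    ranked = sorted(zip(names, attribution),
                    key=lambda pair: abs(pair[1]), reverse=True)
    return ranked[:k]

# One decision's attributions (illustrative values):
names = ["income", "debt_ratio", "tenure", "age"]
attribution = [0.42, -0.95, 0.10, 0.03]
for name, value in top_influential_features(attribution, names):
    print(f"{name}: {value:+.2f}")
# -> debt_ratio: -0.95, income: +0.42, tenure: +0.10
```

Ranking by absolute value matters here: a strongly negative contribution is just as influential for the user's outcome as a strongly positive one.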
