diff --git a/qualities/E/_posts/2022-12-28-explainability.md b/qualities/E/_posts/2022-12-28-explainability.md
index 4085911..d4609f2 100644
--- a/qualities/E/_posts/2022-12-28-explainability.md
+++ b/qualities/E/_posts/2022-12-28-explainability.md
@@ -4,7 +4,7 @@ tags: safe suitable
related: accountability, analysability, clarity
permalink: /qualities/explainability
---
-Explainability is a quality sometimes required in the context of Artificial Intelligence (AI). The proposed AI Act of the European Union will probably contain obligations for some AI systems to provide explanations about decisions that impact user rights.
+Explainability is a quality sometimes required in the context of Artificial Intelligence (AI). The European Union's proposed AI Act will probably contain obligations for some AI systems to provide explanations about decisions that impact user rights.
### Definitions
diff --git a/requirements/G/_posts/2024-01-14-global-explainability.md b/requirements/G/_posts/2024-01-14-global-explainability.md
index 5ec45e3..5c15809 100755
--- a/requirements/G/_posts/2024-01-14-global-explainability.md
+++ b/requirements/G/_posts/2024-01-14-global-explainability.md
@@ -11,6 +11,6 @@ permalink: /requirements/global-explainability
**Response**: An administrator can request the average impact of specific features.
-**Background**: Using AI to generate decisions involves finding patterns in large sets of data. Some of these patterns are biases in the data or represent information that is not allowed to be used in the analysis (like sexual orientation or religious beliefs).
+**Background**: Using AI to generate decisions involves finding patterns in large sets of data. Some of these patterns result from biases in the data or represent characteristics (like sexual orientation or religious beliefs) that most modern legal frameworks explicitly forbid from being used in analysis and decision-making.
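The global-explainability response above (an administrator querying the average impact of features) can be sketched in a few lines. This is an illustrative assumption, not code from the posts: it presumes a simple linear scoring model where a feature's impact on a single decision is |weight × value|, and defines global impact as the mean of those absolute impacts over many requests. All feature names and weights are invented.

```python
# Hypothetical linear model: a feature's impact on one decision is
# |weight * value|; the global impact is its mean over many requests.
# Feature names and weights below are invented for illustration.

def mean_absolute_impact(weights, requests):
    """Average per-feature |weight * value| over a batch of requests."""
    totals = {name: 0.0 for name in weights}
    for request in requests:
        for name, value in request.items():
            totals[name] += abs(weights[name] * value)
    return {name: total / len(requests) for name, total in totals.items()}

weights = {"income": 0.5, "debt": -0.8}
requests = [
    {"income": 2.0, "debt": 1.0},
    {"income": 4.0, "debt": 3.0},
]
impact = mean_absolute_impact(weights, requests)
print(impact)  # "debt" has the larger average impact in this toy data
```

An administrator-facing report built this way could flag features (or proxies for forbidden characteristics) whose average impact is unexpectedly high.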
diff --git a/requirements/L/_posts/2024-01-14-local-explainability.md b/requirements/L/_posts/2024-01-14-local-explainability.md
index 68198f3..0223bb9 100755
--- a/requirements/L/_posts/2024-01-14-local-explainability.md
+++ b/requirements/L/_posts/2024-01-14-local-explainability.md
@@ -9,10 +9,10 @@ permalink: /requirements/local-explainability
**Stimulus**: The user was the subject of a decision made by the system.
-**Environment**: The user has submitted a request and the system used an Artificial Intelligence (AI) to decide if the request should be approved or not. The user is then informed of the decision.
+**Environment**: The user has submitted a request and the system used Artificial Intelligence (AI) to decide if the request should be approved or not. The user is then informed of the decision.
-**Response**: The user is informed about the 3 features that were most impactful on the decision.
+**Response**: The user is informed about the 3 features that were most influential in the final decision.
-**Background**: Using AI to generate decisions involves finding patterns in large sets of data. Individual results are subject to statistical noise and may be unintended. Specifically for systems in high risk scenarios, giving users an explanation is necessary to allow additional oversight and appeal processes.
+**Background**: Using AI to generate decisions involves finding patterns in large sets of data. Individual results are subject to statistical noise and may be erroneous. Specifically for systems in high-risk scenarios, giving users an explanation is necessary to allow additional oversight and appeal processes.
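The local-explainability response above (telling the user which 3 features most influenced their decision) can be sketched under the same simplifying assumption: a hypothetical linear scoring model where a feature's contribution to one decision is weight × value, ranked by absolute size. All feature names and weights are invented for this example.

```python
# Hypothetical linear model: each feature contributes weight * value to
# one decision; report the n features with the largest absolute
# contribution. Names and weights below are invented for illustration.

def top_influential_features(weights, request, n=3):
    """Return the n feature names with the largest |weight * value|."""
    contribution = {
        name: weights[name] * value for name, value in request.items()
    }
    ranked = sorted(
        contribution, key=lambda name: abs(contribution[name]), reverse=True
    )
    return ranked[:n]

weights = {"income": 0.5, "age": 0.01, "debt": -0.8, "tenure": 0.05}
request = {"income": 3.0, "age": 40.0, "debt": 2.5, "tenure": 1.0}
print(top_influential_features(weights, request))  # ['debt', 'income', 'age']
```

For non-linear models, attribution techniques such as SHAP or permutation importance play the same role; the shape of the user-facing answer (a short ranked list of features) stays the same.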