- {allValues.length > 5 &&
- groupNames.map((groupName) => {
- const byGroupOccurences = (a, b) =>
- (groups[groupName].valuesCount[b] || 0) - (groups[groupName].valuesCount[a] || 0);
+
+ Autonomy2 (human-on-loop): Does the system operate independently but with human
+ oversight, where the system makes decisions or takes actions but a human actively
+ observes the behavior and can override the system in real time?
+
-     return (
-       <div key={groupName}>
-         <h3>{groupName}</h3>
-         <table>
-           <thead>
-             <tr>
-               <th>Category</th>
-               <th>Count</th>
-             </tr>
-           </thead>
-           <tbody>
-             {allValues.sort(byGroupOccurences).map((value) => (
-               <tr key={value}>
-                 <td>{value}</td>
-                 <td>{groups[groupName].valuesCount[value]}</td>
-               </tr>
-             ))}
-           </tbody>
-         </table>
-       </div>
-     );
-   })}
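The removed block sorts each group's values by descending occurrence count, with values missing from `valuesCount` defaulting to 0. A minimal sketch of that comparator, using hypothetical data (`valuesCount`, the value names, and `byOccurrences` are illustrative, not from the diff):

```javascript
// Sketch of the descending-count comparator pattern used in the removed
// block, with hypothetical counts. Values absent from the map fall back
// to 0 via `|| 0`, so they sort last.
const valuesCount = { dog: 5, cat: 3, fish: 1 };

const byOccurrences = (a, b) => (valuesCount[b] || 0) - (valuesCount[a] || 0);

const sorted = ['cat', 'bird', 'dog', 'fish'].sort(byOccurrences);
// sorted is ['dog', 'cat', 'fish', 'bird'] — 'bird' has no count, so it lands last
```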
+
+ Autonomy3 (human-in-the-loop): Does the system provide inputs and suggested
+ decisions to a human that actively chooses to proceed with the AI's direction?
+
+
+ {[
+ {
+ attributeShortName: 'Physical Objects',
+ titleDescription: 'Did the incident occur in a domain with physical objects?',
+     subtitle: (
+       <>
+         Incidents that involve physical objects are more likely to have damage or injury.
+         However, AI systems that do not operate in a physical domain can still lead to
+         harm.
+       </>
+     ),
+ },
+ {
+ attributeShortName: 'Entertainment Industry',
+ titleDescription: 'Did the incident occur in the entertainment industry?',
+     subtitle: (
+       <>
+         AI systems used for entertainment are less likely to involve physical objects and
+         hence unlikely to be associated with damage, injury, or loss. Additionally, there is
+         a lower expectation for truthful information from entertainment, making detrimental
+         content less likely (but still possible).
+       </>
+     ),
+ },
+ {
+ attributeShortName: 'Report, Test, or Study of data',
+ titleDescription:
+ 'Was the incident about a report, test, or study of training data instead of the AI itself?',
+     subtitle: (
+       <>
+         The quality of AI training and deployment data can potentially create harm or risks
+         in AI systems. However, an issue in the data does not necessarily mean the AI will
+         cause harm or increase the risk for harm. It is possible that developers or users
+         apply techniques and processes to mitigate issues with data.
+       </>
+     ),
+ },
+ {
+ attributeShortName: 'Deployed',
+ titleDescription:
+ 'Was the reported system (even if AI involvement is unknown) deployed or sold to users?',
+     subtitle: <></>,
+ },
+ {
+ attributeShortName: 'Producer Test in Controlled Conditions',
+ titleDescription:
+ 'Was this a test or demonstration of an AI system done by developers, producers, or researchers (versus users) in controlled conditions?',
+     subtitle: (
+       <>
+         AI tests or demonstrations by developers, producers, or researchers in controlled
+         environments are less likely to expose people, organizations, property,
+         institutions, or the natural environment to harm. Controlled environments may
+         include situations such as an isolated compute system, a regulatory sandbox, or an
+         autonomous vehicle testing range.
+       </>
+     ),
+ },
+ {
+ attributeShortName: 'Producer Test in Operational Conditions',
+ titleDescription:
+ 'Was this a test or demonstration of an AI system done by developers, producers, or researchers (versus users) in operational conditions?',
+     subtitle: (
+       <>
+         Some AI systems undergo testing or demonstration in an operational environment.
+         Testing in operational environments still occurs before the system is deployed by
+         end-users. However, relative to controlled environments, operational environments
+         try to closely represent real-world conditions that affect use of the AI system.{' '}
+       </>
+     ),
+ },
+ {
+ attributeShortName: 'User Test in Controlled Conditions',
+ titleDescription:
+ 'Was this a test or demonstration of an AI system done by users in controlled conditions?',
+     subtitle: (
+       <>
+         Sometimes, prior to deployment, the users will perform a test or demonstration of
+         the AI system. The involvement of a user (versus a developer, producer, or
+         researcher) increases the likelihood that harm can occur even if the AI system is
+         being tested in controlled environments because a user may not be as familiar with
+         the functionality or operation of the AI system.
+       </>
+     ),
+ },
+ {
+ attributeShortName: 'User Test in Operational Conditions',
+ titleDescription:
+ 'Was this a test or demonstration of an AI system done by users in operational conditions?',
+     subtitle: (
+       <>
+         The involvement of a user (versus a developer, producer, or researcher) increases
+         the likelihood that harm can occur even if the AI system is being tested. Relative
+         to controlled environments, operational environments try to closely represent
+         real-world conditions and end-users that affect use of the AI system. Therefore,
+         testing in an operational environment typically poses a heightened risk of harm to
+         people, organizations, property, institutions, or the environment.
+       </>
+     ),
+ },
+ ].map(({ attributeShortName, titleDescription, subtitle }) => (
+
+ ))}