search.json
562 lines (498 loc) · 921 KB
[
{
"title" : "Introduction",
"category" : "",
"tags" : "",
"url" : "/800-63-4/sp800-63/introduction/",
"date" : "2024-08-28 20:39:12 -0500",
"content" : "Introduction\n\nThis section is informative.\n\nThe rapid proliferation of online services over the past few years has heightened the need for reliable, equitable, secure, and privacy-protective digital identity solutions. A digital identity is always unique in the context of an online service. However, a person may have multiple digital identities and while a digital identity may relay a unique and specific meaning within the context of an online service, the real-life identity of the individual behind the digital identity may not be known. When confidence in a person’s real-life identity is not required to provide access to an online service, organizations may use anonymous or pseudonymous accounts. In all other use cases, a digital identity is intended to demonstrate trust between the holder of the digital identity and the person, organization, or system on the other side of the online service. However, this process can present challenges. There are multiple opportunities for mistakes, miscommunication, impersonation, and other attacks that fraudulently claim another person’s digital identity. Additionally, given the broad range of individual needs, constraints, capacities, and preferences, online services must be designed with equity, usability, and flexibility to ensure broad and enduring participation and access to digital devices and services.\n\nDigital identity risks are dynamic and exist along a continuum; consequently, organizations’ digital identity risk management approach should seek to manage risks using outcome-based approaches that are designed to meet the organization’s unique needs. This guidance defines specific assurance levels which operate as baseline control sets designed to provide a common point for organizations seeking to address identity-related risks. 
Assurance levels provide multiple benefits, including a starting point for agencies in their risk management journey and a common structure for supporting interoperability between different entities. It is, however, impractical to create assurance levels that can comprehensively address the entire spectrum of risks, threats, or considerations an organization will face when deploying an identity solution. For this reason, these guidelines promote a risk-oriented approach to digital identity solution implementation rather than a compliance-oriented approach, and organizations are encouraged to tailor their control implementations based on the processes defined in these guidelines.\n\nAdditionally, risks associated with digital identity stretch beyond the potential impacts to the organization providing online services. These guidelines endeavor to account for risks to individuals, communities, and other organizations more robustly and explicitly. Organizations should consider how digital identity decisions that prioritize security might affect, or need to accommodate, the individuals who interact with the organization’s programs and services. Privacy, equity, and usability for individuals should be considered along with security. Additionally, organizations should consider their digital identity approach alongside other mechanisms for identity management, such as those used in call centers and in-person interactions. 
By taking a human-centric and continuously informed approach to mission delivery, organizations have an opportunity to incrementally build trust with the variety of populations they serve, improve customer satisfaction, identify issues more quickly, and provide individuals with culturally appropriate and effective redress options.\n\nThe composition, models, and availability of identity services has significantly changed since the first version of SP 800-63 was released, as have the considerations and challenges of deploying secure, private, usable, and equitable services to diverse user communities. This revision addresses these challenges by clarifying requirements based on the function that an entity may serve under the overall digital identity model.\n\nAdditionally, this publication provides instruction for credential service providers (CSPs), verifiers, and relying parties (RPs), that supplement the NIST Risk Management Framework [NISTRMF] and its component special publications. It describes the risk management processes that organizations should follow to implement digital identity services and expands upon the NIST RMF by outlining how equity and usability considerations should be incorporated. It also highlights the importance of considering impacts, not only on enterprise operations and assets, but also on individuals, other organizations, and — more broadly — society. Furthermore, digital identity management processes for identity proofing, authentication, and federation typically involve processing personal information, which can present privacy risks. 
Therefore, these guidelines include privacy requirements and considerations to help mitigate potential associated privacy risks.\n\nFinally, while these guidelines provide organizations with technical requirements and recommendations for establishing, maintaining, and authenticating the digital identity of subjects who access digital systems over a network, additional support options outside of the purview of information technology teams may be needed to address barriers and adverse impacts, foster equity, and successfully deliver on mission objectives.\n\nScope and Applicability\n\nThis guidance applies to all online services for which some level of digital identity is required, regardless of the constituency (e.g., residents, business partners, and government entities). For this publication, “person” refers only to natural persons.\n\nThese guidelines primarily focus on organizational services that interact with external users, such as residents accessing public benefits or private-sector partners accessing collaboration spaces. However, they also apply to federal systems accessed by employees and contractors. The Personal Identity Verification (PIV) of Federal Employees and Contractors standard [FIPS201] and its corresponding set of Special Publications and organization-specific instructions extend these guidelines for the federal enterprise by providing additional technical controls and processes for issuing and managing Personal Identity Verification (PIV) Cards, binding additional authenticators as derived PIV credentials, and using federation architectures and protocols with PIV systems.\n\nOnline services not covered by this guidance include those associated with national security systems as defined in 44 U.S.C. § 3552(b)(6). 
Private-sector organizations and state, local, and tribal governments whose digital processes require varying levels of digital identity assurance may consider the use of these standards where appropriate.\n\nThese guidelines address logical access to online systems, services, and applications. They do not specifically address physical access control processes. However, the processes specified in these guidelines can be applied to physical access use cases where appropriate. Additionally, these guidelines do not explicitly address some subjects including, but not limited to, machine-to-machine authentication, interconnected devices (e.g., Internet of Things (IoT) devices), or access to Application Programming Interfaces (APIs) on behalf of subjects.\n\nHow to Use This Suite of SPs\n\nThese guidelines support the mitigation of the negative impacts of errors that occur during the identity system functions of identity proofing, authentication, and federation. Sec. 3, Digital Identity Risk Management, provides details on the risk assessment process and how the results of the risk assessment and additional context inform the selection of controls to secure the identity proofing, authentication, and federation processes. 
Controls are selected by determining the assurance level required to mitigate each applicable type of digital identity error for a particular service based on risk and mission.\n\nSpecifically, organizations are required to individually select assurance levels [1] that correspond to each function being performed:\n\n\n Identity Assurance Level (IAL) refers to the identity proofing process.\n Authentication Assurance Level (AAL) refers to the authentication process.\n Federation Assurance Level (FAL) refers to the federation process when the RP is connected to a CSP or an IdP through a federated protocol.\n\n\nSP 800-63 is organized as the following suite of volumes:\n\n\n SP 800-63 Digital Identity Guidelines provides the digital identity models, risk assessment methodology, and process for selecting assurance levels for identity proofing, authentication, and federation. SP 800-63 contains both normative and informative material.\n [SP800-63A] provides requirements for identity proofing and the enrollment of applicants, either remotely or in person, who wish to gain access to resources at each of the three IALs. It details the responsibilities of CSPs with respect to establishing and maintaining subscriber accounts and binding CSP-issued or subscriber-provided authenticators to the subscriber account.\nSP 800-63A contains both normative and informative material.\n [SP800-63B] provides requirements for authentication processes, including choices of authenticators, that may be used at each of the three AALs. It also provides recommendations on events that may occur during the lifetime of authenticators, including invalidation in the event of loss or theft.\nSP 800-63B contains both normative and informative material.\n [SP800-63C] provides requirements on the use of federated identity architectures and assertions to convey the results of authentication processes and relevant identity information to an agency application. 
This volume offers privacy-enhancing techniques for sharing information about a valid, authenticated subject, and describes methods that allow for strong multi-factor authentication (MFA) while the subject remains pseudonymous to the online service.\nSP 800-63C contains both normative and informative material.\n\n\nEnterprise Risk Management Requirements and Considerations\n\nEffective enterprise risk management is multidisciplinary by design and involves the consideration of diverse sets of factors and equities. In a digital identity risk management context, these factors include, but are not limited to, information security, privacy, equity, and usability. It is important for risk management efforts to weigh these factors as they relate to enterprise assets and operations, individuals, other organizations, and society.\n\nDuring the process of analyzing factors relevant to digital identity, organizations may determine that measures outside of those specified in this publication are appropriate in certain contexts (e.g., where privacy or other legal requirements exist or where the output of a risk assessment leads the organization to determine that additional measures or alternative procedural safeguards are appropriate). Organizations, including federal agencies, may employ compensating or supplemental controls that are not specified in this publication. They may also consider partitioning the functionality of an online service to allow less sensitive functions to be available at a lower level of assurance in order to improve equity and access without compromising security.\n\nThe considerations detailed below support enterprise risk management efforts and encourage informed, inclusive, and human-centered service delivery. 
While this list of considerations is not exhaustive, it highlights a set of cross-cutting factors that are likely to impact decision-making associated with digital identity management.\n\nSecurity, Fraud, and Threat Prevention\n\nIt is increasingly important for organizations to assess and manage digital identity security risks, such as unauthorized access due to impersonation. As organizations consult this guidance, they should consider potential impacts to the confidentiality, integrity, and availability of information and information systems that they manage and that their service providers and business partners manage on behalf of the individuals and communities that they serve.\n\nFederal agencies implementing these guidelines are required to meet statutory responsibilities, including those under the Federal Information Security Modernization Act (FISMA) of 2014 [FISMA] and related NIST standards and guidelines. NIST recommends that non-federal organizations implementing these guidelines follow comparable standards (e.g., ISO 27001) to ensure the secure operation of their digital systems.\n\nFISMA requires federal agencies to implement appropriate controls to protect federal information and information systems from unauthorized access, use, disclosure, disruption, or modification. The NIST RMF [NISTRMF] provides a process that integrates security, privacy, and cyber supply-chain risk management activities into the system development life cycle. 
It is expected that federal agencies and organizations that provide services under these guidelines have already implemented the controls and processes required under FISMA and associated NIST risk management processes and publications.\n\nThe controls and requirements encompassed by the identity, authentication, and Federation Assurance Levels under these guidelines augment, but do not replace or alter, the information and information system controls determined under FISMA and the RMF.\n\nIt is increasingly important for organizations to assess and manage identity-related fraud risks associated with identity proofing and authentication processes. As organizations consult this guidance, they should consider the evolving threat environment, the availability of innovative anti-fraud measures in the digital identity market, and the potential impact of identity-related fraud. This is particularly important with respect to public-facing online services where the impact of identity-related fraud on e-government service delivery, public trust, and agency reputation can be substantial. This version enhances measures to combat identity theft and identity-related fraud by repurposing IAL1 as a new assurance level, updating authentication risk and threat models to account for new attacks, providing new options for phishing resistant authentication, introducing requirements to prevent automated attacks against enrollment processes, and preparing for new technologies (e.g., mobile driver’s licenses and verifiable credentials) that can leverage strong identity proofing and authentication.\n\nPrivacy\n\nWhen designing, engineering, and managing digital identity systems, it is imperative to consider the potential of that system to create privacy-related problems for individuals when processing (e.g., collection, storage, use, and destruction) personally identifiable information (PII) and the potential impacts of problematic data actions. 
If a breach of PII or a release of sensitive information occurs, organizations need to ensure that the privacy notices describe, in plain language, what information was improperly released and, if known, how the information was exploited.\n\nOrganizations need to demonstrate how organizational privacy policies and system privacy requirements have been implemented in their systems. These guidelines recommend that organizations employ the full set of legal and regulatory mandates that may affect their users and technology providers including:\n\n\n The NIST Privacy Framework [NISTPF], which enables privacy engineering practices that support privacy by design concepts and helps organizations protect individuals’ privacy.\n The [PrivacyAct] of 1974, 2020 Edition which established a set of fair information practices for the collection, maintenance, use, and disclosure of information about individuals that is maintained by federal agencies in systems of records.\n OMB Guidance for Implementing the Privacy Provisions of the E-Government Act of 2002 [M-03-22], which describes the Privacy Impact Assessments that are supported by the privacy risk assessments that are required for PII processing or storing.\n [SP800-53] Security and Privacy Controls for Information Systems and Organizations , which lists privacy controls that can be implemented to mitigate the risks identified in the privacy risk and impact assessments.\n [SP800-122] Guide to Protecting the Confidentiality of Personally Identifiable Information (PII), which assists federal agencies in understanding what PII is, the relationship between protecting the confidentiality of PII, privacy, and the Fair Information Practices, and safeguards for protecting PII.\n\n\nFurthermore, each volume of SP 800-63, ([SP800-63A], [SP800-63B], and [SP800-63C]) contains a specific section providing detailed privacy guidance and considerations for the implementation of the processes, controls, and requirements presented in that 
volume as well as normative requirements on data collection, retention, and minimization.\n\nEquity\n\nEquity has been defined as “the consistent and systematic fair, just, and impartial treatment of all individuals, including individuals who belong to underserved communities that have been denied such treatment” [EO13985]. Incorporating equity considerations when designing or operating a digital identity service helps ensure a person’s ability to engage in an online service, such as accessing a critical service like healthcare. Accessing online services is often dependent on a person’s ability to present a digital identity and use the required technologies successfully and safely. Many populations are either unable to successfully present a digital identity or face a higher degree of burden in navigating online services than their more privileged peers. In a public service context, this poses a direct risk to successful mission delivery. In a broader societal context, challenges related to digital access can exacerbate existing inequities and continue systemic cycles of exclusion for historically marginalized and underserved groups.\n\nTo support the continuous evaluation and improvement program described in Sec. 3, it is important to maintain awareness of existing inequities faced by served populations and potential new inequities or disparities between populations that could be caused or exacerbated by the design or operation of digital identity systems. This can help identify the opportunities, processes, business partners, and multi-channel identity proofing and service delivery methods that best support the needs of those populations while also managing privacy, security, and fraud risks.\n\nFurther, section 508 of the Rehabilitation Act of 1973 (2011) [Section508] was enacted to eliminate barriers in information technology and require federal agencies to make electronic and information technologies accessible to people with disabilities. 
While these guidelines do not directly assert requirements from [Section508], federal agencies and their identity service providers are expected to design online services and systems with the experiences of people with disabilities in mind to ensure that accessibility is prioritized.\n\nUsability\n\nUsability refers to the extent to which a system, product, or service can be used to achieve goals with effectiveness, efficiency, and satisfaction in a specified context of use. Usability also supports major objectives such as equity, service delivery, and security. Like equity, usability requires an understanding of the people who interact with a digital identity system or process, as well as their unique goals and context of use.\n\nReaders of this guidance should take a holistic approach to considering the interactions that each user will engage in throughout the process of enrolling in and authenticating to a service. Throughout the design and development of a digital identity system or process, it is important to conduct usability evaluations with demographically representative users, from all communities served and perform realistic scenarios and tasks in appropriate contexts of use. Additionally, following usability guidelines and considerations can help organizations meet customer experience goals articulated in federal policy [EO14058]. Digital identity management processes should be designed and implemented so that it is easy for users to do the right thing, hard to do the wrong thing, and easy to recover when the wrong thing happens.\n\nNotations\n\nThis guideline uses the following typographical conventions in text:\n\n\n Specific terms in CAPITALS represent normative requirements. 
When these same terms are not in CAPITALS, the term does not represent a normative requirement.\n \n The terms “SHALL” and “SHALL NOT” indicate requirements to be followed strictly in order to conform to the publication and from which no deviation is permitted.\n The terms “SHOULD” and “SHOULD NOT” indicate that among several possibilities, one is recommended as particularly suitable without mentioning or excluding others, that a certain course of action is preferred but not necessarily required, or that (in the negative form) a certain possibility or course of action is discouraged but not prohibited.\n The terms “MAY” and “NEED NOT” indicate a course of action that is permissible within the limits of the publication.\n The terms “CAN” and “CANNOT” indicate a material, physical, or causal possibility and capability or — in the negative — the absence of that possibility or capability.\n \n \n\n\n\\clearpage\n\n\nDocument Structure\n\nThis document is organized as follows. Each section is labeled as either normative (i.e., mandatory for compliance) or informative (i.e., not mandatory).\n\n\n Section 1 provides an introduction to the document. This section is informative.\n Section 2 describes a general model for digital identity. This section is informative.\n Section 3 describes the digital identity risk model. This section is normative.\n The References section contains a list of publications that are cited in this document. This section is informative.\n Appendix A contains a selected list of abbreviations used in this document. This appendix is informative.\n Appendix B contains a glossary of selected terms used in this document. This appendix is informative.\n Appendix C contains a summarized list of changes in this document’s history. This appendix is informative.\n\n\n \n \n When described generically or bundled, these guidelines will refer to IAL, AAL, and FAL as xAL. ↩\n \n \n\n"
} ,
{
"title" : "Digital Identity Model",
"category" : "",
"tags" : "",
"url" : "/800-63-4/sp800-63/model/",
"date" : "2024-08-28 20:39:12 -0500",
"content" : "Digital Identity Model\n\nThis section is informative.\n\nOverview\n\nThe SP 800-63 guidelines use digital identity models that reflect technologies and architectures that are currently available in the market. These models have a variety of entities and functions and vary in complexity. Simple models group functions, such as creating subscriber accounts and providing attributes, under a single entity. More complex models separate these functions among a larger number of entities. The entities, and their associated functions, found in digital identity models include:\n\nSubject: In these guidelines, a subject is a person and is represented by one of three roles, depending on where they are in the digital identity process.\n\n\n Applicant — The subject to be identity-proofed and enrolled.\n Subscriber — The subject who has successfully completed the identity proofing and enrollment process or authentication (i.e., when the subject is in an active online session).\n Claimant — The subject “making a claim” to be eligible for authentication.\n\n\nService provider: Service providers can perform any combination of functions involved in granting access to and delivering online services, such as a credential service provider, relying party, verifier, and identity provider.\n\nCredential service provider (CSP): CSP functions include identity proofing applicants to the identity service and registering authenticators to subscriber accounts. A subscriber account is the CSP’s established record of the subscriber, the subscriber’s attributes, and associated authenticators. CSP functions may be performed by an independent third party.\n\nRelying party (RP): RP functions rely on the information in the subscriber account from the CSP, typically to process a digital transaction or grant access to information or a system. 
When using federation, the RP accesses the information in the subscriber account through assertions from an identity provider.\n\nVerifier: The function of a verifier is to verify the claimant’s identity by verifying the claimant’s possession and control of one or more authenticators using an authentication protocol. To do this, the verifier needs to confirm the binding of the authenticators with the subscriber account and check that the subscriber account is active.\n\nIdentity provider (IdP): When using federation, the IdP manages the subscriber’s primary authenticators and issues assertions derived from the subscriber account.\n\nIdentity Proofing and Enrollment\n\nNormative requirements can be found in [SP800-63A], Identity Proofing and Enrollment.\n\n[SP800-63A] provides general information and normative requirements for the identity proofing and enrollment processes as well as requirements that are specific to IALs.\n\nFigure 1 shows a sample of interactions for identity proofing and enrollment.\n\nTo start, an applicant opts to enroll with a CSP by requesting access. The CSP or the entity fulfilling CSP functions requests identity evidence and attributes, which the applicant provides. If the applicant is successfully identity-proofed, they are enrolled in the identity service as a subscriber of that CSP. A unique subscriber account is then created and one or more authenticators are registered to the subscriber account.\n\nSubscribers have a responsibility to maintain control of their authenticators (e.g., guard against theft) and comply with CSP policies to remain in good standing with the CSP.\n\nFig. 1. Sample Identity Proofing and Enrollment Digital Identity Model\n\n\n\nSubscriber Accounts\n\nAt the time of enrollment, the CSP establishes a subscriber account to uniquely identify each subscriber and record any authenticators registered (bound) to that subscriber account. 
The CSP may:\n\n\n Issue and register one or more authenticators to the subscriber at the time of enrollment,\n Register authenticators provided by the subscriber to the subscriber account,\n Register additional authenticators to the subscriber account at a later time as needed, or\n Provision the subscriber account to one or more general-purpose or subscriber-controlled wallets, for use in a federated protocol system.\n\n\nSee Sec. 5 of [SP800-63A], Subscriber Accounts, for more information and normative requirements.\n\nAuthentication and Authenticator Management\n\nNormative requirements can be found in [SP800-63B], Authentication and Authenticator Management.\n\nAuthenticators\n\n[SP800-63B] provides normative descriptions of permitted authenticator types, their characteristics (e.g., phishing resistance), and authentication processes appropriate for each AAL.\n\nThis guidance defines three types of authentication factors:\n\n\n Something you know (e.g., a password)\n Something you have (e.g., a device containing a cryptographic key)\n Something you are (e.g., a fingerprint or other biometric characteristic data)\n\n\nSingle-factor authentication requires only one of the above factors, most often “something you know”. Multiple instances of the same factor still constitute single-factor authentication. For example, a user-generated PIN and a password do not constitute two factors as they are both “something you know.” Multi-factor authentication (MFA) refers to the use of more than one distinct factor.\n\nThis guidance specifies that authenticators always contain or comprise a secret. The secrets contained in an authenticator are based on either key pairs (i.e., asymmetric cryptographic keys) or shared secrets (including symmetric cryptographic keys, seeds for generating one-time passwords (OTP), and passwords). Asymmetric key pairs are comprised of a public key and a related private key. 
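The factor-counting rule described above (multiple instances of the same factor still constitute single-factor authentication) can be sketched in code. This is an illustrative fragment, not part of the guidelines; the `Factor` enum and `is_multi_factor` helper are invented for the example.

```python
from enum import Enum

class Factor(Enum):
    """The three authentication factor types defined in SP 800-63."""
    KNOW = "something you know"  # e.g., a password or PIN
    HAVE = "something you have"  # e.g., a device containing a cryptographic key
    ARE = "something you are"    # e.g., a fingerprint or other biometric

def is_multi_factor(factors) -> bool:
    # MFA requires more than one *distinct* factor type; duplicates of the
    # same type (say, a PIN plus a password) still count as a single factor.
    return len(set(factors)) > 1

print(is_multi_factor([Factor.KNOW, Factor.KNOW]))  # False: both "something you know"
print(is_multi_factor([Factor.KNOW, Factor.HAVE]))  # True: two distinct factor types
```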
The private key is stored on the authenticator and is only available for use by the claimant who possesses and controls the authenticators. A verifier that has the subscriber’s public key (e.g., through a public key certificate) can use an authentication protocol to verify that the claimant is a subscriber who has possession and control of the associated private key contained in the authenticator. Symmetric keys are generally chosen at random, complex and long enough to thwart network-based guessing attacks, and stored in hardware or software that the subscriber controls. Passwords typically have fewer characters and less complexity than cryptographic keys resulting in increased vulnerabilities that require additional defenses to mitigate.\n\nPasswords used as activation factors for multi-factor authenticators are referred to as activation secrets. An activation secret is used to decrypt a stored key used for authentication or is compared against a locally held and stored verifier to provide access to the authentication key. In either of these cases, the activation secret remains within the authenticator and its associated user endpoint. An example of an activation secret would be the PIN used to activate a PIV card.\n\nBiometric characteristics are unique, personal attributes that can be used to verify the identity of a person who is physically present at the point of authentication. This includes, but is not limited to, facial features, fingerprints, and iris patterns. While biometric characteristics cannot be used for single-factor authentication, they can be used as an authentication factor for multi-factor authentication when used in combination with a physical authenticator (i.e., something you have).\n\nSome authentication methods used for in-person interactions do not apply directly to digital authentication. 
For example, a physical driver’s license is something you have and may be useful when authenticating to a human (e.g., a security guard), but it is not an authenticator for online services.\n\nSome commonly used authentication methods do not contain or comprise secrets and are therefore not acceptable for use under these guidelines. For example:\n\n\n Knowledge-based authentication, where the claimant is prompted to answer questions that are presumably known only by the claimant, does not constitute an acceptable secret for digital authentication.\n A biometric characteristic does not constitute a secret and cannot be used as a single-factor authenticator.\n\n\nAuthentication Process\n\nThe authentication process enables an RP to trust that a claimant is who they say they are. Some approaches are described in [SP800-63B], Authentication and Authenticator Management. The sample authentication process in Fig. 2 shows interactions between the RP, a claimant, and a verifier/CSP. The verifier is a functional role and is frequently implemented in combination with the CSP, the RP, or both (as shown in Fig. 4).\n\nFig. 2. Sample Authentication Process\n\n\n\nA successful authentication process demonstrates that the claimant has possession and control of one or more valid authenticators that are bound to the subscriber’s identity. In general, this is done using an authentication protocol that involves an interaction between the verifier and the claimant. The exact nature of the interaction is extremely important in determining the overall security of the system. 
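As a minimal illustration of such a verifier–claimant interaction (a sketch only, not a protocol prescribed by these guidelines), consider a challenge–response exchange based on a shared symmetric key; all function and variable names are illustrative:

```python
import hashlib
import hmac
import secrets

# Shared symmetric key, provisioned to the authenticator at enrollment
# (illustrative; real deployments generate and store keys in protected
# hardware or software under the subscriber's control).
key = secrets.token_bytes(32)

def verifier_challenge() -> bytes:
    # A fresh random nonce prevents replay of a previously captured response.
    return secrets.token_bytes(16)

def claimant_respond(key: bytes, challenge: bytes) -> bytes:
    # The authenticator proves possession of the key without revealing it.
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verifier_check(key: bytes, challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    # Constant-time comparison resists timing side channels.
    return hmac.compare_digest(expected, response)

challenge = verifier_challenge()
response = claimant_respond(key, challenge)
assert verifier_check(key, challenge, response)
assert not verifier_check(key, verifier_challenge(), response)
```

With an asymmetric key pair the same pattern applies, except that the claimant signs the challenge with the private key held in the authenticator and the verifier checks the signature with the subscriber's public key, so the verifier never holds the secret itself.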
Well-designed protocols can protect the integrity and confidentiality of communication between the claimant and the verifier both during and after the authentication and can help limit the damage done by an attacker masquerading as a legitimate verifier.\n\nAdditionally, mechanisms located at the verifier can mitigate online guessing attacks against lower entropy secrets (e.g., passwords and PINs) by limiting the rate at which an attacker can make authentication attempts, or otherwise delaying incorrect attempts. Generally, this is done by keeping track of and limiting the number of unsuccessful attempts, since the premise of an online guessing attack is that most attempts will fail.\n\nFederation and Assertions\n\nNormative requirements can be found in [SP800-63C], Federation and Assertions.\n\nSection III of OMB [M-19-17] Enabling Mission Delivery through Improved Identity, Credential, and Access Management directs agencies to support cross-government identity federation and interoperability. The term federation can be applied to several different approaches that involve the sharing of information between different trust domains. These approaches differ based on the kind of information that is being shared between the domains. 
These guidelines address the federation processes that allow for the conveyance of identity and authentication information based on trust agreements across a set of networked systems through federation assertions.\n\nThere are many benefits to using federated architectures including, but not limited to:\n\n\n Enhanced user experience (e.g., a subject can be identity proofed once but their subscriber account used at multiple RPs).\n Cost reduction to both the subscriber (e.g., reduction in authenticators) and the organization (e.g., reduction in information technology infrastructure and a streamlined architecture).\n Minimizing data in RPs that do not need to collect, store, or dispose of personal information.\n Minimizing data exposed to RPs by using pseudonymous identifiers and derived attribute values instead of copying account values to each application.\n Mission enablement, since organizations will need to focus fewer resources on complex identity management processes.\n\n\nWhile the federation process is generally the preferred approach to authentication when the RP and IdP are not administered together under a common security domain, federation can also be applied within a single security domain for a variety of benefits including centralized account management and technical integration.\n\nThe SP 800-63 guidelines are agnostic to the identity proofing, authentication, and federation architectures that an organization selects, and they allow organizations to deploy a digital identity scheme according to their own requirements. However, there are scenarios that an organization may encounter that make federation potentially more efficient and effective than establishing identity services that are local to the organization or individual applications. 
The following list details potential scenarios in which the organization may consider federation to be a viable option:\n\n\n Potential users already have an authenticator at or above the required AAL.\n Multiple types of authenticators are required to cover all possible user communities.\n An organization does not have the necessary infrastructure to support management of subscriber accounts (e.g., account recovery, authenticator issuance, help desk).\n There is a desire to allow primary authenticators to be added and upgraded over time without changing the RP’s implementation.\n There are different environments to be supported, since federation protocols are network-based and allow for implementation on a wide variety of platforms and languages.\n Potential users come from multiple communities, each with its own existing identity infrastructure.\n The organization needs the ability to centrally manage account lifecycles, including account revocation and the binding of new authenticators.\n\n\nAn organization may want to consider accepting federated identity attributes if any of the following apply:\n\n\n Pseudonymity is required, necessary, feasible, or important to stakeholders accessing the service.\n Access to the service requires a defined list of attributes.\n Access to the service requires at least one derived attribute value.\n The organization is not the authoritative source or issuing source for required attributes.\n Attributes are only required temporarily during use (e.g., to make an access decision), and the organization does not need to retain the data.\n\n\nExamples of Digital Identity Models\n\nThe entities and interactions that comprise the non-federated digital identity model are illustrated in Fig. 3. The general-purpose federated digital identity model is illustrated in Fig. 4, and a federated digital identity model with a subscriber-controlled wallet is illustrated in Fig. 5.\n\nFig. 3. 
Non-Federated Digital Identity Model Example\n\n\n\nFigure 3 shows an example of a common sequence of interactions in the non-federated model. Other sequences could also achieve the same functional requirements. One common sequence of interactions for identity proofing and enrollment activities is as follows:\n\n\n Step 1: An applicant applies to a CSP through an identity proofing and enrollment process. The CSP identity proofs that applicant.\n Step 2: Upon successful identity proofing, the applicant is enrolled in the identity service as a subscriber.\n \n A subscriber account and corresponding authenticators are established between the CSP and the subscriber. The CSP maintains the subscriber account, its status, and the enrollment data. The subscriber maintains their authenticators.\n \n \n\n\nSteps 3 through 5 may immediately follow steps 1 and 2 or they may be done at a later time. The usual sequence of interactions involved in using one or more authenticators to perform digital authentication in the non-federated model is as follows:\n\n\n Step 3: The claimant initiates an online interaction with the RP and the RP requests that the claimant authenticate.\n Step 4: The claimant proves possession and control of the authenticators to the verifier through an authentication process:\n \n The verifier interacts with the CSP to verify the binding of the claimant’s identity to their authenticators in the subscriber account and to optionally obtain additional subscriber attributes.\n The CSP or verifier functions of the service provider give information about the subscriber. The RP requests the attributes it requires from the CSP. The RP optionally uses this information to make authorization decisions.\n \n \n Step 5: An authenticated session is established between the subscriber and the RP.\n\n\nFig. 4. 
Federated Digital Identity Model Example\n\n\n\nFigure 4 shows an example of those same common interactions in a federated model.\n\n\n Step 1: An applicant applies to a CSP through an identity proofing and enrollment process. The CSP identity proofs that applicant.\n Step 2: Upon successful identity proofing, the applicant is enrolled in the identity service as a subscriber.\n \n A subscriber account and corresponding authenticators are established between the CSP and the subscriber.\n Unlike in Fig. 3, the IdP is provisioned either directly by the CSP or indirectly through access to attributes of the subscriber account. The CSP maintains the subscriber account, its status, and the enrollment data collected in accordance with the record retention and disposal requirements described in Sec. 3.1.1 of [SP800-63A]. The subscriber maintains their authenticators. The IdP maintains its view of the subscriber account, any federated identifiers assigned to the subscriber account, and authorizations to RPs.\n \n \n\n\nThe usual sequence of interactions involved in using one or more authenticators in the federated model to perform digital authentication is as follows:\n\n\n Step 3: The RP requests that the claimant authenticate. This triggers a request for federated authentication to the IdP.\n Step 4: The claimant proves possession and control of the authenticators to the verifier function of the IdP through an authentication process.\n \n Within the IdP, the verifier and CSP functions interact to verify the binding of the claimant’s authenticators with those bound to the claimed subscriber account and optionally to obtain additional subscriber attributes.\n \n \n Step 5: The RP and the IdP communicate through a federation protocol. The IdP provides an assertion and optionally additional attributes to the RP through a federation protocol. The RP verifies the assertion to establish confidence in the identity and attributes of a subscriber for an online service at the RP. 
RPs may use a subscriber’s federated identity (pseudonymous or non-pseudonymous), IAL, AAL, FAL, and other factors to make authorization decisions.\n Step 6: An authenticated session is established between the subscriber and the RP.\n\n\nIn the two cases described in Fig. 3 and Fig. 4, the verifier does not always need to communicate in real time with the CSP to complete the authentication activity (e.g., digital certificates can be used). Therefore, the line between the verifier and the CSP represents a logical link between the two entities. In some implementations, the verifier, RP, and CSP functions may be distributed and separated. However, if these functions reside on the same platform, the interactions between the functions are signals between applications or application modules that run on the same system rather than using network protocols.\n\nFig. 5. Federated Digital Identity Model With Subscriber-Controlled Wallet Example\n\n\n\nFigure 5 shows an example of the interactions in a federated digital identity model in which the subscriber controls a device with software (i.e., a digital wallet) that acts as the IdP. In the terminology of the “three-party model”, the CSP is the issuer, the IdP is the holder, and the RP is the verifier. In this model, it is common for the RP to establish a trust agreement with the CSP through the use of a federation authority as defined in [SP800-63C]. 
This arrangement allows the RP to accept assertions from the subscriber-controlled wallet without needing a direct trust relationship with the wallet.\n\n\n Step 1: An applicant applies to a CSP through an identity proofing and enrollment process.\n Step 2: Upon successful identity proofing, the applicant goes through an onboarding process and is enrolled in the identity service as a subscriber.\n Step 3: The subscriber-controlled wallet is onboarded by the CSP.\n \n The subscriber authenticates to the CSP’s onboarding function.\n The subscriber activates the subscriber-controlled wallet using an activation factor.\n The wallet sends a request to the CSP, including proof of a key held by the wallet.\n The CSP creates an attribute bundle that contains a reference for the key of the wallet and any additional attributes.\n \n \n\n\nThe usual sequence of interactions involved in providing an assertion to the RP from a subscriber-controlled wallet is as follows:\n\n\n Step 4: The RP requests that the claimant authenticate. This triggers a request for federated authentication to the wallet.\n Step 5: The claimant proves possession and control of the subscriber-controlled wallet.\n \n The subscriber activates the wallet using an activation factor.\n The wallet prepares an assertion including the attribute bundle provided by the CSP for the subscriber account.\n \n \n Step 6: The RP and the wallet communicate through a federation protocol. The wallet provides an assertion and optionally additional attributes to the RP through a federation protocol. The RP verifies the assertion to establish confidence in the identity and attributes of a subscriber for an online service at the RP. 
RPs may use a subscriber’s federated identity (pseudonymous or non-pseudonymous), IAL, AAL, FAL, and other factors to make authorization decisions.\n Step 7: An authenticated session is established between the subscriber and the RP.\n\n\n\n Note: Other protocols and specifications often refer to attribute bundles as credentials. These guidelines use the term credentials for a different concept. To avoid a conflict, the term attribute bundle is used within these guidelines. Normative requirements for attribute bundles can be found in Sec. 3.11.1 of [SP800-63C].\n\n"
} ,
{
"title" : "Digital Identity Risk Management",
"category" : "",
"tags" : "",
"url" : "/800-63-4/sp800-63/dirm/",
"date" : "2024-08-28 20:39:12 -0500",
"content" : "Digital Identity Risk Management\n\nThis section is normative.\n\nThis section provides details on the methodology for assessing digital identity risks associated with online services and the residual risks to users of the online service, communities impacted by the service, the service provider organization, and its mission and business partners. It offers guidance on selecting usable, equitable, and privacy-enhancing security and anti-fraud controls that mitigate those risks. Additionally, it emphasizes the importance of continuously evaluating the performance of the selected controls.\n\nThe Digital Identity Risk Management (DIRM) process focuses on the identification and management of risks according to two dimensions: (1) risks to the online service that might be addressed by an identity system; and (2) risks from the identity system to be implemented.\n\nThe first dimension of risk informs initial assurance level selections and seeks to identify the risks associated with a compromise of the online service that might be addressed by an identity system. 
For example:\n\n\n Identity proofing: What negative impacts would reasonably be expected if an imposter were to gain access to a service or receive a credential using the identity of a legitimate user (e.g., an attacker successfully impersonates someone)?\n Authentication: What negative impacts would reasonably be expected if a false claimant accessed an account that was not rightfully theirs (e.g., an attacker who compromises or steals an authenticator)?\n Federation: What negative impacts would reasonably be expected if the wrong subject successfully accessed an online service, system, or data (e.g., compromising or replaying an assertion)?\n\n\nAll three types of errors can result in the wrong subject successfully accessing an online service, system, or data.\n\nIf it is determined that there are risks associated with a compromise of the online service that could be addressed by an identity system, an initial assurance level is selected and the second dimension of risk is then considered. The second dimension of risk seeks to identify the risks posed by the identity system and informs the tailoring process. Tailoring provides a process to modify an initially assessed assurance level, implement compensating or supplemental controls, or modify selected controls based on ongoing detailed risk assessments.\n\nFor example, assuming that aspects of the identity system are not sufficiently privacy-enhancing, usable, or equitable, or are unable to address specific real-world threats:\n\n\n Identity proofing: What is the impact of not successfully identity proofing and enrolling a legitimate subject due to barriers faced by the subject throughout the process of identity proofing, including biases? What is the impact of falling victim to a breach of information that was excessively collected and retained to support identity proofing processes? 
What is the impact if the initial IAL does not completely address specific threats, threat actors, and fraud?\n Authentication: What is the impact of failing to authenticate the correct subject due to barriers faced by the subject in presenting their authenticator, including biases or usability issues? What is the impact if the initial AAL does not completely address targeted account takeover models or specific authenticator types fail to mitigate anticipated attacks?\n Federation: What is the impact of releasing subscriber attributes to the wrong online service or system?\n\n\nThe outcomes of the DIRM process depend on the role that an entity plays within the digital identity model.\n\n\n For relying parties, the intent of this process is to determine the assurance levels and any tailoring required to protect online services and the applications, transactions, and systems that comprise or are impacted by those services. This directly contributes to the selection, development, and procurement of CSP services. Federal RPs SHALL implement the DIRM process for all online services.\n For credential service providers and identity providers, the intent of this process is to design service offerings that meet the requirements of the defined assurance levels, continuously guard against compromises to the identity system, and meet the needs of RPs. Whenever a service offering deviates from normative guidance, those deviations must be clearly communicated to the RPs that utilize the service. All CSPs SHALL implement the DIRM process for the services they offer and SHALL make a Digital Identity Acceptance Statement (DIAS) for each offering available to all current or potential RPs. CSPs MAY base their assessment on anticipated or representative digital identity services they wish to support. 
In creating this risk assessment, CSPs SHOULD seek input from real-world RPs on their user populations and their anticipated context.\n\n\nThis process augments the risk management processes required by the Federal Information Security Modernization Act [FISMA]. The results of the DIRM impact assessment for the online service may be different from the FISMA impact level for the underlying application or system. Identity process failures may result in different levels of impact for various user groups. For example, the overall assessed FISMA impact level for a payment system may result in a ‘FISMA Moderate’ impact category due to sensitive financial data processed by the system. However, for individuals who are making guest payments where no persistent account is established, the authentication and proofing impact levels may be lower as associated data may not be retained or made accessible. Agency authorizing officials SHOULD require documentation demonstrating adherence to the DIRM process as a part of the Authority to Operate (ATO) for the underlying information system that supports an online service. Agency authorizing officials SHOULD require documentation from CSPs demonstrating adherence to the DIRM as part of procurement or ATO processes for integration with CSPs.\n\nThere are 5 steps in the DIRM process:\n\n\n Define the online service: As a starting point, the organization documents a description of the online service in terms of its functional scope, the user groups it is intended to serve, the types of online transactions available to each user group, and the underlying data that the online service processes through its interfaces. If the online service is one element of a broader business process, its role is documented, as are the impacts of any data collected and processed by the online service. 
Additionally, an organization needs to determine the entities that will be impacted by the online service and the broader business process of which it is a part. The outcome is a description of the online service, its users, and the entities that may be impacted by its functionality.\n Conduct initial impact assessment: In this step, organizations evaluate their user population and assess the impacts of a compromise of the online service that might be addressed by an identity system (i.e., identity proofing, authentication, or federation). Each function of the online service is assessed against a defined set of harms and impact categories. Each user group of the online service is considered separately based on the transactions available to that user group (i.e., the permissions that the group is granted relative to the data and functions of the online service). The outcome of this step is a documented set of impact categories and associated impact levels (i.e., Low, Moderate, or High) for each user group of the online service.\n Select initial assurance levels: In this step, the impact categories and impact levels are evaluated to determine the initial assurance levels to protect the online service from unauthorized access and fraud. Using the assurance levels, the organization identifies the baseline controls for the IAL, AAL, and FAL for each user group based on the requirements from companion volumes [SP800-63A], [SP800-63B], and [SP800-63C], respectively. The outcome of this step is an identified initial IAL, AAL, and FAL, as applicable, for each user group.\n Tailor and document assurance level determinations: In this step, detailed assessments are conducted or leveraged to determine the potential impact of the initially selected assurance levels and their associated controls on privacy, equity, usability, and resistance to the current threat environment. 
Tailoring may result in a modification of the initially assessed assurance level, the identification of compensating or supplemental controls, or both. All assessments and final decisions are documented and justified. The outcome is a DIAS (see Sec. 3.4.4) with a defined and implementable set of assurance levels and a final set of controls for the online service.\n Continuously evaluate and improve: In this step, information on the performance of the identity management approach is gathered and evaluated. This evaluation considers a diverse set of factors, including business impacts, effects on fraud rates, and impacts on user communities. This information is crucial in determining if the selected assurance level and controls meet mission, business, security, and — where applicable — program integrity needs. It also helps monitor for unintended harms that impact privacy and equitable access. Opportunities for improvement should also be considered by closely monitoring the evolving threat landscape and investigating new technologies and methodologies that can counter those threats or improve usability, equity, or privacy. The outcomes of this step are performance metrics, documented and transparent processes for evaluation and redress, and ongoing improvements to the identity management approach.\n\n\nFig. 6. High-level diagram of the Digital Identity Risk Management process flow\n\n\n\nFigure 6 illustrates the major actions and outcomes for each step of the DIRM process flow. While presented as a “stepwise” approach, there can be many points in the process that require divergence from the sequential order, including the need for iterative cycles between initial task execution and revisiting tasks. For example, the introduction of new regulations or requirements while an assessment is ongoing may require organizations to revisit a step in the process. 
Additionally, new functionality, changes in data usage, and changes to the threat environment may require an organization to revisit steps in the Digital Identity Risk Management process at any point, including potentially modifying the assurance level and/or the related controls of the online service.\n\nOrganizations SHOULD adapt and modify this overall approach to meet organizational processes, governance, and enterprise risk management practices. At a minimum, organizations SHALL execute and document each step, consult with a representative sample of the online service’s user population to inform the design and performance evaluation of the identity management approach, and complete and document the normative mandates and outcomes of each step regardless of operational approach or enabling tools.\n\nDefine the Online Service\nThe purpose of defining the online service is to establish a common understanding of the context and circumstances that influence the organization’s risk management decisions. The context-rich information ascertained during this step is intended to inform subsequent steps of the DIRM process. 
The role of the online service is contextualized as part of the broader business environment and associated processes, resulting in a documented description of the online service functionality, user groups and their expectations, data processed, and other pertinent details.\n\nRPs SHALL develop a description of the online service that includes, at minimum:\n\n\n Organizational mission and business objectives supported by the online service\n Mission and business partner dependencies associated with the online service\n Legal, regulatory, and contractual requirements, including privacy and civil liberties obligations that apply to the online service\n Functionality of the online service and the underlying data that it is expected to process\n User groups that need to have access to the online service as well as the types of online transactions and privileges available to each user group\n User expectations for the online service, including functionality, features, identity verification and authentication options, accessibility and language requirements, and culturally responsive communication alternatives\n The results of any pre-existing DIRM assessments (as an input) and the current state of any pre-existing identity technologies (i.e., proofing, authentication, or federation)\n The estimated availability, across all users served, of forms of identity evidence to support the identity proofing process for services that require identity proofing\n\n\nAdditionally, an organization needs to determine the entities that will be impacted by the online service and the broader business process of which it is a part. It is imperative to consider the unexpected and undesirable impacts on different entities, populations, or demographic groups that result from an unauthorized user gaining access to the online service due to a failure of the digital identity system. 
For example, if an attacker obtained unauthorized access to an application that controls a power plant, the actions taken by the bad actor could have devastating environmental impacts on the local populations that live near the facility as well as cause power outages for the localities served by the plant.\n\nIt is important to differentiate between user groups and impacted entities as described in this document. The online service will allow access to a set of users who may be partitioned into a few user groups based on the kind of functionality that is offered to that user group. For example, an income tax filing and review online service may have the following user groups: (1) citizens who need to check on the status of their personal tax returns; (2) tax preparers who file tax returns on behalf of their clients; and (3) system administrators who assign privileges to different groups of users or create new user groups as needed. In contrast, impacted entities include all populations impacted by the online service and its functionality. For example, an online service that allows remote access to control, operate, and monitor a water treatment facility may have the following types of impacted entities: (1) populations that drink the water from that water treatment facility; (2) technicians who control and operate the water treatment facility; (3) the organization that owns and operates the facility; and (4) auditors and other officials who provide oversight of the facility and its compliance with applicable regulations.\n\nAccordingly, impact assessments SHALL include individuals who use the online application as well as the organization itself. Additionally, organizations SHOULD identify other entities (e.g., mission partners, communities, and those identified in [SP800-30]) that need to be specifically included based on mission and business needs. 
At a minimum, agencies SHALL document all impacted entities when conducting their impact assessments.\n\nThe output of this step is a documented description of the online service including a list of entities that are impacted by the functionality provided by the online service. This information will serve as a basis and establish the context for effectively applying the impact assessments as detailed in the following sections.\n\nConduct Initial Impact Assessment\n\nThis step of the DIRM process addresses the first dimension of risk (i.e., risks to the online service) and seeks to identify the risks to the online service that might be addressed by an identity system.\n\nThe purpose of the initial impact assessment is to identify the potential adverse impacts of failures in identity proofing, authentication, and federation that are specific to an online service, yielding an initial set of assurance levels. RPs SHOULD consider historical data and results from user focus groups when performing this step.\n\nThe impact assessment SHALL include:\n\n\n Identifying a set of impact categories and the potential harms for each impact category,\n Identifying the levels of impact, and\n Assessing the level of impact for each user group.\n\n\nThe level of impact for each user group identified in Sec. 3.1 SHALL be considered separately based on the transactions available to that user group. Assessing the user groups separately allows organizations maximum flexibility in selecting and implementing an identity approach and assurance levels that are appropriate for each user group.\n\nThe output of this assessment is a defined impact level (i.e., Low, Moderate, or High) for each user group. This serves as the primary input to the initial assurance level selection. 
The effort focuses on defining and documenting the impact assessment to promote consistent application across an organization.\n\nIdentify Impact Categories and Potential Harms\n\nInitial assurance levels for online services SHALL be determined by assessing the potential impact of — at a minimum — each of the following categories:\n\n\n Degradation of mission delivery\n Damage to trust, standing or reputation\n Unauthorized access to information\n Financial loss or financial liability\n Loss of life or danger to human safety, human health, or environmental health\n\n\nOrganizations SHOULD include additional impact categories, as appropriate, based on their mission and business objectives. Each impact category SHALL be documented and consistently applied when implementing the DIRM process across different online services offered by the organization.\n\nHarms refer to any adverse effects that would be experienced by an entity. They provide a means to effectively understand the impact categories and how they may apply to specific entities impacted by the online service. For each impact category, agencies SHALL consider potential harms for each of the impacted entities identified in Sec. 
3.1.\n\nExamples of harms associated with each category include, but are not limited to:\n\n\n Degradation of mission delivery:\n \n Harms to individuals may include the inability to access government services or benefits for which they are eligible.\n Harms to the organization (including the organization offering the online service as well as organizations supported by the online service) may include an inability to perform current mission/business functions in a sufficiently timely manner, with sufficient confidence and/or correctness, or within planned resource constraints or an inability or limited ability to perform mission/business functions in the future.\n \n \n Damage to trust, standing or reputation:\n \n Harms to individuals may include damage to image or reputation as a result of impersonation.\n Harms to the organization may include damage to reputation resulting in damage to existing trust relationships, image, or reputation or the inability to forge future, potential trust relationships.\n \n \n Unauthorized access to information:\n \n Harms to individuals may include breach of PII or other sensitive information, which may result in secondary harms such as financial loss, loss of life, physical or psychological injury, impersonation, identity theft, or persistent inconvenience.\n Harms to the organization may include exfiltration, deletion, degradation, or exposure of intellectual property or unauthorized disclosure of other information assets such as classified materials or controlled unclassified information (CUI).\n \n \n Financial loss or liability:\n \n Harms to individuals may include debts incurred or assets lost as a result of fraud or other harm, damage to or loss of credit, actual or potential employment, or sources of income, loss of accessible affordable housing and/or other financial loss.\n Harms to the organization may include costs related to fraud or other criminal activity, loss of assets, devaluation, or loss of business.\n \n \n 
Loss of life or danger to human safety, human health, or environmental health:\n \n Harms to individuals may include death; damage to or loss of physical, mental, or emotional well-being; or impact to environmental health that could result in uninhabitability of the local environment and require some level of intervention to address potential or actual damage.\n Harms to the organization may include damage to or loss of the organization’s workforce or the impact of unsafe conditions that render the organization unable to operate or reduce its capacity to operate.\n \n \n\n\nThe outcome of this activity will be a list of impact categories and harms that will be used to assess impacts on entities identified in Sec. 3.1.\n\nIdentify Potential Impact Levels\n\nInitial assurance levels for digital transactions are determined by assessing the potential level of impact caused by a compromise of the online service that might be addressed by an identity system for each of the impact categories selected for consideration by the organization (from Sec. 3.2.1). Impact levels can be assigned using one of the following potential impact values:\n\n\n Low: Could be expected to have a limited adverse effect\n Moderate: Could be expected to have a serious adverse effect\n High: Could be expected to have a severe or catastrophic adverse effect\n\n\nIn this step, the impact of access by an unauthorized individual SHALL be considered for each user group, each impact category, and each of the impacted entities. Examples of potential impacts in each of the categories are provided below. However, to provide a more objective basis for impact level assignments, organizations SHOULD develop thresholds and examples for the impact levels for each impact category. 
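As a purely illustrative sketch of such thresholds, an organization might express quantitative cut-offs for the financial-loss category as follows. The dollar values and the helper function are assumptions for illustration, not values from this guidance:

```python
# Hypothetical documented thresholds for the "financial loss or financial
# liability" impact category; an organization would define its own values
# and apply them consistently across all of its DIRM assessments.
FINANCIAL_LOSS_THRESHOLDS = [
    (10_000_000, "High"),   # severe or catastrophic loss or liability
    (100_000, "Moderate"),  # serious loss or liability
    (0, "Low"),             # limited loss or liability
]

def financial_impact(estimated_loss_usd: float) -> str:
    """Map an estimated loss to an impact level using the documented
    thresholds, checking the highest floor first."""
    for floor, level in FINANCIAL_LOSS_THRESHOLDS:
        if estimated_loss_usd >= floor:
            return level
    return "None"

print(financial_impact(50_000))
print(financial_impact(250_000))
print(financial_impact(25_000_000))
```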
Where this is done, particularly with specifically defined quantifiable values, these thresholds SHALL be documented and used consistently in the DIRM assessments across an organization to allow for a common understanding of risks.\n\n\n Degradation of mission delivery:\n \n Low: Expected to result in limited mission capability degradation such that the organization is still able to perform its primary functions but with noticeably reduced effectiveness.\n Moderate: Expected to result in serious mission capability degradation such that the organization is still able to perform its primary functions but with significantly reduced effectiveness.\n High: Expected to result in severe or catastrophic mission capability degradation or loss over a duration such that the organization is unable to perform one or more of its primary functions.\n \n \n Damage to trust, standing or reputation:\n \n Low: Expected to result in limited, short-term inconvenience, distress, or embarrassment to any party.\n Moderate: Expected to result in serious short-term or limited long-term inconvenience, distress, or damage to the standing or reputation of any party.\n High: Expected to result in severe or serious long-term inconvenience, distress, or damage to the standing or reputation of any party; ordinarily reserved for situations with particularly severe effects or that potentially affect many individuals.\n \n \n Unauthorized access to information:\n \n Low: Expected to have a limited adverse effect on organizational operations, organizational assets, or individuals as defined in [FIPS199].\n Moderate: Expected to have a serious adverse effect on organizational operations, organizational assets, or individuals as defined in [FIPS199].\n High: Expected to have a severe or catastrophic adverse effect on organizational operations, organizational assets, or individuals as defined in [FIPS199].\n \n \n Financial loss or financial liability:\n \n Low: Expected to result in limited financial 
loss or liability to any party.\n Moderate: Expected to result in a serious financial loss or liability to any party.\n High: Expected to result in severe or catastrophic financial loss or liability to any party.\n \n \n Loss of life or danger to human safety, human health, or environmental health:\n \n Low: Expected to result in minor injury or an acute health issue that resolves itself and does not require medical attention, including mental health treatment; or an impact to environmental health that requires at most some limited intervention to prevent further or reverse existing damage.\n Moderate: Expected to result in moderate risk of minor injury or limited risk of injury that requires medical attention, including mental health treatment; an impact to environmental health that results in a period of uninhabitability and requires intervention to prevent further or reverse existing damage; or the compounding impacts of multiple low-impact events.\n High: Expected to result in serious injury, trauma, or death; impacts to environmental health that results in long-term or permanent uninhabitability and require significant intervention to prevent further or reverse existing damage, if possible; or the compounding impacts of multiple moderate impact events.\n \n \n\n\nThis guidance provides three impact levels. However, agencies MAY define more granular impact levels and develop their own methodologies for their initial impact assessment activities.\n\nImpact Analysis\n\nThe impact analysis considers the level of impact (i.e., Low, Moderate or High) of compromises of the online service that might be addressed by the identity system functions (i.e., identity proofing, authentication, and federation). The impact analysis considers the following dimensions:\n\n\n User groups Sec. 3.1\n Impacted entities Sec. 3.1\n Impact categories Sec. 3.2.1\n Impact levels Sec. 
3.2.2\n\n\nIf there is no harm or impact for a given impact category for any entity, the impact level can be marked as None.\n\nFor each user group, the impact analysis SHALL consider the level of impact for each impact category for each type of impacted entity. Because different sets of transactions are available to each user group, it is important to consider each user group separately for this analysis.\n\nFor example, for an online service that allows for the control, operation and monitoring of a water treatment facility, each group of users (e.g., technicians who control and operate the facility, auditors and monitoring officials, system administrators, etc.) is considered separately based on the transactions available to that user group through the online service. In other words, the impact analysis tries to determine if a bad actor obtained unauthorized access to the online service as a member of a user group and performed some nefarious actions and the level of impact (i.e., Low, Moderate or High) on various impacted entities (e.g., citizens who drink the water, the organization that owns the facility, auditors, monitoring officials, etc.) for each of the impact categories being considered.\n\nThe impact analysis SHALL be performed for each user group that has access to the online service. For each impact category, the impact level is estimated for each impacted entity as a result of a compromise of the online service caused by failures in the identity management functions.\n\nThe output of this impact analysis is a set of impact levels for each user group that SHALL be documented in a suitable format for further analysis in accordance with the next subsection below.\n\nDetermine Combined Impact Level for Each User Group\nThe impact assessment level results for each user group generated from the previous step are combined to establish a single impact level for that user group. 
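This combination can be sketched, assuming the high-water-mark option (taking the maximum level across all impacted entities and impact categories), with hypothetical values that reuse the water-treatment facility example:

```python
from enum import IntEnum

class Impact(IntEnum):
    NONE = 0
    LOW = 1
    MODERATE = 2
    HIGH = 3

# Hypothetical impact-analysis results for one user group (facility
# technicians): for each impacted entity, an impact level per category.
water_plant_technicians = {
    "residents_served": {
        "mission_delivery": Impact.MODERATE,
        "life_safety_health": Impact.HIGH,
    },
    "facility_owner": {
        "mission_delivery": Impact.HIGH,
        "financial_loss": Impact.MODERATE,
        "trust_standing_reputation": Impact.MODERATE,
    },
}

def combined_level(per_entity: dict) -> Impact:
    """High-water mark: the maximum level across all impacted entities and
    impact categories. A weighted average or other documented combinatorial
    rule could be substituted here."""
    return max(
        (level for categories in per_entity.values() for level in categories.values()),
        default=Impact.NONE,
    )

print(combined_level(water_plant_technicians).name)  # HIGH for this example
```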
This single impact level represents the risks to impacted entities that result from a compromise of identity proofing, authentication, and/or federation functions for that user group.\n\nOrganizations can apply a variety of methods for this combinatorial analysis to determine the effective impact level for each user group. Some options include:\n\n\n Using a high-water mark approach across the various impact categories and impacted entities to derive the effective impact level\n Assigning different weights to different impact categories and/or impacted entities and taking an average to derive the effective impact level\n Some other combinatorial logic that aligns with the organization’s mission and priorities\n\n\nOrganizations SHALL document the approach they use to combine their impact assessment into an overall impact score for each of their defined user groups and SHALL apply it consistently across all its online services. At the conclusion of the combinatorial analysis, organizations SHALL document the impact for each user group.\n\nThe outcome of this step is an effective impact level for each user group due to a compromise of the identity management system functions (i.e., identity proofing, authentication, federation).\n\nSelect Initial Assurance Levels and Baseline Controls\nThe initial impact analysis of the last step yields an effective impact level (i.e., Low, Moderate, or High) that serves as a primary input to the process of selecting the initial assurance levels for identity proofing, authentication, and federation for each user group.\n\nThe purpose of the initial assurance level is to identify baseline digital identity controls (including process and technology elements) for each identity management function, from the requirements and guidelines in the companion volumes [SP800-63A], [SP800-63B], and [SP800-63C].\n\nThe initial set of digital identity controls and processes selected will be assessed and tailored in Step 4 based on potential risks 
generated by the identity management system.\n\nAssurance Levels\n\nDepending on the functionality and deployed architecture of the online service, it may require the support of one or more of the identity management functions (i.e., identity proofing, authentication, and federation). The strength of these functions is described in terms of assurance levels. The RP SHALL identify the types of assurance levels that apply to their online service from the following:\n\n\n IAL: The robustness of the identity proofing process to determine the identity of an individual. The IAL is selected to mitigate risks that result from potential identity proofing failures.\n AAL: The robustness of the authentication process itself, and the binding between an authenticator and a specific individual’s identifier. The AAL is selected to mitigate risks that result from potential authentication failures.\n FAL: The robustness of the federation process used to communicate authentication and attribute information to an RP from an IdP. The FAL is selected to mitigate risks that result from potential federation failures.\n\n\nAssurance Level Descriptions\n\nA summary of each of the xALs is provided below. While high-level descriptions of the assurance levels are provided in this subsection, readers of this guidance are encouraged to refer to companion volumes [SP800-63A], [SP800-63B], and [SP800-63C] for normative guidelines and requirements for each assurance level.\n\nIdentity Assurance Level\n\n\n \n IAL1:\nSupports the real-world existence of the claimed identity. Core attributes are obtained from identity evidence or asserted by the applicant. 
All core attributes are validated against authoritative or credible sources and steps are taken to link the attributes to the person undergoing the identity proofing process.\n \n \n IAL2:\nIAL2 adds rigor by requiring the collection of additional evidence and a more rigorous process for validating the evidence and verifying the identity.\n \n \n IAL3:\n IAL3 adds the requirement for a trained CSP representative (i.e., proofing agent) to interact directly with the applicant as part of an on-site attended identity proofing session as well as the collection of at least one biometric.\n \n\n\nTable 1. IAL Summary\n\n\n \n \n IAL\n Control Objectives\n \n \n \n \n IAL1\n Limit highly scalable attacks; provide protection against synthetic identity. Provide protections against attacks using compromised PII.\n \n \n IAL2\n Limit scaled and targeted attacks. Provide protections against basic evidence falsification and evidence theft. Provide protections against basic social engineering.\n \n \n IAL3\n Limit sophisticated attacks. Provide protections against advanced evidence falsification, theft, and repudiation. Provide protection against advanced social engineering attacks.\n \n \n\n\nAuthentication Assurance Level\n\n\n \n AAL1:\nAAL1 provides a basic level of confidence that the claimant controls an authenticator bound to the subscriber account being authenticated. AAL1 requires only single-factor authentication using a wide range of available authentication technologies. However, it is recommended that online services assessed at AAL1 offer multi-factor authentication options. Successful authentication requires that the claimant prove possession and control of the authenticator.\n \n \n AAL2:\nAAL2 provides high confidence that the claimant controls one or more authenticators bound to the subscriber account being authenticated. Proof of possession and control of two distinct authentication factors is required. 
A phishing-resistant authentication option must be offered for online services assessed at AAL2.\n \n \n AAL3:\nAAL3 provides very high confidence that the claimant controls one or more authenticators bound to the subscriber account being authenticated. Authentication at AAL3 is based on the proof of possession of a key through the use of a public-key cryptographic protocol. AAL3 authentication requires a hardware-based authenticator with a non-exportable private key and a phishing-resistant authenticator; the same device may fulfill both requirements. In order to authenticate at AAL3, claimants are required to prove possession and control of two distinct authentication factors.\n \n\n\nTable 2. AAL Summary\n\n\n \n \n AAL\n Control Objectives\n \n \n \n \n AAL1\n Provide minimal protections against attacks. Deter password focused attacks.\n \n \n AAL2\n Support multifactor authentication. Offer phishing-resistant options.\n \n \n AAL3\n Provide phishing resistance and verifier compromise protections.\n \n \n\n\nFederation Assurance Level\n\n\n \n FAL1:\nFAL1 allows a subscriber to authenticate to the RP using an assertion from an IdP in a federation protocol. FAL1 provides assurance that the assertion came from a specific IdP and was intended for a specific RP.\n \n \n FAL2:\nFAL2 additionally requires that the trust agreement between the IdP and RP be established prior to the federation transaction, and that the RP have robust protections against injection of assertions from attackers.\n \n \n FAL3:\nFAL3 additionally requires the subscriber to authenticate directly to the RP with a bound authenticator and present the assertion from the IdP. Additionally, the IdP and RP establish their identities and cryptographic key material with each other through a highly trusted process that is often manual.\n \n\n\nTable 3. 
FAL Summary\n\n\n \n \n FAL\n Control Objectives\n \n \n \n \n FAL1\n Provide protections against forged assertions.\n \n \n FAL2\n Provide protections against forged assertions and injection attacks.\n \n \n FAL3\n Provide protection against IdP compromise.\n \n \n\n\nInitial Assurance Level Selection\n\nThe overall impact level for each user group is used as the basis for the selection of the initial assurance level and related technical and process controls for the digital identity functions for the organization’s online service under assessment. These initial assurance levels and control selections are primarily based on the impacts arising from failures within the digital identity functions that allow an unauthorized entity to gain access to the online service. The initial assurance levels and controls will be further assessed and tailored, as appropriate, in the next step of the DIRM process.\n\nOrganizations SHALL develop and document a process and governance model for selecting initial assurance levels and controls based on the potential impact of failures in the digital identity approach. This section provides guidance on the major elements to include in that process.\n\nWhile online service providers must assess and determine the xALs that are appropriate for protecting their applications, the selection of these assurance levels does not mean that the online service provider must implement the controls independently. Based on the identity model that the online service provider chooses to implement, some or all of the assurance levels may be implemented by an external entity such as a third-party CSP or IdP.\n\nSelecting Initial IAL\n\nBefore selecting an initial assurance level, RPs must determine if identity proofing is needed for the users of their online services. Identity proofing is not required if the online service does not require any personal information to execute digital transactions. 
If personal information is needed, the RP needs to determine if validated attributes are required or if self-asserted attributes are acceptable. The system may also be able to operate without identity proofing if the potential harms from accepting self-asserted attributes are insignificant. In such cases, the identity proofing processes described in [SP800-63A] are not applicable to the system.\n\nIf the online service does require identity proofing, an initial IAL is selected through a simple mapping process, as follows:\n\n\n Low impact: IAL1\n Moderate impact: IAL2\n High impact: IAL3\n\n\nThe organization SHALL document whether identity proofing is required for their application and, if it is, SHALL select an initial IAL for each user group based on the effective impact level determination from Sec. 3.2.4.\n\nThe IAL reflects the level of assurance that an applicant holds the claimed real-life identity. The initial selection assumes that higher potential impacts of failures in the identity proofing process should be mitigated by higher assurance processes.\n\nSelecting Initial AAL\n\nNot all online services require authentication. Online services that offer access to public information and do not utilize subscriber accounts do not necessarily need to implement authentication mechanisms. However, authentication is needed for online services that do offer access to personal information, protected information, or subscriber accounts. In addition to the impact assessments mandated by these guidelines, when making decisions regarding the application of authentication assurance levels and authentication mechanisms, it is important that organizations consider legal, regulatory, or policy requirements that govern online services. 
For example, [EO13681] states “that all organizations making personal data accessible to citizens through digital applications require the use of multiple factors of authentication,” which requires a minimum selection of AAL2 for applications meeting those criteria.\n\nIf the online service requires an authenticator to be implemented, an initial AAL is selected through a simple mapping process, as follows:\n\n\n Low impact: AAL1\n Moderate impact: AAL2\n High impact: AAL3\n\n\nThe organization SHALL document whether authentication is needed for their online service and, if it is, SHALL select an initial AAL for each user group based on the effective impact level determination from Sec. 3.2.4.\n\nThe AAL reflects the level of assurance that the claimant is the same individual to whom the credential or authenticator was issued. The initial selection assumes that higher potential impacts of failures in the authentication process should be mitigated by higher assurance processes.\n\nSelecting Initial FAL\n\nIdentity federation brings many benefits including a convenient user experience that avoids redundant, costly, and often time-consuming identity processes. The benefits of federation through a general-purpose IdP model or a subscriber-controlled wallet model are covered in Sec. 5 of [SP800-63C]. However, not all online services will be able to make use of federation, whether for risk-based reasons or due to legal or regulatory requirements. 
Consistent with [M-19-17], federal agencies that operate online services SHOULD implement federation as an option for user access.\n\nIf the online service implements identity federation, an initial FAL is selected through a simple mapping process, as follows:\n\n\n Low impact: FAL1\n Moderate impact: FAL2\n High impact: FAL3\n\n\nThe organization SHALL document whether federation will be used for their online service and, if it is, SHALL select an initial FAL for each user group based on the effective impact level determination from Sec. 3.2.4.\n\nThe FAL reflects the level of assurance in identity assertions that convey the results of authentication processes and relevant identity information to RP online services. The preliminary selection assumes that higher potential impacts of failures in federated identity architectures should be mitigated by higher assurance processes.\n\nIdentify Baseline Controls\n\nThe selection of the initial assurance levels for each of the applicable identity functions (i.e., IAL, AAL, and FAL) serves as the basis for the selection of the baseline digital identity controls from the guidelines in companion volumes [SP800-63A], [SP800-63B], and [SP800-63C]. As described in Sec. 3.4, the baseline controls include technology and process controls that will be assessed against additional potential impacts.\n\nThe output of this step SHALL include the relevant xALs and controls for each user group, as follows:\n\n\n Initial IAL and related technology and process controls from [SP800-63A]\n Initial AAL and related technology and process controls from [SP800-63B]\n Initial FAL and related technology and process controls from [SP800-63C]\n\n\nTailor and Document Assurance Levels\n\nThe second dimension of risk addressed by the Digital Identity Risk Management process focuses on risks from the identity management system. 
These risks inform the tailoring process, which seeks to identify the risks and unintended consequences that result from the initial selection of xALs and the related technical and process controls in Sec. 3.3.4.\n\nTailoring provides a process to modify an initially assessed assurance level and implement compensating or supplemental controls based on ongoing detailed risk assessments. It provides a pathway for flexibility and enables organizations to achieve risk management objectives that align with their specific context, users, and threat environment. This process focuses on assessing unintended risks; equity, privacy, and usability impacts; and specific environmental threats. It does not prioritize any specific risk area or outcomes for agencies. Making decisions that balance different types of risks to meet organizational outcomes remains the responsibility of organizations. Organizations SHOULD employ tailoring with the objective of aligning digital identity controls to their specific context, users, and threat environment.\n\nWithin the tailoring step, organizations SHALL focus on impacts to mission delivery due to the implementation of identity management controls that result in disproportionate impact on marginalized or historically underserved populations. Organizations SHALL consider not only the possibility of certain intended subjects failing to access the online service, but also the burdens, frustrations, and frictions experienced as a result of the identity management controls.\n\nAs a part of the tailoring process, organizations SHALL review the impact assessment documentation and practice statements from CSPs and IdPs that they use or intend to use. However, organizations SHALL also conduct their own analysis to ensure that the organization’s specific mission and the communities being served by the online service are given due consideration for tailoring purposes. 
As a result, the organization may require its chosen CSP to strengthen or provide optionality in the implementation of certain controls to address risks and unintended impacts to the organization’s mission and the communities served.\n\nTo promote interoperability and consistency across organizations, third-party CSPs SHOULD implement their (assessed or tailored) xALs consistent with the normative guidance in this document. However, these guidelines provide flexibility to allow organizations to tailor the initial xALs and related controls to meet specific mission needs, address unique risk appetites, and provide secure and accessible online services. In doing so, CSPs MAY offer and organizations MAY utilize tailored sets of controls that differ from the normative statements in this guidance.\n\nTherefore, organizations SHALL establish and document an xAL tailoring process. At a minimum, this process:\n\n\n SHALL follow a documented governance approach to allow for decision-making.\n SHALL document all decisions in the tailoring process, including the assessed xALs, modified xALs, and supplemental and compensating controls in the Digital Identity Acceptance Statement (see Sec. 3.4.4).\n SHALL justify and document all risk-based decisions or modifications to the initially assessed xALs in the Digital Identity Acceptance Statement (see Sec. 
3.4.4).\n SHOULD establish a cross-functional capability to support subject matter analysis of xAL selection impacts in the tailoring process (e.g., subject matter experts who can speak about risks and considerations related to privacy, usability, fraud and impersonation impacts, equity, and other germane areas).\n SHOULD be a continuous process that incorporates real-world operational data to evaluate the impacts of selected xAL controls.\n\n\nThe tailoring process promotes a structured means of balancing risks and impacts in the furtherance of protecting online services, systems, and data in a manner that enables mission success while supporting equity, privacy, and usability for individuals.\n\nAssess Privacy, Equity, Usability and Threat Resistance\n\nWhen selecting and tailoring assurance levels for specific online services, it is critical that insights and inputs to the process extend beyond the initial impact assessment in Sec. 3.2. When transitioning from the initial assurance level selection in Sec. 3.3.4 to the final xAL selection and implementation, organizations SHALL conduct detailed assessments of the controls defined for the initially selected xALs to identify potential impacts in the operational environment.\n\nAt a minimum, organizations SHALL assess the impacts and potential unintended consequences related to the following areas:\n\n\n Privacy – Identify unintended consequences to the privacy of individuals that will be subject to the controls at an assessed xAL and of individuals affected by organizational or third-party practices related to the establishment, management, or federation of a digital identity. Privacy assessments SHOULD leverage existing Privacy Threshold Assessments (PTAs) and Privacy Impact Assessments (PIAs) as inputs to the privacy assessment process. 
However, as the goal of the privacy assessment is to identify privacy risks that arise from the initial assurance level selection, additional assessments and evaluations that are specific to the baseline controls for the assurance levels may be required for the underlying information system.\n Equity – Determine whether implementation of the initial assurance levels may create, maintain, or exacerbate inequities across communities. Equity assessments SHALL evaluate impacts on the communities being served by considering factors such as: proficiency with and access to technology, the availability of end devices with required technical capabilities (e.g., cameras), shared computing or device scenarios, housing status, access to internet, internet speed, family income bracket, credit score, disability status, sex, skin tone, age, native language, English fluency, and education. The intent of this assessment is to mitigate potential impacts on marginalized and historically underserved groups and limit disproportionate impacts from the requirements of the identity management functions.\n Usability – Determine whether implementation of the initial assurance levels will result in challenges to the end-user experience. Usability assessments SHALL consider usability impacts that result from the identity management controls to ensure that they do not cause undue burdens, frustrations, or frictions for the communities served and that there are pathways to provide accessibility to users of all capabilities.\n Threat Resistance – Determine whether the defined assurance level and related controls will address specific threats to the online service based on the operational environment, its threat actors, and known tactics, techniques, and procedures (TTPs). Threat assessments SHALL consider specific and known threats, threat actors, and TTPs within the implementation environment for the identity management functions. 
For example, certain benefits programs may be more subject to familial threats or collusion. Supplemental controls MAY need to be implemented to address specific threats within communities served by the online service. Conversely, agencies MAY tailor their assessed xAL down or modify their baseline controls if their threat assessment indicates that a reduced threat posture is appropriate based on their environment.\n\n\nOrganizations SHOULD leverage consultation and feedback to ensure that the tailoring process addresses the constraints of the entities and communities served. Organizations MAY establish mechanisms through which civil society organizations that work with marginalized groups can provide input on the impacts felt or likely to be felt.\n\nAdditionally, organizations SHOULD conduct additional business-specific assessments as appropriate to fully represent mission- and domain-specific considerations not captured here. These assessments SHALL be extended to any compensating or supplemental controls as defined in Sec. 3.4.2 and Sec. 3.4.3.\n\nThe outcome of this step is a set of risk assessments for privacy, equity, usability, threat resistance, and other dimensions that informs the tailoring of the initial assurance levels and the selection of compensating and supplemental controls.\n\nIdentify Compensating Controls\n\nA compensating control is a management, operational, or technical control employed by an organization in lieu of a normative control in the defined xALs. A compensating control is intended to address, to the greatest degree practicable, the same risks as the baseline control it replaces.\n\nOrganizations MAY choose to implement a compensating control when they are unable to implement a baseline control or when a risk assessment indicates that a compensating control sufficiently mitigates risk in alignment with organizational risk tolerance. 
This control MAY be a modification to the normative statements as defined in these guidelines, but MAY also be applied elsewhere in an application, digital transaction, or service lifecycle. For example:\n\n\n A federal agency could choose to use a federal background investigation and checks, as referenced by Personal Identity Verification [FIPS201], to compensate for the identity evidence validation with authoritative sources requirement under these guidelines.\n An organization could choose to implement stricter auditing and transactional review processes on a payment application where verification processes using weaker forms of identity evidence were accepted due to the lack of required evidence in the end-user population.\n\n\nWhere compensating controls are implemented, organizations SHALL document the compensating control, the rationale for the deviation, comparability of the chosen alternative, and resulting residual risk (if any). CSPs and IdPs who implement compensating controls SHALL communicate this information to all potential RPs prior to integration to allow the RP to assess and determine the acceptability of the compensating controls for their use cases.\n\nThe process of tailoring allows agencies and service providers to make risk-based decisions regarding how they implement their xALs and related controls. It also provides a mechanism for documenting and communicating decisions through the Digital Identity Acceptance Statement described in Sec. 3.4.4.\n\nIdentify Supplemental Controls\n\nSupplemental controls are those that may be added to further strengthen the baseline controls specified for the organization’s selected assurance levels. Organizations SHOULD identify and implement supplemental controls to address specific threats in the operational environment that may not be addressed by the baseline controls. 
For example:\n\n\n To complete the proofing process, an organization could choose to verify an end user against additional pieces of identity evidence, beyond what is required by the assurance level, due to a high prevalence of fraudulent attempts.\n An organization could restrict users to only phishing-resistant authentication at AAL2.\n An organization could choose to implement risk-scoring analytics, coupled with re-proofing mechanisms, to confirm a user’s identity when their access attempts exhibit certain risk factors.\n\n\nAny supplemental controls SHALL be assessed for impacts based on the same factors used to tailor the organization’s assurance level and SHALL be documented.\n\nDigital Identity Acceptance Statement (DIAS)\n\nOrganizations SHALL develop a Digital Identity Acceptance Statement (DIAS) to document the results of the Digital Identity Risk Management process for each online service managed by the organization. A CSP/IdP SHALL make their DIAS and practice statements available to RPs. RPs who intend to use a particular CSP/IdP SHALL review the latter’s DIAS and practice statements and incorporate relevant information into the organization’s DIAS for each online service.\n\nThe DIAS SHALL include, at a minimum:\n\n\n Initial impact assessment results,\n Initially assessed xALs,\n Tailored xAL and rationale, if the tailored xAL differs from the initially assessed xAL,\n All compensating controls with their comparability or residual risk, and\n All supplemental controls.\n\n\nFederal agencies SHOULD include this information in the information system authorization package described in [NISTRMF].\n\nContinuously Evaluate and Improve\n\nThreat actors adapt; user capabilities, expectations, and needs shift; seasonal surges occur; and missions evolve. As such, risk assessments and identity solutions must be continuously improved. 
In addition to keeping pace with the threat and technology environment, continuous improvement is a critical tool for illustrating programmatic gaps that — if unaddressed — may hinder the implementation of identity management systems in a manner that balances risk management objectives. For instance, an organization may determine that a portion of the target population intended to be served by the online service does not have access to affordable high-speed internet services needed to support remote identity proofing. The organization could address this gap with a program that implements local proofing capabilities within the community or by offering appointments with proofing agents who will meet the individual at an address that is more accessible and convenient, such as their local community center, closest post office, an affiliated business partner facility, or the individual’s home.\n\nTo address the shifting environment in which they operate and more rapidly address service capability gaps, organizations SHALL implement a continuous evaluation and improvement program that leverages input from end users who have interacted with the identity management system as well as performance metrics for the online service. This program SHALL be documented, including the metrics that are collected, the sources of data required to enable performance evaluation, and the processes in place for taking timely actions based on the continuous improvement process. This program and its effectiveness SHOULD be assessed on a defined basis to ensure that outcomes are being achieved and that programs are addressing issues in a timely manner.\n\nAdditionally, organizations SHALL monitor the evolving threat landscape to stay informed of the latest threats and fraud tactics. 
Organizations SHALL regularly assess the effectiveness of current security measures and fraud detection capabilities against the latest threats and fraud tactics.\n\nEvaluation Inputs\nTo fully understand the performance of their identity system, organizations will need to identify critical inputs to their evaluation process. At a minimum, these SHALL include:\n\n\n Integrated CSP, IdP, and authenticator functions as well as validation, verification, and fraud management systems as appropriate.\n Customer feedback mechanisms such as complaint processes, help-desk statistics, and other user feedback (e.g., surveys, interviews, or focus groups).\n Threat analysis, threat reporting, and threat intelligence feeds as available to the organization.\n Fraud trends, fraud investigation results, and fraud metrics as available to the organization.\n The results of ongoing equity assessments, privacy assessments, and usability assessments.\n\n\nOrganizations SHALL document their metrics, reporting requirements, and data inputs for any CSP, IdP, or other integrated identity services to ensure that expectations are appropriately communicated to partners and vendors.\n\nPerformance Metrics\nThe exact metrics available to organizations will vary based on the technologies, architectures, and deployment patterns they follow. Additionally, what is available and what is useful may vary over time. Therefore, these guidelines do not attempt to define a comprehensive set of metrics for all scenarios. Table 4 provides a set of recommended metrics that organizations SHOULD capture as part of their continuous evaluation program. However, organizations are not constrained by this table and SHOULD implement metrics that are not defined here based on their specific systems, technology, and program needs. In Table 4, all references to unique users include both legitimate users and imposters.\n\nTable 4. 
Performance Metrics\n\n\n \n \n Title\n Description\n Type\n \n \n \n \n Pass Rate (Overall)\n Percentage of unique users who successfully proof.\n Proofing\n \n \n Pass Rate (Per Proofing Type)\n Percentage of unique users who successfully proof for each offered type (i.e., Remote Unattended, Remote Attended, Onsite Attended, Onsite Unattended).\n Proofing\n \n \n Fail Rate (Overall)\n Percentage of unique users who start the identity proofing process but are unable to successfully complete all the steps.\n Proofing\n \n \n Estimated Adjusted Fail Rate\n Percentage adjusted to account for digital transactions that are terminated based on suspected fraud.\n Proofing\n \n \n Fail Rate (Per Proofing Type)\n Percentage of unique users who do not complete proofing due to a process failure for each offered type (i.e., Remote Unattended, Remote Attended, Onsite Attended, Onsite Unattended).\n Proofing\n \n \n Abandonment Rate (Overall)\n Percentage of unique users who start the identity proofing process, but do not complete it without failing a process.\n Proofing\n \n \n Abandonment Rate (Per Proofing Type)\n Percentage of unique users who start a specific type of identity proofing process, but do not complete it without failing a process.\n Proofing\n \n \n Failure Rates (Per Proofing Process Step)\n Percentage of unique users who are unsuccessful at completing each identity proofing step in a CSP process.\n Proofing\n \n \n Completion Times (Per Proofing Type)\n Average time that it takes a user to complete each defined proofing type offered as part of an identity service.\n Proofing\n \n \n Authenticator Type Usage\n Percentage of subscribers who have an active authenticator by each type available.\n Authentication\n \n \n Authentication Failures\n Percentage of authentication events that fail (not to include attempts that are successful after re-entry of an authenticator output).\n Authentication\n \n \n Account Recovery Attempts\n The number of account or 
authenticator recovery processes initiated by subscribers.\n Authentication\n \n \n Confirmed Fraud\n Percentage of digital transactions that are confirmed to be fraudulent through investigation or self-reporting.\n Fraud\n \n \n Suspected Fraud\n Percentage of digital transactions that are suspected of being fraudulent.\n Fraud\n \n \n Reported Fraud\n Percentage of digital transactions reported to be fraudulent by users.\n Fraud\n \n \n Fraud (Per Proofing Type)\n Number of digital transactions that are suspected, confirmed, and reported by each available type of proofing.\n Fraud\n \n \n Fraud (Per Authentication Type)\n Number of digital transactions suspected, confirmed, and reported by each available type of authentication.\n Fraud\n \n \n Help Desk Calls\n Number of calls received by the CSP or identity service.\n Customer Support\n \n \n Help Desk Calls (Per Type)\n Number of calls received related to each offered service (e.g., proofing failures, authenticator resets, complaints).\n Customer Support\n \n \n Help Desk Resolution Times\n Average length of time it takes to resolve a complaint or help desk ticket.\n Customer Support\n \n \n Customer Satisfaction Surveys\n The results of customer feedback surveys conducted by CSPs, RPs, or both.\n User Experience\n \n \n Redress Requests\n The number of redress requests received related to the identity management system.\n User Experience\n \n \n Redress Resolution Times\n The average time it takes to resolve redress requests related to the identity management system.\n User Experience\n \n \n\n\nThe data used to generate continuous evaluation metrics may not always reside with the identity program or the organizational entity responsible for identity management systems. The intent of these metrics is not to establish redundant processes but to integrate with existing data sources whenever possible to collect information that is critical to identity program evaluation. 
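The rate metrics defined in Table 4 reduce to simple computations over per-user proofing outcomes. The following is a minimal sketch, not part of these guidelines: the `ProofingEvent` record and its field names are hypothetical, and it assumes one interpretation of the Estimated Adjusted Fail Rate in which transactions terminated for suspected fraud are excluded from the denominator.

```python
from dataclasses import dataclass

@dataclass
class ProofingEvent:
    """Hypothetical record of one user's proofing attempt (illustrative only)."""
    user_id: str
    proofing_type: str  # e.g., "Remote Unattended"
    outcome: str        # "pass", "fail", "abandon", or "fraud_terminated"

def rates(events):
    """Compute overall pass, fail, abandonment, and estimated adjusted fail rates
    over unique users, per the Table 4 definitions (one assumed reading)."""
    latest = {e.user_id: e for e in events}  # keep one outcome per unique user
    total = len(latest)
    if total == 0:
        return (0.0, 0.0, 0.0, 0.0)
    count = lambda o: sum(1 for e in latest.values() if e.outcome == o)
    pass_rate = count("pass") / total
    fail_rate = count("fail") / total
    abandonment_rate = count("abandon") / total
    # Assumed adjustment: drop fraud-terminated transactions from the denominator.
    non_fraud = total - count("fraud_terminated")
    adjusted_fail_rate = count("fail") / non_fraud if non_fraud else 0.0
    return (pass_rate, fail_rate, abandonment_rate, adjusted_fail_rate)

events = [
    ProofingEvent("a", "Remote Unattended", "pass"),
    ProofingEvent("b", "Remote Unattended", "fail"),
    ProofingEvent("c", "Onsite Attended", "abandon"),
    ProofingEvent("d", "Remote Unattended", "fraud_terminated"),
]
print(rates(events))
```

The same grouping approach extends to the per-proofing-type variants by first partitioning `events` on `proofing_type`.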
For example, customer service representative (CSR) teams may already have substantial information on customer requests, complaints, or concerns. Identity management systems would be expected to coordinate with these teams to acquire the information needed to discern identity management system-related complaints or issues.\n\nMeasurement in Support of Equity Assessments and Outcomes\n\nA primary purpose of continuous improvement is to improve equity and accessibility outcomes for different user populations. As a result, the metrics collected by organizations SHOULD be further evaluated to provide insights into the performance of their identity management systems for their supported communities and demographics. Where possible, these efforts SHOULD avoid the collection of additional personal information and instead use informed analysis of proxy data to help provide indicators of potential disparities. This can include comparing and filtering the metrics to identify deviations in performance across different user populations based on other available data such as zip code, geographic region, age, or sex.\n\nOrganizations are encouraged to consult the OMB Report A Vision for Equitable Data: Recommendations from the Equitable Data Working Group [EO13985-vision] for guidance on incorporating performance metrics into equity assessments across demographic groups and generating disaggregated statistical estimates to assess equitable performance outcomes.\n\nRedress\n\nAn important part of designing services that support a wide range of populations is the inclusion of processes to adjudicate issues and provide redress2 as warranted. Service failures, disputes, and other issues tend to arise as part of normal operations, and their impact can vary broadly, from minor inconveniences to major disruptions or damage. Barriers to access, as well as cybersecurity incidents and data breaches, have real-world consequences for affected individuals. 
Furthermore, the same issue experienced by one person or community as an inconvenience can have disproportionately damaging impacts on other individuals and communities, particularly those that are currently experiencing other harms or barriers. Left unchecked, these issues can result in harms that exacerbate existing inequities and allow systemic cycles of exclusion to continue.\n\nTo enable equitable access to critical services while deterring identity-related fraud and cybersecurity threats, it is essential for organizations to plan for potential issues and to design redress approaches that aim to be fair, transparent, easy for legitimate claimants to navigate, and resistant to exploitation attempts.\n\nUnderstanding when and how harms might be occurring is a critical first step for organizations to take informed action. Continuous evaluation and improvement programs can play a key role in identifying instances and patterns of potential harm. Moreover, there may be business processes in place outside of those established to support identity management that can be leveraged as part of a comprehensive approach to issue adjudication and redress. Beyond these activities, additional practices can be implemented to ensure that users of identity management systems are able to voice their concerns and have a path to redress. 
Requirements for these practices include:\n\n\n RPs and CSPs SHALL enable people to convey grievances and seek redress through an issue handling process that is documented, accessible, trackable, and usable by all people, and whose instructions are easy to find on a public-facing website.\n RPs and CSPs SHALL institute a governance model, including documented roles and responsibilities, for implementing this issue handling process.\n The issue handling process SHALL be implemented as a dedicated function that includes:\n \n Procedures for the impartial review of evidence pertinent to issues;\n Procedures for requesting and collecting additional evidence that informs the issues; and\n Procedures to expeditiously resolve issues and determine corrective action.\n \n \n RPs and CSPs SHALL make human support personnel available to intervene and override issue adjudication outputs generated by algorithmic support mechanisms, such as chatbots.\n RPs and CSPs SHALL educate support personnel on issue handling procedures for the digital identity management system, the avenues for redress, and the alternatives available to gain access to services.\n RPs and CSPs SHALL implement a process for personnel and technologies that provide support functions to report major barriers that end users face and commonly expressed grievances. This process SHALL enable tracing (e.g., who/what is reported) and tracking (e.g., progress/state of action taken).\n RPs and CSPs SHALL incorporate findings derived from the issue handling process into continuous evaluation and improvement activities.\n\n\nOrganizations are encouraged to consider these and other emerging redress practices. 
Prior to adopting any new redress practice, including supporting technology, organizations SHOULD test the practice with target populations to avoid the introduction of unintended consequences, particularly those that may counteract or contradict the goals associated with redress.\n\nCybersecurity, Fraud, and Identity Program Integrity\n\nIdentity solutions should not operate in a vacuum. Close coordination of identity functions with teams that are responsible for cybersecurity, privacy, threat intelligence, fraud detection, and program integrity can enable a more complete protection of business capabilities, while constantly improving identity solution capabilities. For example, payment fraud data collected by program integrity teams could provide indicators of compromised subscriber accounts and potential weaknesses in identity proofing implementations. Similarly, threat intelligence teams may learn of new TTPs that could impact identity proofing, authentication, and federation processes. Organizations SHALL establish consistent mechanisms for the exchange of information between critical internal security and fraud stakeholders. Organizations SHOULD do the same for external stakeholders and identity services that are part of the protection plan for their online services.\n\nWhen supporting identity service providers (e.g., CSPs) are external to an organization, the exchange of data related to security, fraud, and other RP functions may be complicated by regulation or policy. However, establishing the necessary mechanisms and guidelines to enable effective information-sharing SHOULD be considered in contractual and legal mechanisms. All data collected, transmitted, or shared SHALL be minimized and subject to a detailed privacy and legal assessment by the generating entity.\n\nThis section is meant to address coordination and integration with various organizational functional teams to achieve better outcomes for the identity functions. 
Ideally, such coordination is performed throughout the risk management process and operations lifecycle. Companion volumes [SP800-63A], [SP800-63B], and [SP800-63C] provide specific fraud mitigation requirements related to each of the identity functions.\n\nArtificial Intelligence (AI) and Machine Learning (ML) in Identity Systems\n\nIdentity solutions have used and will continue to use AI and ML for multiple purposes, such as improving the performance of biometric matching systems, document authentication, detecting fraud, and even assisting users (e.g., chatbots). The potential applications of AI/ML are extensive. They also introduce distinct risks and potential issues, including disparate outcomes, biased outputs, and the exacerbation of existing inequities and access issues.\n\nThe following requirements apply to all uses of AI and ML regardless of how they are used in identity systems:\n\n\n All uses of AI and ML SHALL be documented and communicated to organizations that rely on these systems. 
The use of integrated technologies that leverage AI and ML by CSPs, IdPs, or verifiers SHALL be disclosed to all RPs that make access decisions based on information from these systems.\n All organizations that use AI and ML SHALL provide information to any entities that use their technology on the methods and techniques used for training their models, a description of the data sets used in training, information on the frequency of model updates, and the results of all testing completed on their algorithms.\n All organizations that use AI and ML systems or rely on services that use these systems SHALL implement the NIST AI Risk Management Framework ([NISTAIRMF]) to evaluate the risks that may be introduced by the use of AI and ML.\n \n All organizations that use AI and ML SHALL consult [SP1270], Towards a Standard for Identifying and Managing Bias in Artificial Intelligence.\n \n \n\n\nNIST continues to advance efforts to promote safe and trustworthy AI implementations through a number of venues. In particular, the U.S. AI Safety Institute, housed at NIST [US-AI-Safety-Inst], is creating a portfolio of safety-focused resources, guidance, and tools that can improve how organizations assess, deploy, and manage their AI systems. Organizations are encouraged to follow the U.S. AI Safety Institute’s efforts and make use of their resources.\n\n \n \n Further information on practice statements and their contents can be found in Section 3.1 of SP800-63A. ↩\n \n \n Redress generally refers to a remedy that is made after harm occurs. ↩\n \n \n\n"
} ,
{
"title" : "List of Symbols, Abbreviations, and Acronyms",
"category" : "",
"tags" : "",
"url" : "/800-63-4/sp800-63/abbreviations/",
"date" : "2024-08-28 20:39:12 -0500",
"content" : "List of Symbols, Abbreviations, and Acronyms\n\n\n 1:1 Comparison\n One-to-One Comparison\n ABAC\n Attribute-Based Access Control\n AAL\n Authentication Assurance Level\n CAPTCHA\n Completely Automated Public Turing test to tell Computers and Humans Apart\n CSP\n Credential Service Provider\n CSRF\n Cross-Site Request Forgery\n XSS\n Cross-Site Scripting\n DNS\n Domain Name System\n FACT Act\n Fair and Accurate Credit Transaction Act of 2003\n FAL\n Federation Assurance Level\n FEDRAMP\n Federal Risk and Authorization Management Program\n FMR\n False Match Rate\n FNMR\n False Non-Match Rate\n IAL\n Identity Assurance Level\n IdP\n Identity Provider\n JOSE\n JSON Object Signing and Encryption\n JWT\n JSON Web Token\n KBA\n Knowledge-Based Authentication\n KBV\n Knowledge-Based Verification\n KDC\n Key Distribution Center\n MAC\n Message Authentication Code\n MFA\n Multi-Factor Authentication\n NARA\n National Archives and Records Administration\n OTP\n One-Time Password\n PAD\n Presentation Attack Detection\n PIA\n Privacy Impact Assessment\n PII\n Personally Identifiable Information\n PIN\n Personal Identification Number\n PKI\n Public Key Infrastructure\n PSTN\n Public Switched Telephone Network\n RMF\n Risk Management Framework\n RP\n Relying Party\n SA&A\n Security Authorization & Accreditation\n SAML\n Security Assertion Markup Language\n SAOP\n Senior Agency Official for Privacy\n SSL\n Secure Sockets Layer\n SSO\n Single Sign-On\n SMS\n Short Message Service\n SORN\n System of Records Notice\n TEE\n Trusted Execution Environment\n TLS\n Transport Layer Security\n TPM\n Trusted Platform Module\n TTP\n Tactics, Techniques, and Procedures\n VOIP\n Voice-Over-IP\n XSS\n Cross-Site Scripting\n\n"
} ,
{
"title" : "SP 800-63",
"category" : "",
"tags" : "",
"url" : "/800-63-4/sp800-63/abstract/",
"date" : "2024-08-28 20:39:12 -0500",
"content" : "\nABSTRACT\n\nThese guidelines cover identity proofing and authentication of users (such as employees, contractors, or private individuals) interacting with government information systems over networks. They define technical requirements in each of the areas of identity proofing, registration, authenticators, management processes, authentication protocols, federation, and related assertions. They also offer technical recommendations and other informative text intended as helpful suggestions. The guidelines are not intended to constrain the development or use of standards outside of this purpose. This publication supersedes NIST Special Publication (SP) 800-63-3.\n\nKeywords\n\nauthentication; authentication assurance; authenticator; assertions; credential service provider; digital authentication; digital credentials; identity proofing; federation; passwords; PKI.\n"
} ,
{
"title" : "Change Log",
"category" : "",
"tags" : "",
"url" : "/800-63-4/sp800-63/changelog/",
"date" : "2024-08-28 20:39:12 -0500",
"content" : "Change Log\n\nSP 800-63-1\n\nNIST SP 800-63-1 updated NIST SP 800-63 to reflect current authenticator (then referred to as “token”) technologies and restructured it to provide a better understanding of the digital identity architectural model used here. Additional (minimum) technical requirements were specified for the CSP, protocols used to transport authentication information, and assertions if implemented within the digital identity model.\n\nSP 800-63-2\n\nNIST SP 800-63-2 was a limited update of SP 800-63-1 and substantive changes were made only in Sec. 5, Registration and Issuance Processes. The substantive changes in the revised draft were intended to facilitate the use of professional credentials in the identity proofing process, and to reduce the need to send postal mail to an address of record to issue credentials for level 3 remote registration. Other changes to Sec. 5 were minor explanations and clarifications.\n\nSP 800-63-3\n\nNIST SP 800-63-3 is a substantial update and restructuring of SP 800-63-2. SP 800-63-3 introduces individual components of digital authentication assurance — AAL, IAL, and FAL — to support the growing need for independent treatment of authentication strength and confidence in an individual’s claimed identity (e.g., in strong pseudonymous authentication). A risk assessment methodology and its application to IAL, AAL, and FAL has been included in this guideline. 
It also moves the whole of digital identity guidance covered under SP 800-63 from a single document describing authentication to a suite of four documents (to separately address the individual components mentioned above) of which SP 800-63-3 is the top-level document.\n\nOther areas updated in 800-63-3 include:\n\n\n Renamed to Digital Identity Guidelines to properly represent the scope includes identity proofing and federation, and to support expanding the scope to include device identity, or machine-to-machine authentication in future revisions.\n Changed terminology, including the use of authenticator in place of token to avoid conflicting use of the word token in assertion technologies.\n Updated authentication and assertion requirements to reflect advances in both security technology and threats.\n Added requirements on the storage of long-term secrets by verifiers.\n Restructured identity proofing model.\n Updated requirements regarding remote identity proofing.\n Clarified the use of independent channels and devices as “something you have”.\n Removed pre-registered knowledge tokens (authenticators), with the recognition that they are special cases of (often very weak) passwords.\n Added requirements regarding account recovery in the event of loss or theft of an authenticator.\n Removed email as a valid channel for out-of-band authenticators.\n Expanded discussion of reauthentication and session management.\n Expanded discussion of identity federation; restructuring of assertions in the context of federation.\n\n\nSP 800-63-4\n\nNIST SP 800-63-4 has substantial updates and re-organization from SP 800-63-3. 
Updates to 800-63-4 include:\n\n\n Expanded security and privacy considerations and added equity and usability considerations.\n Updated digital identity models and added a user-controlled wallet federation model that addresses the increased attention and adoption of digital wallets and attribute bundles.\n Expanded digital identity risk management process to include definition of the protected online services, user groups, and impacted entities.\n A more descriptive introduction to establish the context of the DIRM process, the two dimensions of risk it addresses, and the intended outcomes. This context-setting step includes defining and understanding the online service that the organization is offering and intending to protect with identity systems.\n Updated digital identity risk management process for additional assessments for tailoring initial baseline control selections.\n Added performance metrics for the continuous evaluation of digital identity systems.\n Added a new subsection on redress processes and requirements.\n Added a new Artificial Intelligence subsection to address the use of Artificial Intelligence in digital identity services.\n\n"
} ,
{
"title" : "Glossary",
"category" : "",
"tags" : "",
"url" : "/800-63-4/sp800-63/glossary/",
"date" : "2024-08-28 20:39:12 -0500",
"content" : "Glossary\n\nThis section is informative.\n\nA wide variety of terms are used in the realm of digital identity. While many definitions are consistent with earlier versions of SP 800-63, some have changed in this revision. Many of these terms lack a single, consistent definition, warranting careful attention to how the terms are defined here.\n\n\n account linking\n The association of multiple federated identifiers with a single RP subscriber account, or the management of those associations.\n account recovery\n The ability to regain ownership of a subscriber account and its associated information and privileges.\n account resolution\n The association of an RP subscriber account with information already held by the RP prior to the federation transaction and outside of a trust agreement.\n activation\n The process of inputting an activation factor into a multi-factor authenticator to enable its use for authentication.\n activation factor\n An additional authentication factor that is used to enable successful authentication with a multi-factor authenticator.\n activation secret\n A password that is used locally as an activation factor for a multi-factor authenticator.\n allowlist\n A documented list of specific elements that are allowed, per policy decision. In federation contexts, this is most commonly used to refer to the list of RPs allowed to connect to an IdP without subscriber intervention. 
This concept has historically been known as a whitelist.\n applicant\n A subject undergoing the processes of identity proofing and enrollment.\n applicant reference\n A representative of the applicant who can vouch for the identity of the applicant, specific attributes related to the applicant, or conditions relative to the context of the individual (e.g., emergency status, homelessness).\n approved cryptography\n An encryption algorithm, hash function, random bit generator, or similar technique that is Federal Information Processing Standard (FIPS)-approved or NIST-recommended. Approved algorithms and techniques are either specified or adopted in a FIPS or NIST recommendation.\n assertion\n A statement from an IdP to an RP that contains information about an authentication event for a subscriber. Assertions can also contain identity attributes for the subscriber.\n assertion reference\n A data object, created in conjunction with an assertion, that is used by the RP to retrieve an assertion over an authenticated protected channel.\n assertion presentation\n The method by which an assertion is transmitted to the RP.\n asymmetric keys\n Two related keys, comprised of a public key and a private key, that are used to perform complementary operations such as encryption and decryption or signature verification and generation.\n attestation\n Information conveyed to the CSP, generally at the time that an authenticator is bound, describing the characteristics of a connected authenticator or the endpoint involved in an authentication operation.\n attribute\n A quality or characteristic ascribed to someone or something. An identity attribute is an attribute about the identity of a subscriber.\n attribute bundle\n A package of attribute values and derived attribute values from a CSP. The package has necessary cryptographic protection to allow validation of the bundle independent from interaction with the CSP or IdP. 
Attribute bundles are often used with subscriber-controlled wallets.\n attribute provider\n The provider of an identity API that provides access to a subscriber’s attributes without necessarily asserting that the subscriber is present to the RP.\n attribute validation\n The process or act of confirming that a set of attributes are accurate and associated with a real-life identity. See validation.\n attribute value\n A complete statement that asserts an identity attribute of a subscriber, independent of format. For example, for the attribute “birthday,” a value could be “12/1/1980” or “December 1, 1980.”\n audience restriction\n The restriction of a message to a specific target audience to prevent a receiver from unknowingly processing a message intended for another recipient. In federation protocols, assertions are audience restricted to specific RPs to prevent an RP from accepting an assertion generated for a different RP.\n authenticate\n See authentication.\n authenticated protected channel\n An encrypted communication channel that uses approved cryptography where the connection initiator (client) has authenticated the recipient (server). Authenticated protected channels are encrypted to provide confidentiality and protection against active intermediaries and are frequently used in the user authentication process. Transport Layer Security (TLS) and Datagram Transport Layer Security (DTLS) [RFC9325] are examples of authenticated protected channels in which the certificate presented by the recipient is verified by the initiator. Unless otherwise specified, authenticated protected channels do not require the server to authenticate the client. 
Authentication of the server is often accomplished through a certificate chain that leads to a trusted root rather than individually with each server.\n authenticated session\n See protected session.\n authentication\n The process by which a claimant proves possession and control of one or more authenticators bound to a subscriber account to demonstrate that they are the subscriber associated with that account.\n Authentication Assurance Level (AAL)\n A category that describes the strength of the authentication process.\n authentication factor\n The three types of authentication factors are something you know, something you have, and something you are. Every authenticator has one or more authentication factors.\n authentication intent\n The process of confirming the claimant’s intent to authenticate or reauthenticate by requiring user intervention in the authentication flow. Some authenticators (e.g., OTPs) establish authentication intent as part of their operation. Others require a specific step, such as pressing a button, to establish intent. Authentication intent is a countermeasure against use by malware at the endpoint as a proxy for authenticating an attacker without the subscriber’s knowledge.\n authentication protocol\n A defined sequence of messages between a claimant and a verifier that demonstrates that the claimant has possession and control of one or more valid authenticators to establish their identity, and, optionally, demonstrates that the claimant is communicating with the intended verifier.\n authentication secret\n A generic term for any secret value that an attacker could use to impersonate the subscriber in an authentication protocol.\n\n These are further divided into short-term authentication secrets, which are only useful to an attacker for a limited period of time, and long-term authentication secrets, which allow an attacker to impersonate the subscriber until they are manually reset. 
The authenticator secret is the canonical example of a long-term authentication secret, while the authenticator output — if it is different from the authenticator secret — is usually a short-term authentication secret.\n \n authenticator\n Something that the subscriber possesses and controls (e.g., a cryptographic module or password) and that is used to authenticate a claimant’s identity. See authenticator type and multi-factor authenticator.\n authenticator binding\n The establishment of an association between a specific authenticator and a subscriber account that allows the authenticator to be used to authenticate for that subscriber account, possibly in conjunction with other authenticators.\n authenticator output\n The output value generated by an authenticator. The ability to generate valid authenticator outputs on demand proves that the claimant possesses and controls the authenticator. Protocol messages sent to the verifier depend on the authenticator output, but they may or may not explicitly contain it.\n authenticator secret\n The secret value contained within an authenticator.\n authenticator type\n A category of authenticators with common characteristics, such as the types of authentication factors they provide and the mechanisms by which they operate.\n authenticity\n The property that data originated from its purported source.\n authoritative source\n An entity that has access to or verified copies of accurate information from an issuing source such that a CSP has high confidence that the source can confirm the validity of the identity attributes or evidence supplied by an applicant during identity proofing. An issuing source may also be an authoritative source. 
Often, authoritative sources are determined by a policy decision of the agency or CSP before they can be used in the identity proofing validation phase.\n authorize\n A decision to grant access, typically automated by evaluating a subject’s attributes.\n authorized party\n In federation, the organization, person, or entity that is responsible for making decisions regarding the release of information within the federation transaction, most notably subscriber attributes. This is often the subscriber (when runtime decisions are used) or the party operating the IdP (when allowlists are used).\n back-channel communication\n Communication between two systems that relies on a direct connection without using redirects through an intermediary such as a browser.\n bearer assertion\n An assertion that can be presented on its own as proof of the identity of the presenter.\n biometric reference\n One or more stored biometric samples, templates, or models attributed to an individual and used as the object of biometric comparison in a database, such as a facial image stored digitally on a passport, a fingerprint minutiae template on a National ID card, or a Gaussian Mixture Model for speaker recognition.\n biometric sample\n An analog or digital representation of biometric characteristics prior to biometric feature extraction, such as a record that contains a fingerprint image.\n biometrics\n Automated recognition of individuals based on their biological or behavioral characteristics. Biological characteristics include but are not limited to fingerprints, palm prints, facial features, iris and retina patterns, voiceprints, and vein patterns. Behavioral characteristics include but are not limited to keystrokes, angle of holding a smart phone, screen pressure, typing speed, mouse or mobile phone movements, and gyroscope position.\n blocklist\n A documented list of specific elements that are blocked, per policy decision.
This concept has historically been known as a blacklist.\n challenge-response protocol\n An authentication protocol in which the verifier sends the claimant a challenge (e.g., a random value or nonce) that the claimant combines with a secret (e.g., by hashing the challenge and a shared secret together or by applying a private-key operation to the challenge) to generate a response that is sent to the verifier. The verifier can independently verify the response generated by the claimant (e.g., by re-computing the hash of the challenge and the shared secret and comparing to the response or performing a public-key operation on the response) and establish that the claimant possesses and controls the secret.\n claimant\n A subject whose identity is to be verified using one or more authentication protocols.\n claimed address\n The physical location asserted by a subject where they can be reached. It includes the individual’s residential street address and may also include their mailing address.\n claimed identity\n An applicant’s declaration of unvalidated and unverified personal attributes.\n compensating controls\n Alternative controls to the normative controls for the assessed and selected xALs of an organization based on that organization’s mission, risk tolerance, business processes, and risk assessments and considerations for the privacy, usability, and equity of the populations served by the online service.\n controls\n Policies, procedures, guidelines, practices, or organizational structures that manage security, privacy, and other risks.\nSee supplemental controls and compensating controls\n core attributes\n The set of identity attributes that the CSP has determined and documented to be required for identity proofing.\n credential\n An object or data structure that authoritatively binds an identity — via an identifier — and (optionally) additional attributes, to at least one authenticator possessed and controlled by a subscriber.\n\n A credential is issued, 
stored, and maintained by the CSP. Copies of information from the credential can be possessed by the subscriber, typically in the form of one or more digital certificates that are often contained in an authenticator along with their associated private keys.\n \n credential service provider (CSP)\n A trusted entity whose functions include identity proofing applicants to the identity service and registering authenticators to subscriber accounts. A CSP may be an independent third party.\n credible source\n An entity that can provide or validate the accuracy of identity evidence and attribute information. A credible source has access to attribute information that was validated through an identity proofing process or that can be traced to an authoritative source, or it maintains identity attribute information obtained from multiple sources that is checked for data correlation for accuracy, consistency, and currency.\n cross-site request forgery (CSRF)\n An attack in which a subscriber who is currently authenticated to an RP and connected through a secure session browses an attacker’s website, causing the subscriber to unknowingly invoke unwanted actions at the RP.\n\n For example, if a bank website is vulnerable to a CSRF attack, it may be possible for a subscriber to unintentionally authorize a large money transfer by clicking on a malicious link in an email while a connection to the bank is open in another browser window.\n \n cross-site scripting (XSS)\n A vulnerability that allows attackers to inject malicious code into an otherwise benign website. These scripts acquire the permissions of scripts generated by the target website to compromise the confidentiality and integrity of data transfers between the website and clients. 
Websites are vulnerable if they display user-supplied data from requests or forms without sanitizing the data so that it is not executable.\n cryptographic authenticator\n An authenticator that proves possession of an authentication secret through direct communication with a verifier through a cryptographic authentication protocol.\n cryptographic key\n A value used to control cryptographic operations, such as decryption, encryption, signature generation, or signature verification. For the purposes of these guidelines, key requirements shall meet the minimum requirements stated in Table 2 of [SP800-57Part1]. See asymmetric keys or symmetric keys.\n cryptographic module\n A set of hardware, software, or firmware that implements approved security functions including cryptographic algorithms and key generation.\n data integrity\n The property that data has not been altered by an unauthorized entity.\n derived attribute value\n A statement that asserts a limited identity attribute of a subscriber without containing the attribute value from which it is derived, independent of format. For example, instead of requesting the attribute “birthday,” a derived value could be “older than 18”. Instead of requesting the attribute for “physical address,” a derived value could be “currently residing in this district.” Previous versions of these guidelines referred to this construct as an “attribute reference.”\n digital authentication\n The process of establishing confidence in user identities that are digitally presented to a system. In previous editions of SP 800-63, this was referred to as electronic authentication.\n digital identity\n An attribute or set of attributes that uniquely describes a subject within a given context.\n Digital Identity Acceptance Statement (DIAS)\n Documents the results of the digital identity risk management process. 
This includes the impact assessment, initial assurance level selection, and tailoring process.\n digital signature\n An asymmetric key operation in which the private key is used to digitally sign data and the public key is used to verify the signature. Digital signatures provide authenticity protection, integrity protection, and non-repudiation support but not confidentiality or replay attack protection.\n digital transaction\n A discrete digital event between a user and a system that supports a business or programmatic purpose.\n disassociability\n Enabling the processing of PII or events without association to individuals or devices beyond the operational requirements of the system. [NISTIR8062]\n electronic authentication (e-authentication)\n See digital authentication.\n endpoint\n Any device that is used to access a digital identity on a network, such as laptops, desktops, mobile phones, tablets, servers, Internet of Things devices, and virtual environments.\n enrollment\n The process through which a CSP/IdP provides a successfully identity-proofed applicant with a subscriber account and binds authenticators to grant persistent access.\n entropy\n The amount of uncertainty that an attacker faces to determine the value of a secret. Entropy is usually stated in bits. A value with n bits of entropy has the same degree of uncertainty as a uniformly distributed n-bit random value.\n equity\n The consistent and systematic fair, just, and impartial treatment of all individuals, including individuals who belong to underserved communities that have been denied such treatment, such as Black, Latino, and Indigenous and Native American persons, Asian Americans and Pacific Islanders, and other persons of color; members of religious minorities; lesbian, gay, bisexual, transgender, and queer (LGBTQ+) persons; persons with disabilities; persons who live in rural areas; and persons otherwise adversely affected by persistent poverty or inequality. 
[EO13985]\n factor\n See authentication factor\n Federal Information Processing Standard (FIPS)\n Under the Information Technology Management Reform Act (Public Law 104-106), the Secretary of Commerce approves the standards and guidelines that the National Institute of Standards and Technology (NIST) develops for federal computer systems. NIST issues these standards and guidelines as Federal Information Processing Standards (FIPS) for government-wide use. NIST develops FIPS when there are compelling federal government requirements, such as for security and interoperability, and there are no acceptable industry standards or solutions. See background information for more details.\n\n FIPS documents are available online on the FIPS home page: https://www.nist.gov/itl/fips.cfm\n \n federated identifier\n The combination of a subject identifier within an assertion and an identifier for the IdP that issued that assertion. When combined, these pieces of information uniquely identify the subscriber in the context of a federation transaction.\n federation\n A process that allows for the conveyance of identity and authentication information across a set of networked systems.\n Federation Assurance Level (FAL)\n A category that describes the process used in a federation transaction to communicate authentication events and subscriber attributes to an RP.\n federation protocol\n A technical protocol that is used in a federation transaction between networked systems.\n federation proxy\n A component that acts as a logical RP to a set of IdPs and a logical IdP to a set of RPs, bridging the two systems with a single component. 
These are sometimes referred to as “brokers.”\n federation transaction\n A specific instance of processing an authentication using a federation process for a specific subscriber by conveying an assertion from an IdP to an RP.\n front-channel communication\n Communication between two systems that relies on passing messages through an intermediary, such as using redirects through the subscriber’s browser.\n hash function\n A function that maps a bit string of arbitrary length to a fixed-length bit string. Approved hash functions satisfy the following properties:\n\n \n \n One-way — It is computationally infeasible to find any input that maps to any pre-specified output.\n \n \n Collision-resistant — It is computationally infeasible to find any two distinct inputs that map to the same output.\n \n \n \n identifier\n A data object that is associated with a single, unique entity (e.g., individual, device, or session) within a given context and is never assigned to any other entity within that context.\n identity\n See digital identity\n identity API\n A protected API accessed by an RP to access the attributes of a specific subscriber.\n Identity Assurance Level (IAL)\n A category that conveys the degree of confidence that the subject’s claimed identity is their real identity.\n identity evidence\n Information or documentation that supports the real-world existence of the claimed identity. Identity evidence may be physical (e.g., a driver’s license) or digital (e.g., a mobile driver’s license or digital assertion). 
Evidence must support both validation (i.e., confirming authenticity and accuracy) and verification (i.e., confirming that the applicant is the true owner of the evidence).\n identity proofing\n The processes used to collect, validate, and verify information about a subject to establish assurance in the subject’s claimed identity.\n identity provider (IdP)\n The party in a federation transaction that creates an assertion for the subscriber and transmits the assertion to the RP.\n identity resolution\n The process of collecting information about an applicant to uniquely distinguish an individual within the context of the population that the CSP serves.\n identity verification\n See verification\n injection attack\n An attack in which an attacker supplies untrusted input to a program. In the context of federation, the attacker presents an untrusted assertion or assertion reference to the RP in order to create an authenticated session with the RP.\n issuing source\n An authority responsible for the generation of data, digital evidence (i.e., assertions), or physical documents that can be used as identity evidence.\n knowledge-based verification (KBV)\n A process of validating knowledge of personal or private information associated with an individual for the purpose of verifying the claimed identity of an applicant. KBV does not include collecting personal attributes for the purposes of identity resolution.\n legal person\n An individual, organization, or company with legal rights.\n login\n Establishment of an authenticated session between a person and a system. Also known as “sign in”, “log on”, and “sign on.”\n manageability\n Providing the capability for the granular administration of personally identifiable information, including alteration, deletion, and selective disclosure. 
[NISTIR8062]\n memorized secret\n See password.\n message authentication code (MAC)\n A cryptographic checksum on data that uses a symmetric key to detect both accidental and intentional modifications of the data. MACs provide authenticity and integrity protection, but not non-repudiation protection.\n mobile code\n Executable code that is normally transferred from its source to another computer system for execution. This transfer is often through the network (e.g., JavaScript embedded in a web page) but may transfer through physical media as well.\n multi-factor authentication (MFA)\n An authentication system that requires more than one distinct type of authentication factor for successful authentication. MFA can be performed using a multi-factor authenticator or by combining single-factor authenticators that provide different types of factors.\n multi-factor authenticator\n An authenticator that provides more than one distinct authentication factor, such as a cryptographic authentication device with an integrated biometric sensor that is required to activate the device.\n natural person\n A real-life human being, not synthetic or artificial.\n network\n An open communications medium, typically the Internet, used to transport messages between the claimant and other parties. Unless otherwise stated, no assumptions are made about the network’s security; it is assumed to be open and subject to active (e.g., impersonation, session hijacking) and passive (e.g., eavesdropping) attacks at any point between the parties (e.g., claimant, verifier, CSP, RP).\n nonce\n A value used in security protocols that is never repeated with the same key. For example, nonces used as challenges in challenge-response authentication protocols must not be repeated until authentication keys are changed. Otherwise, there is a possibility of a replay attack. 
Using a nonce as a challenge is a different requirement than a random challenge, because a nonce is not necessarily unpredictable.\n non-repudiation\n The capability to protect against an individual falsely denying having performed a particular transaction.\n offline attack\n An attack in which the attacker obtains some data (typically by eavesdropping on an authentication transaction or by penetrating a system and stealing security files) that the attacker is able to analyze in a system of their own choosing.\n one-to-one (1:1) comparison\n The process in which a biometric sample from an individual is compared to a biometric reference to produce a comparison score.\n online attack\n An attack against an authentication protocol in which the attacker either assumes the role of a claimant with a genuine verifier or actively alters the authentication channel.\n online guessing attack\n An attack in which an attacker performs repeated logon trials by guessing possible values of the authenticator output.\n online service\n A service that is accessed remotely via a network, typically the internet.\n pairwise pseudonymous identifier\n A pseudonymous identifier generated by an IdP for use at a specific RP.\n passphrase\n A password that consists of a sequence of words or other text that a claimant uses to authenticate their identity. A passphrase is similar to a password in usage but is generally longer for added security.\n password\n A type of authenticator consisting of a character string that is intended to be memorized or memorable by the subscriber to permit the claimant to demonstrate something they know as part of an authentication process. 
Passwords are referred to as memorized secrets in the initial release of SP 800-63B.\n personal identification number (PIN)\n A password that typically consists of only decimal digits.\n personal information\n See personally identifiable information.\n personally identifiable information (PII)\n Information that can be used to distinguish or trace an individual’s identity, either alone or when combined with other information that is linked or linkable to a specific individual. [A-130]\n personally identifiable information processing\n An operation or set of operations performed upon personally identifiable information that can include the collection, retention, logging, generation, transformation, use, disclosure, transfer, or disposal of personally identifiable information.\n pharming\n An attack in which an attacker corrupts an infrastructure service such as the Domain Name System (DNS) and causes the subscriber to be misdirected to a forged verifier/RP, which could cause the subscriber to reveal sensitive information, download harmful software, or contribute to a fraudulent act.\n phishing\n An attack in which the subscriber is lured (usually through an email) to interact with a counterfeit verifier/RP and tricked into revealing information that can be used to masquerade as that subscriber to the real verifier/RP.\n phishing resistance\n The ability of the authentication protocol to prevent the disclosure of authentication secrets and valid authenticator outputs to an impostor verifier without reliance on the vigilance of the claimant.\n physical authenticator\n An authenticator that the claimant proves possession of as part of an authentication process.\n possession and control of an authenticator\n The ability to activate and use the authenticator in an authentication protocol.\n practice statement\n A formal statement of the practices followed by the parties to an authentication process (e.g., CSP or verifier).
It usually describes the parties’ policies and practices and can become legally binding.\n predictability\n Enabling reliable assumptions by individuals, owners, and operators about PII and its processing by an information system. [NISTIR8062]\n private key\n In asymmetric key cryptography, the private key (i.e., a secret key) is a mathematical key used to create digital signatures and, depending on the algorithm, decrypt messages or files that are encrypted with the corresponding public key. In symmetric key cryptography, the same private key is used for both encryption and decryption.\n processing\n Operation or set of operations performed upon PII that can include, but is not limited to, the collection, retention, logging, generation, transformation, use, disclosure, transfer, and disposal of PII. [NISTIR8062]\n presentation attack\n Presentation to the biometric data capture subsystem with the goal of interfering with the operation of the biometric system.\n presentation attack detection (PAD)\n Automated determination of a presentation attack. A subset of presentation attack determination methods, referred to as liveness detection, involves the measurement and analysis of anatomical characteristics or voluntary or involuntary reactions, to determine if a biometric sample is being captured from a living subject that is present at the point of capture.\n process assistant\n An individual who provides support for the proofing process but does not support decision-making or risk-based evaluation (e.g., translation, transcription, or accessibility support).\n proofing agent\n An agent of the CSP who is trained to attend identity proofing sessions and can make limited risk-based decisions – such as physically inspecting identity evidence and making physical comparisons of the applicant to identity evidence.\n Privacy Impact Assessment (PIA)\n A method of analyzing how personally identifiable information (PII) is collected, used, shared, and maintained. 
PIAs are used to identify and mitigate privacy risks throughout the development lifecycle of a program or system. They also help ensure that handling information conforms to legal, regulatory, and policy requirements regarding privacy.\n protected session\n A session in which messages between two participants are encrypted and integrity is protected using a set of shared secrets called “session keys.”\n\n A protected session is said to be authenticated if — during the session — one participant proves possession of one or more authenticators in addition to the session keys, and if the other party can verify the identity associated with the authenticators. If both participants are authenticated, the protected session is said to be mutually authenticated.\n \n Provisioning API\n A protected API that allows an RP to access identity attributes for multiple subscribers for the purposes of provisioning and managing RP subscriber accounts.\n pseudonym\n A name other than a legal name.\n pseudonymity\n The use of a pseudonym to identify a subject.\n pseudonymous identifier\n A meaningless but unique identifier that does not allow the RP to infer anything regarding the subscriber but that does permit the RP to associate multiple interactions with a single subscriber.\n public key\n The public part of an asymmetric key pair that is used to verify signatures or encrypt data.\n public key certificate\n A digital document issued and digitally signed by the private key of a certificate authority that binds an identifier to a subscriber’s public key. The certificate indicates that the subscriber identified in the certificate has sole control of and access to the private key. 
See also [RFC5280].\n public key infrastructure (PKI)\n A set of policies, processes, server platforms, software, and workstations used to administer certificates and public-private key pairs, including the ability to issue, maintain, and revoke public key certificates.\n reauthentication\n The process of confirming the subscriber’s continued presence and intent to be authenticated during an extended usage session.\n registration\n See enrollment.\n relying party (RP)\n An entity that relies upon a verifier’s assertion of a subscriber’s identity, typically to process a transaction or grant access to information or a system.\n remote\n A process or transaction that is conducted through connected devices over a network, rather than in person.\n replay attack\n An attack in which the attacker is able to replay previously captured messages (between a legitimate claimant and a verifier) to masquerade as that claimant to the verifier or vice versa.\n replay resistance\n The property of an authentication process to resist replay attacks, typically by the use of an authenticator output that is valid only for a specific authentication.\n resolution\n See identity resolution.\n restricted\n An authenticator type, class, or instantiation that has additional risk of false acceptance associated with its use and is therefore subject to additional requirements.\n risk assessment\n The process of identifying, estimating, and prioritizing risks to organizational operations (i.e., mission, functions, image, or reputation), organizational assets, individuals, and other organizations that result from the operation of a system. A risk assessment is part of risk management, incorporates threat and vulnerability analyses, and considers mitigations provided by security controls that are planned or in-place.
It is synonymous with “risk analysis.”\n risk management\n The program and supporting processes that manage information security risk to organizational operations (including mission, functions, image, reputation), organizational assets, individuals, and other organizations and includes (i) establishing the context for risk-related activities, (ii) assessing risk, (iii) responding to risk once determined, and (iv) monitoring risk over time.\n RP subscriber account\n An account established and managed by the RP in a federated system based on the RP’s view of the subscriber account from the IdP. An RP subscriber account is associated with one or more federated identifiers and allows the subscriber to access the account through a federation transaction with the IdP.\n salt\n A non-secret value used in a cryptographic process, usually to ensure that the results of computations for one instance cannot be reused by an attacker.\n Secure Sockets Layer (SSL)\n See Transport Layer Security (TLS).\n security domain\n A set of systems under a common administrative and access control.\n Senior Agency Official for Privacy (SAOP)\n Person responsible for ensuring that an agency complies with privacy requirements and manages privacy risks. The SAOP is also responsible for ensuring that the agency considers the privacy impacts of all agency actions and policies that involve PII.\n session\n A persistent interaction between a subscriber and an endpoint, either an RP or a CSP. A session begins with an authentication event and ends with a session termination event. A session is bound by the use of a session secret that the subscriber’s software (e.g., a browser, application, or OS) can present to the RP to prove association of the session with the authentication event.\n session hijack attack\n An attack in which the attacker is able to insert themselves between a claimant and a verifier subsequent to a successful authentication exchange between the latter two parties. 
The attacker is able to pose as a subscriber to the verifier or vice versa to control session data exchange. Sessions between the claimant and the RP can be similarly compromised.\n shared secret\n A secret used in authentication that is known to the subscriber and the verifier.\n side-channel attack\n An attack enabled by the leakage of information from a physical cryptosystem. Characteristics that could be exploited in a side-channel attack include timing, power consumption, and electromagnetic and acoustic emissions.\n single-factor\n A characteristic of an authentication system or an authenticator that requires only one authentication factor (i.e., something you know, something you have, or something you are) for successful authentication.\n single sign-on (SSO)\n An authentication process by which one account and its authenticators are used to access multiple applications in a seamless manner, generally implemented with a federation protocol.\n social engineering\n The act of deceiving an individual into revealing sensitive information, obtaining unauthorized access, or committing fraud by associating with the individual to gain confidence and trust.\n subject\n A person, organization, device, hardware, network, software, or service. In these guidelines, a subject is a natural person.\n subscriber\n An individual enrolled in the CSP identity service.\n subscriber account\n An account established by the CSP containing information and authenticators registered for each subscriber enrolled in the CSP identity service.\n supplemental controls\n Controls that may be added, in addition to those specified in the organization’s tailored assurance level, in order to address specific threats or attacks.\n symmetric key\n A cryptographic key used to perform both the cryptographic operation and its inverse
(e.g., to encrypt and decrypt or create a message authentication code and to verify the code).\n sync fabric\n Any on-premises, cloud-based, or hybrid service used to store, transmit, or manage authentication keys generated by syncable authenticators that are not local to the user’s device.\n syncable authenticators\n Software or hardware cryptographic authenticators that allow authentication keys to be cloned and exported to other storage to sync those keys to other authenticators (i.e., devices).\n synthetic identity fraud\n The use of a combination of personally identifiable information (PII) to fabricate a person or entity in order to commit a dishonest act for personal or financial gain.\n system of record (SOR)\n An SOR is a collection of records that contain information about individuals and are under the control of an agency. The records can be retrieved by the individual’s name or by an identifying number, symbol, or other identifier.\n System of Record Notice (SORN)\n A notice that federal agencies publish in the Federal Register to describe their systems of records.\n tailoring\n The process by which xALs and specified controls are modified by: considerations for the impacts on privacy, usability, and equity on the user population, identifying and designating common controls, applying scoping considerations on the applicability and implementation of specified controls, selecting any compensating controls, assigning specific values to organization-defined security control parameters, supplementing xAL controls with additional controls or control enhancements, and providing additional specification information for control implementation.\n token\n See authenticator.\n transaction\n See digital transaction.\n Transport Layer Security (TLS)\n An authentication and security protocol widely implemented in browsers and web servers. TLS is defined by [RFC5246]. TLS is similar to the older SSL protocol, and TLS 1.0 is effectively SSL version 3.1.
SP 800-52, Guidelines for the Selection and Use of Transport Layer Security (TLS) Implementations [SP800-52], specifies how TLS is to be used in government applications.\n trust agreement\n A set of conditions under which a CSP, IdP, and RP are allowed to participate in a federation transaction for the purposes of establishing an authentication session between the subscriber and the RP.\n trust anchor\n A public or symmetric key that is trusted because it is built directly into hardware or software or securely provisioned via out-of-band means rather than because it is vouched for by another trusted entity (e.g., in a public key certificate). A trust anchor may have name or policy constraints that limit its scope.\n trusted referee\n An agent of the CSP who is trained to make risk-based decisions regarding an applicant’s identity proofing case when that applicant is unable to meet the expected requirements of a defined IAL proofing process.\n usability\n The extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use. [ISO/IEC9241-11]\n validation\n The process or act of checking and confirming that the evidence and attributes supplied by an applicant are authentic, accurate and associated with a real-life identity. Specifically, evidence validation is the process or act of checking that the presented evidence is authentic, current, and issued from an acceptable source. See also attribute validation.\n verification\n The process or act of confirming that the applicant undergoing identity proofing holds the claimed real-life identity represented by the validated identity attributes and associated evidence. Synonymous with “identity verification.”\n verifier\n An entity that verifies the claimant’s identity by verifying the claimant’s possession and control of one or more authenticators using an authentication protocol. 
To do this, the verifier needs to confirm the binding of the authenticators with the subscriber account and check that the subscriber account is active.\n verifier impersonation\n See phishing.\n zeroize\n Overwrite a memory location with data that consists entirely of bits with the value zero so that the data is destroyed and unrecoverable. This is often contrasted with deletion methods that merely destroy references to data within a file system rather than the data itself.\n zero-knowledge password protocol\n A password-based authentication protocol that allows a claimant to authenticate to a verifier without revealing the password to the verifier. Examples of such protocols are EKE, SPEKE and SRP.\n\n"
} ,
{
"title" : "Note to Reviewers",
"category" : "",
"tags" : "",
"url" : "/800-63-4/sp800-63/reviewers/",
"date" : "2024-08-28 20:39:12 -0500",
"content" : "Note to Reviewers\n\nIn December 2022, NIST released the Initial Public Draft (IPD) of SP 800-63, Revision 4. Over the course of a 119-day public comment period, the authors received exceptional feedback from a broad community of interested entities and individuals. The input from nearly 4,000 specific comments has helped advance the improvement of these Digital Identity Guidelines in a manner that supports NIST’s critical goals of providing foundational risk management processes and requirements that enable the implementation of secure, private, equitable, and accessible identity systems. Based on this initial wave of feedback, several substantive changes have been made across all of the volumes. These changes include but are not limited to the following:\n\n\n Updated text and context setting for risk management. Specifically, the authors have modified the process defined in the IPD to include a context-setting step of defining and understanding the online service that the organization is offering and intending to potentially protect with identity systems.\n Added recommended continuous evaluation metrics. The continuous improvement section introduced by the IPD has been expanded to include a set of recommended metrics for holistically evaluating identity solution performance. These are recommended due to the complexities of data streams and variances in solution deployments.\n Expanded fraud requirements and recommendations. Programmatic fraud management requirements for credential service providers and relying parties now address issues and challenges that may result from the implementation of fraud checks.\n Restructured the identity proofing controls. There is a new taxonomy and structure for the requirements at each assurance level based on the means of providing the proofing: Remote Unattended, Remote Attended (e.g., video session), Onsite Unattended (e.g., kiosk), and Onsite Attended (e.g., in-person).\n Integrated syncable authenticators. 
In April 2024, NIST published interim guidance for syncable authenticators. This guidance has been integrated into SP 800-63B as normative text and is provided for public feedback as part of the Revision 4 volume set.\n Added user-controlled wallets to the federation model. Digital wallets and credentials (called “attribute bundles” in SP 800-63C) are seeing increased attention and adoption. At their core, they function like a federated IdP, generating signed assertions about a subject. Specific requirements for this presentation and the emerging context are presented in SP 800-63C-4.\n\n\nThe rapid proliferation of online services over the past few years has heightened the need for reliable, equitable, secure, and privacy-protective digital identity solutions.\nRevision 4 of NIST Special Publication SP 800-63, Digital Identity Guidelines, intends to respond to the changing digital landscape that has emerged since the last major revision of this suite was published in 2017, including the real-world implications of online risks. 
The guidelines present the process and technical requirements for meeting digital identity management assurance levels for identity proofing, authentication, and federation, including requirements for security and privacy as well as considerations for fostering equity and the usability of digital identity solutions and technology.\n\nBased on the feedback provided in response to the June 2020 Pre-Draft Call for Comments, research into real-world implementations of the guidelines, market innovation, and the current threat environment, this draft seeks to:\n\n\n Address comments received in response to the IPD of Revision 4 of SP 800-63\n Clarify the text to address the questions and issues raised in the public comments\n Update all four volumes of SP 800-63 based on current technology and market developments, the changing digital identity threat landscape, and organizational needs for digital identity solutions to address online security, privacy, usability, and equity\n\n\nNIST is specifically interested in comments and recommendations on the following topics:\n\n\n \n Risk Management and Identity Models\n\n \n Is the “user controlled” wallet model sufficiently described to allow entities to understand its alignment to real-world implementations of wallet-based solutions such as mobile driver’s licenses and verifiable credentials?\n Is the updated risk management process sufficiently well-defined to support an effective, repeatable, real-world process for organizations seeking to implement digital identity system solutions to protect online services and systems?\n \n \n \n Identity Proofing and Enrollment\n\n \n Is the updated structure of the requirements around defined types of proofing sufficiently clear? 
Are the types sufficiently described?\n Are there additional fraud program requirements that need to be introduced as a common baseline for CSPs and other organizations?\n Are the fraud requirements sufficiently described to allow for appropriate balancing of fraud, privacy, and usability trade-offs?\n Are the added identity evidence validation and authenticity requirements and performance metrics realistic and achievable with existing technology capabilities?\n \n \n \n Authentication and Authenticator Management\n\n \n Are the syncable authenticator requirements sufficiently defined to allow for reasonable risk-based acceptance of syncable authenticators for public and enterprise-facing uses?\n Are there additional recommended controls that should be applied? Are there specific implementation recommendations or considerations that should be captured?\n Are wallet-based authentication mechanisms and “attribute bundles” sufficiently described as authenticators? Are there additional requirements that need to be added or clarified?\n \n \n \n Federation and Assertions\n\n \n Is the concept of user-controlled wallets and attribute bundles sufficiently and clearly described to support real-world implementations? Are there additional requirements or considerations that should be added to improve the security, usability, and privacy of these technologies?\n \n \n \n General\n\n \n What specific implementation guidance, reference architectures, metrics, or other supporting resources could enable more rapid adoption and implementation of this and future iterations of the Digital Identity Guidelines?\n What applied research and measurement efforts would provide the greatest impacts on the identity market and advancement of these guidelines?\n \n \n\n\nReviewers are encouraged to comment and suggest changes to the text of all four draft volumes of the SP 800-63-4 suite. NIST requests that all comments be submitted by 11:59pm Eastern Time on October 7th, 2024. 
Please submit your comments to [email protected]. NIST will review all comments and make them available on the NIST Identity and Access Management website. Commenters are encouraged to use the comment template provided on the NIST Computer Security Resource Center website for responses to these notes to reviewers and for specific comments on the text of the four-volume suite.\n"
} ,
{
"title" : "Preface",
"category" : "",
"tags" : "",
"url" : "/800-63-4/sp800-63/preface/",
"date" : "2024-08-28 20:39:12 -0500",
"content" : "Preface\n\nThis publication and its companion volumes, [SP800-63A], [SP800-63B], and [SP800-63C], provide technical guidelines to organizations for the implementation of digital identity services.\n"
} ,
{
"title" : "References",
"category" : "",
"tags" : "",
"url" : "/800-63-4/sp800-63/references/",
"date" : "2024-08-28 20:39:12 -0500",
"content" : "References\n\nThis section is informative.\n\n[A-130] Office of Management and Budget (2016) Managing Information as a Strategic Resource. (The White House, Washington, DC), OMB Circular A-130, July 28, 2016. Available at https://obamawhitehouse.archives.gov/sites/default/files/omb/assets/OMB/circulars/a130/a130revised.pdf\n\n[EO13985] Biden J (2021) Advancing Racial Equity and Support for Underserved Communities Through the Federal Government. (The White House, Washington, DC), Executive Order 13985, January 25, 2021. https://www.federalregister.gov/documents/2021/01/25/2021-01753/advancing-racial-equity-and-support-for-underserved-communities-through-the-federal-government\n\n[EO13985-vision] Office of Management and Budget (2022) A Vision for Equitable Data: Recommendations from the Equitable Data Working Group. (The White House, Washington, DC), OMB Report Pursuant to Executive Order 13985, April 22, 2022. https://www.whitehouse.gov/wp-content/uploads/2022/04/eo13985-vision-for-equitable-data.pdf\n\n[EO14012] Biden J (2021) Restoring Faith in Our Legal Immigration Systems and Strengthening Integration and Inclusion Efforts for New Americans. (The White House, Washington, DC), Executive Order 14012, February 02, 2021. https://www.whitehouse.gov/briefing-room/presidential-actions/2021/02/02/executive-order-restoring-faith-in-our-legal-immigration-systems-and-strengthening-integration-and-inclusion-efforts-for-new-americans/\n\n[EO14058] Biden J (2021) Transforming Federal Customer Experience and Service Delivery to Rebuild Trust in Government. (The White House, Washington, DC), Executive Order 14058, December 13, 2021. https://www.federalregister.gov/documents/2021/12/16/2021-27380/transforming-federal-customer-experience-and-service-delivery-to-rebuild-trust-in-government\n\n[EO14091] Biden J (2023) Further Advancing Racial Equity and Support for Underserved Communities Through the Federal Government. 
(The White House, Washington, DC), Executive Order 14091, February 16, 2023. https://www.whitehouse.gov/briefing-room/presidential-actions/2023/02/16/executive-order-on-further-advancing-racial-equity-and-support-for-underserved-communities-through-the-federal-government/\n\n[FIPS199] National Institute of Standards and Technology (2004) Standards for Security Categorization of Federal Information and Information Systems. (U.S. Department of Commerce, Washington, DC), Federal Information Processing Standards Publication (FIPS) 199. https://doi.org/10.6028/NIST.FIPS.199\n\n\\clearpage\n\n\n[FIPS201] National Institute of Standards and Technology (2022) Personal Identity Verification (PIV) of Federal Employees and Contractors. (U.S. Department of Commerce, Washington, DC), Federal Information Processing Standards Publication (FIPS) 201-3. https://doi.org/10.6028/NIST.FIPS.201-3\n\n[FISMA] Federal Information Security Modernization Act of 2014, Pub. L. 113-283, 128 Stat. 3073. https://www.govinfo.gov/app/details/PLAW-113publ283\n\n[ISO/IEC9241-11] International Standards Organization (2018) ISO/IEC 9241-11 Ergonomics of human-system interaction – Part 11: Usability: Definitions and concepts (ISO, Geneva, Switzerland). Available at https://www.iso.org/standard/63500.html\n\n[M-03-22] Office of Management and Budget (2003) OMB Guidance for Implementing the Privacy Provisions of the E-Government Act of 2002. (The White House, Washington, DC), OMB Memorandum M-03-22, September 26, 2003. Available at https://georgewbush-whitehouse.archives.gov/omb/memoranda/m03-22.html\n\n[M-19-17] Office of Management and Budget (2019) Enabling Mission Delivery through Improved Identity, Credential, and Access Management. (The White House, Washington, DC), OMB Memorandum M-19-17, May 21, 2019. Available at https://www.whitehouse.gov/wp-content/uploads/2019/05/M-19-17.pdf\n\n[NISTAIRMF] Tabassi E (2023) Artificial Intelligence Risk Management Framework (AI RMF 1.0). 
(National Institute of Standards and Technology (U.S.), Gaithersburg, MD), NIST AI 100-1. https://doi.org/10.6028/NIST.AI.100-1\n\n[NISTIR8062] Brooks SW, Garcia ME, Lefkovitz NB, Lightman S, Nadeau EM (2017) An Introduction to Privacy Engineering and Risk Management in Federal Systems. (National Institute of Standards and Technology, Gaithersburg, MD), NIST Interagency or Internal Report (IR) 8062. https://doi.org/10.6028/NIST.IR.8062\n\n[NISTRMF] Joint Task Force (2018) Risk Management Framework for Information Systems and Organizations: A System Life Cycle Approach for Security and Privacy. (National Institute of Standards and Technology, Gaithersburg, MD), NIST Special Publication (SP) 800-37, Rev. 2. https://doi.org/10.6028/NIST.SP.800-37r2\n\n[NISTPF] National Institute of Standards and Technology (2020) NIST Privacy Framework: A Tool for Improving Privacy Through Enterprise Risk Management, Version 1.0. (National Institute of Standards and Technology, Gaithersburg, MD), NIST Cybersecurity White Paper (CSWP) NIST CSWP 10. https://doi.org/10.6028/NIST.CSWP.10\n\n[PrivacyAct] Privacy Act of 1974, Pub. L. 93-579, 5 U.S.C. § 552a, 88 Stat. 1896 (1974). https://www.govinfo.gov/content/pkg/USCODE-2018-title5/pdf/USCODE-2018-title5-partI-chap5-subchapII-sec552a.pdf\n\n[RFC5246] Rescorla E, Dierks T (2008) The Transport Layer Security (TLS) Protocol Version 1.2. (Internet Engineering Task Force (IETF)), IETF Request for Comments (RFC) 5246. https://doi.org/10.17487/RFC5246\n\n[RFC5280] Cooper D, Santesson S, Farrell S, Boeyen S, Housley R, Polk W (2008) Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile. (Internet Engineering Task Force (IETF)), IETF Request for Comments (RFC) 5280. https://doi.org/10.17487/RFC5280\n\n[RFC9325] Sheffer Y, Saint-Andre P, Fossati T (2022) Recommendations for Secure Use of Transport Layer Security (TLS) and Datagram Transport Layer Security (DTLS). 
(Internet Engineering Task Force (IETF)), IETF Request for Comments (RFC) 9325. https://doi.org/10.17487/RFC9325\n\n[Section508] Section 508 of the Rehabilitation Act of 1973 (2011), 29 U.S.C. § 794(d). https://www.govinfo.gov/content/pkg/USCODE-2011-title29/html/USCODE-2011-title29-chap16-subchapV-sec794d.htm\n\n[SP800-30] Blank R, Gallagher P (2012) Guide for Conducting Risk Assessments. (National Institute of Standards and Technology, Gaithersburg, MD) NIST Special Publication (SP) 800-30 Revision 1. https://doi.org/10.6028/NIST.SP.800-30r1\n\n[SP800-52] McKay K, Cooper D (2019) Guidelines for the Selection, Configuration, and Use of Transport Layer Security (TLS) Implementations. (National Institute of Standards and Technology), NIST Special Publication (SP) 800-52 Rev. 2. https://doi.org/10.6028/NIST.SP.800-52r2\n\n[SP800-53] Joint Task Force (2020) Security and Privacy Controls for Information Systems and Organizations. (National Institute of Standards and Technology, Gaithersburg, MD), NIST Special Publication (SP) 800-53 Rev. 5, Includes updates as of December 10, 2020. https://doi.org/10.6028/NIST.SP.800-53r5\n\n[SP800-57Part1] Barker EB (2020) Recommendation for Key Management: Part 1 – General. (National Institute of Standards and Technology, Gaithersburg, MD), NIST Special Publication (SP) 800-57 Part 1, Rev. 5. https://doi.org/10.6028/NIST.SP.800-57pt1r5\n\n[SP800-63A] Temoshok D, Abruzzi C, Choong YY, Fenton JL, Galluzzo R, LaSalle C, Lefkovitz N, Regenscheid A (2024) Digital Identity Guidelines: Identity Proofing and Enrollment. (National Institute of Standards and Technology, Gaithersburg, MD), NIST Special Publication (SP) 800-63A-4 2pd. https://doi.org/10.6028/NIST.SP.800-63a-4.2pd\n\n[SP800-63B] Temoshok D, Fenton JL, Choong YY, Lefkovitz N, Regenscheid A, Galluzzo R, Richer JP (2024) Digital Identity Guidelines: Authentication and Authenticator Management. 
(National Institute of Standards and Technology, Gaithersburg, MD), NIST Special Publication (SP) 800-63B-4 2pd. https://doi.org/10.6028/NIST.SP.800-63b-4.2pd\n\n[SP800-63C] Temoshok D, Richer JP, Choong YY, Fenton JL, Lefkovitz N, Regenscheid A, Galluzzo R (2024) Digital Identity Guidelines: Federation and Assertions. (National Institute of Standards and Technology, Gaithersburg, MD), NIST Special Publication (SP) 800-63C-4 2pd. https://doi.org/10.6028/NIST.SP.800-63c-4.2pd\n\n[SP800-122] McCallister E, Grance T, Scarfone KA (2010) Guide to Protecting the Confidentiality of Personally Identifiable Information (PII). (National Institute of Standards and Technology, Gaithersburg, MD), NIST Special Publication (SP) 800-122. https://doi.org/10.6028/NIST.SP.800-122\n\n[SP1270] Schwartz R, Vassilev A, Greene K, Perine L, Burt A, Hall P (2022) Towards a standard for identifying and managing bias in artificial intelligence. (National Institute of Standards and Technology (U.S.), Gaithersburg, MD), NIST SP 1270. https://doi.org/10.6028/NIST.SP.1270\n\n[US-AI-Safety-Inst] U.S. Artificial Intelligence Safety Institute (2023) NIST. Available at https://www.nist.gov/aisi\n"
} ,
{
"title" : "Introduction",
"category" : "",
"tags" : "",
"url" : "/800-63-4/sp800-63a/introduction/",
"date" : "2024-08-28 20:39:12 -0500",
"content" : "Introduction\n\nThis section is informative.\n\nOne of the challenges of providing online services is being able to associate a set of activities with a single, known individual. While there are situations where this is not necessary, there are other situations where it is important to reliably establish an association with a real-life subject. Examples of this include accessing government services and executing financial transactions. There are also situations where association with a real-life subject is required by regulations (e.g., the financial industry’s ‘Customer Identification Program’ requirements) or to establish accountability for high-risk actions (e.g., changing the release rate of water from a dam).\n\nThis guidance defines identity proofing as the process of establishing, to some degree of assurance, a relationship between a subject accessing online services and a real-life person. This document provides guidance for Federal Agencies, third-party Credential Service Providers (CSP), and other organizations that provide or use identity proofing services.\n\nExpected Outcomes of Identity Proofing\n\nThe expected outcomes of identity proofing include:\n\n\n Identity resolution: Determine that the claimed identity corresponds to a single, unique individual within the context of the population of users served by the CSP or online service.\n Evidence validation: Confirm that supplied evidence is genuine, authentic, and valid.\n Attribute validation: Confirm the accuracy of the core attributes. 
Core attributes are the minimum set required for identity proofing.\n Identity verification: Confirm that the claimant is the genuine owner of the presented evidence and attributes.\n Identity enrollment: Enroll the identity proofed applicant in the CSP’s identity service as a subscriber.\n Fraud mitigation: Detect, respond to, and prevent access to benefits, services, data, or assets using a fraudulent identity.\n\n\nIdentity proofing services are expected to incorporate privacy-enhancing principles, such as data minimization, as well as employ good usability practices, to minimize the burden on applicants while still accomplishing the expected outcomes.\n\nIdentity Assurance Levels\n\nAssurance (confidence) in a subscriber’s identity is established using the processes associated with the defined Identity Assurance Levels (IAL). Each successive IAL builds on the requirements of lower IALs in order to achieve increased assurance.\n\nNo identity proofing: There is no requirement to link the applicant to a specific, real-life person. Any attributes provided in conjunction with the subject’s activities are self-asserted or are treated as self-asserted. Evidence is not validated and attributes are neither validated nor verified.\n\nIAL1: The identity proofing process supports the real-world existence of the claimed identity and provides some assurance that the applicant is associated with that identity. Core attributes are obtained from identity evidence or self-asserted by the applicant. All core attributes (see Sec. 2.2) are validated against authoritative or credible sources and steps are taken to link the attributes to the person undergoing the identity proofing process. Identity proofing is performed using remote or onsite processes, with or without the attendance of a CSP representative (proofing agent or trusted referee). 
Upon the successful completion of identity proofing, the applicant is enrolled into a subscriber account and any authenticators, including subscriber-provided authenticators, can then be bound to the account. IAL1 is designed to limit highly scalable attacks, provide protection against synthetic identities, and provide protections against attacks using compromised PII.\n\nIAL2: IAL2 adds additional rigor to the identity proofing process by requiring the collection of additional evidence and a more rigorous process for validating the evidence and verifying the identity. In addition to those threats addressed by IAL1, IAL2 is designed to limit scaled and targeted attacks, provide protections against basic evidence falsification and evidence theft, and provide protections against basic social engineering tactics.\n\nIAL3: IAL3 adds the requirement for a trained CSP representative (proofing agent) to interact directly with the applicant, as part of an on-site attended identity proofing session, and the collection of at least one biometric. The successful on-site identity proofing session concludes with the enrollment of the applicant into a subscriber account and the delivery of one or more authenticators associated (bound) to that account. IAL3 is designed to limit more sophisticated attacks, provide protections against advanced evidence falsification, theft, and repudiation, and provide protection against more advanced social engineering tactics.\n\n\\clearpage\n\n\nNotations\n\nThis guideline uses the following typographical conventions in text:\n\n\n Specific terms in CAPITALS represent normative requirements. 
When these same terms are not in CAPITALS, the term does not represent a normative requirement.\n \n The terms “SHALL” and “SHALL NOT” indicate requirements to be followed strictly in order to conform to the publication and from which no deviation is permitted.\n The terms “SHOULD” and “SHOULD NOT” indicate that among several possibilities, one is recommended as particularly suitable without mentioning or excluding others, that a certain course of action is preferred but not necessarily required, or that (in the negative form) a certain possibility or course of action is discouraged but not prohibited.\n The terms “MAY” and “NEED NOT” indicate a course of action permissible within the limits of the publication.\n The terms “CAN” and “CANNOT” indicate a possibility and capability—whether material, physical, or causal—or, in the negative, the absence of that possibility or capability.\n \n \n\n\nDocument Structure\n\nThis document is organized as follows. Each section is labeled as either normative (i.e., mandatory for compliance) or informative (i.e., not mandatory).\n\n\n Section 1 provides an introduction to the document. This section is informative.\n Section 2 describes requirements for identity proofing. This section is normative.\n Section 3 describes general requirements for IALs. This section is normative.\n Section 4 describes requirements for specific IALs. This section is normative.\n Section 5 describes subscriber accounts. This section is normative.\n Section 6 provides security considerations. This section is informative.\n Section 7 provides privacy considerations. This section is informative.\n Section 8 provides usability considerations. This section is informative.\n Section 9 provides equity considerations. This section is informative.\n References contains a list of publications referred to from this document. This section is informative.\n Appendix A provides a non-exhaustive list of types of identity evidence, grouped by strength. 
This appendix is informative.\n Appendix B contains a selected list of abbreviations used in this document. This appendix is informative.\n Appendix C contains a glossary of selected terms used in this document. This appendix is informative.\n Appendix D contains a summarized list of changes in this document’s history. This appendix is informative.\n\n"
} ,
{
"title" : "Identity Proofing Overview",
"category" : "",
"tags" : "",
"url" : "/800-63-4/sp800-63a/proofing/",
"date" : "2024-08-28 20:39:12 -0500",
"content" : "Identity Proofing Overview\n\nThis section is normative.\n\nThis section provides an overview of the identity proofing and enrollment process, as well as requirements to support the resolution, validation, and verification of the identity claimed by an applicant. It also provides guidelines on additional aspects of the identity proofing process. These requirements are intended to ensure that the claimed identity exists in the real world and that the applicant is the individual associated with that identity.\n\nAdditionally, these guidelines provide for multiple methods by which resolution, validation, and verification can be accomplished, as well as providing the multiple types of identity evidence that support the identity proofing process. CSPs and organizations SHALL provide options when implementing their identity proofing services and processes to promote access for applicants with different means, capabilities, and technology access. These options SHOULD include accepting multiple types and combinations of identity evidence; supporting multiple data validation sources; enabling multiple methods for verifying identity; providing multiple channels for engagement (e.g., onsite, remote); and offering assistance mechanisms for applicants (e.g., applicant references).\n\nCSPs SHALL evaluate the risks associated with each identity proofing option offered (e.g., identity proofing types, validation sources, assistance mechanisms) and implement mitigating fraud controls, as appropriate. At a minimum, CSPs SHALL design each option such that, in aggregate, the options provide comparable assurance.\n\nIdentity Proofing and Enrollment\n\nThe objective of identity proofing is to ensure that, to a stated level of certainty, the applicant involved in the identity proofing process is who they claim to be. This document presents a three-step process for CSPs to identity proof applicants at designated assurance levels. 
The first step, identity resolution, consists of collecting appropriate identity evidence and attribute information to determine that the applicant is a unique identity in the population served by the CSP and is a real-life person. The second step, identity validation, validates the genuineness, accuracy, and validity of the evidence and attribute information collected in the first step. The third step, identity verification, confirms that the applicant presenting the identity evidence is the same individual to whom the validated evidence was issued and with whom the validated attributes are associated. In most cases, upon successfully identity proofing an applicant to the designated IAL, the CSP establishes a unique subscriber account for the applicant (now a subscriber in the identity service), which allows one or more authenticators to be bound to the proven identity in the account.\n\nIdentity proofing can be part of an organization’s business processes that support the determination of suitability or entitlement to a benefit or service. While these guidelines provide guidance for appropriate levels of identity assurance, suitability and eligibility determinations for benefits or services are distinct business process decisions from these identity proofing processes and are outside the scope of these guidelines.\n\nProcess Flow\n\nThis subsection is informative.\n\nFigure 1 provides an illustrative example of the three-step identity proofing process.\n\nFig. 1. Identity Proofing Process\n\n\n\nThe following steps present a common workflow example for IAL2 remote identity proofing, which is intended to illustrate the workflow steps for this example. 
These steps are not intended to represent a normative processing workflow model.\n\n\n \n Resolution\n\n \n The CSP captures one or more pieces of identity evidence, such as a driver’s license, mobile driver’s license, or passport.\n The CSP collects any additional attributes, as needed, from the applicant to supplement those contained on the presented identity evidence.\n \n \n \n Validation\n\n \n The CSP confirms the presented evidence is authentic, accurate, and valid (e.g., not revoked).\n The CSP validates the attributes obtained in step 1 by checking them against authoritative or credible validation sources.\n \n \n \n Verification\n\n \n The CSP employs one of the IAL2 Verification Pathways to confirm the applicant is the genuine owner of the presented identity evidence.\n \n\n Enrollment\n\n Upon the successful completion of the three identity proofing steps, a notification of proofing is sent to a validated address, and the applicant can be enrolled into a subscriber account with the CSP, as described in Section 5. A subscriber account includes at least one validated address (e.g., phone number, mailing address) that can be used to communicate with the subscriber about their account. Additionally, one or more authenticators are bound to the proven identity in the subscriber account.\n \n\n\nIdentity Proofing Roles\n\nTo support the delivery of identity proofing that meets the various needs of applicants and risk scenarios, different individuals would be expected to play different roles within the proofing process. 
To support the consistent implementation of these guidelines, the following identity proofing roles are defined:\n\n\n Proofing Agent - An agent of the CSP who is trained to attend identity proofing sessions, either onsite or remotely, and make limited, risk-based decisions – such as visually inspecting identity evidence and making a determination that the evidence has not been altered.\n Trusted Referee - An agent of the CSP who is trained to make risk-based decisions regarding an applicant’s identity proofing case when that applicant is unable to meet expected requirements of a defined IAL proofing process. Unlike a Proofing Agent (although a trusted referee may also fulfill that role), a trusted referee’s training is expected to be more substantial, including training to detect deception and signs of social engineering, in addition to the ability to support validation and verification through physical inspection of the evidence and visual comparison of the applicant to a reference facial image. Requirements for trusted referees are contained in Sec. 3.1.13.1.\n \n Note: Trusted referees differ from proofing agents in that trusted referees receive additional training and resources to support exception handling scenarios, including when applicants do not possess the required identity evidence or the attributes on the evidence do not all match the claimed identity (e.g., due to a recent name or address change).\n \n \n Applicant Reference - A representative of the applicant who can vouch for the identity of the applicant, specific attributes related to the applicant, or conditions relative to the context of the individual (e.g., emergency status, homelessness). This individual does not act on behalf of the applicant in the identity proofing process but is a resource that can be called on to support claims of identity. Requirements for applicant references are contained in Sec. 
3.1.13.3.\n Process Assistant - An individual who provides support for the proofing process but does not support decision making or risk-based evaluation (e.g., translation, transcription, or accessibility support). Process assistants may be provided by the CSP or the applicant.\n\n\nCSPs SHALL identify which of the above roles are applicable to their identity service and SHALL provide training and support resources consistent with the requirements and expectations provided in Sec. 3.\n\nIdentity Proofing Types\n\nThe ability to provide resolution, validation, and verification as part of an identity proofing process is delivered through a combination of technologies, communication channels, and identity proofing roles to support the diverse users, communities, and relying parties CSPs serve. The types of proofing can be categorized based on two specific factors – whether they are attended and where they take place.\n\n\n Remote Unattended Identity Proofing – Identity proofing conducted where the resolution, validation, and verification processes are completely automated and interaction with a proofing agent is not required. The location and devices used in the proofing process are not controlled by the CSP.\n Remote Attended Identity Proofing – Identity proofing conducted where the applicant completes resolution, validation, and verification steps through a secure video session with a proofing agent. The location and devices used in the proofing process are not controlled by the CSP.\n Onsite Unattended Identity Proofing - Identity proofing conducted where an individual interacts with a controlled workstation or kiosk, but interaction with a proofing agent is not required. 
The process is fully automated, but at a physical location and on devices approved by the CSP.\n Onsite Attended Identity Proofing - Identity proofing conducted in a physical setting where the applicant completes the entire identity proofing process - to include resolution, validation, and verification – in the presence of a proofing agent. The proofing agent may be co-located with the user or interact with the user via a kiosk or device. The physical location and devices are all approved by the CSP.\n\n\nRequirements at each assurance level are structured to allow CSPs to implement different combinations of proofing types to meet the requirements of different assurance levels (as appropriate). CSPs that offer IAL1 and IAL2 services SHALL provide a Remote Unattended identity proofing process and SHALL offer at least one attended identity proofing process option. CSPs that offer IAL1 and IAL2 services SHOULD support identity proofing processes that allow the applicant to transition between proofing types in the event an applicant is unsuccessful with one type (e.g., allow an applicant who fails remote unattended to transition to remote attended).\n\nCore Attributes\n\nThe identity proofing process involves the presentation and validation of the minimum attributes necessary to accomplish identity proofing - this includes what is needed to complete resolution, validation, and verification. 
While the necessary core attributes for a given use case will change based on the nature of the community being served, the following attributes SHOULD be collected by CSPs to support the proofing process:\n\n\n First Name: The applicant’s given name.\n Middle Name or Initial: The applicant’s middle name or initial if available.\n Last Name: The applicant’s last name or family name as appropriate.\n Government Identifier: A unique identifier which is associated with the applicant in government records (e.g., SSN, TIN, Driver’s License #).\n Physical Address: A physical address to which the applicant can receive communications related to the proofing process; or a Digital Address: A digital address (e.g., phone or email) to which the applicant can receive communications related to the proofing process.\n\n\nAdditional attributes may be added to these as required by the CSP and RP. The CSP and RP SHALL document all core attributes in trust agreements and practice statements. Following a privacy risk assessment, a CSP MAY request additional attributes that are not required to complete identity proofing, but that may support other RP business processes. See Sec. 3.1.3 for details on privacy requirements for requesting additional attributes.\n\nIdentity Resolution\n\nThe goal of identity resolution is to use the smallest possible set of attributes to uniquely and accurately distinguish an individual within a given population or context. This step involves comparing an applicant’s collected attributes to those stored in records for users served by the CSP. 
While identity resolution is the starting point in the overall identity proofing process, to include the initial detection of potential fraud, it in no way represents a complete and successful identity proofing process.\n\nIdentity Validation and Identity Evidence Collection\n\nThe goal of identity validation is to collect the most appropriate identity evidence from the applicant and determine that it is genuine (not altered or forged), accurate (the pertinent data is correct, current, and related to the applicant), and valid.\n\n\n Note: This document uses the term “valid” rather than “unexpired” in recognition that evidence can remain a useful means to prove identity, even if it is expired or was issued outside a determined timeframe.\n\n\nIdentity evidence collection supports the identity validation process and consists of two steps: 1) the presentation of identity evidence by the identity proofing applicant to the CSP and 2) the determination by the CSP that the presented evidence meets the applicable strength requirements.\n\nEvidence Strength Requirements\n\nThis section defines the requirements for identity evidence at each strength. The strength of a piece of identity evidence is determined by:\n\n\n The issuing rigor,\n The ability to provide confidence in validation, including accuracy and authenticity checks, and\n The ability to provide confidence in the verification of the applicant presenting the evidence.\n\n\nAppendix A of this document provides a non-exhaustive list of possible evidence types, grouped by strength.\n\nFair Evidence Requirements\n\nTo be considered FAIR, identity evidence SHALL meet all the following requirements:\n\n\n The issuing source of the evidence confirmed the claimed identity through a process designed to enable it to form a belief that it knows the real-life identity of the person. 
For example, evidence issued by financial institutions that have customer identity verification obligations under the Customer Identification Program (CIP) Rule implementing Section 326 of the USA PATRIOT Act of 2001, or that have obligations to establish an Identity Theft Prevention Program under the Red Flags Rule and Guidelines, implemented under Sec. 114 of the Fair and Accurate Credit Transaction Act of 2003 (FACT Act).\n It is likely that the evidence-issuing process would result in the delivery of the evidence to the person to whom it relates, such as delivery to a postal address.\n The evidence contains the name of the claimed identity.\n The evidence contains at least one reference number, a facial portrait, or sufficient attributes to uniquely identify the person to whom it relates.\n The evidence contains physical (e.g., security printing, optically variable features, holograms) or digital security features that make it difficult to reproduce.\n The information on the evidence is able to be validated by an authoritative or credible source.\n The evidence is able to be verified through an approved method, as provided in Sec. 2.4.2.2.\n\n\nStrong Evidence Requirements\n\nIn order to be considered STRONG, identity evidence SHALL meet all the following requirements:\n\n\n The issuing source of the evidence confirmed the claimed identity by following written procedures designed to enable it to have high confidence that it knows the real-life identity of the subject. Additionally, these procedures are subject to recurring oversight by regulatory or publicly accountable institutions, such as states, the federal government, and some regulated industries. 
Such procedures would include, but not be limited to, identity proofing at IAL2 or above.\n It is likely that the evidence-issuing process would result in the delivery of the evidence to the person to whom it relates, such as delivery to a postal address.\n The evidence contains the name of the claimed identity.\n The evidence contains a reference number or other attributes that uniquely identify the person to whom it relates.\n The evidence contains a facial portrait or other biometric characteristic of the person to whom it relates.\n The evidence includes physical security features or digital security features that make it difficult to copy or reproduce.\n The information on the evidence is able to be validated by an authoritative or credible source.\n The evidence is able to be verified through an approved method, as provided in Sec. 2.4.2.2.\n\n\nSuperior Evidence Requirements\n\nIn order to be considered SUPERIOR, identity evidence SHALL meet all the following requirements:\n\n\n The issuing source of the evidence confirmed the claimed identity by following written procedures designed to enable it to have high confidence that the source knows the real-life identity of the subject. Additionally, these procedures are subject to recurring oversight by regulatory or publicly accountable institutions, such as states and the federal government, and some regulated industries. 
Such procedures would include, but not be limited to, identity proofing at IAL2 or above.\n The identity evidence contains attributes and data objects that are cryptographically protected and can be validated through verification of a digital signature applied by the issuing source.\n The issuing source had the subject participate in an attended enrollment and identity proofing process that confirmed their physical existence.\n It is likely that the evidence-issuing process would result in the delivery of the evidence to the person to whom it relates, such as delivery to a postal address.\n The evidence contains the name of the claimed identity.\n The evidence contains at least one reference number that uniquely identifies the person to whom it relates.\n The evidence contains a facial portrait or other biometric characteristic of the person to whom it relates.\n If the evidence is physical, then it includes security features that make it difficult to copy or reproduce.\n The evidence is able to be verified through an approved method, as provided in Sec. 2.4.2.2.\n\n\nIdentity Evidence and Attribute Validation\n\nIdentity evidence validation involves examining the presented evidence to confirm it is authentic (not forged or altered), accurate (the information on the evidence is correct), and valid (unexpired or within the CSP’s defined timeframe for issuance or expiration). Attribute validation involves confirming the accuracy of the core attributes, whether obtained from presented evidence or self-asserted. 
The following subsections provide the acceptable methods for evidence and attribute validation.\n\nEvidence Validation\n\nThe CSP SHALL validate the authenticity, accuracy, and validity of presented evidence by confirming:\n\n\n The evidence is in the correct format and includes complete information for the identity evidence type;\n The evidence is not counterfeit and has not been tampered with;\n Any security features on the evidence are present and authentic; and\n The information on the evidence is accurate.\n\n\nEvidence Validation Methods\n\nAcceptable methods for validating presented evidence include:\n\n\n Visual and tactile inspection by trained personnel for onsite identity proofing;\n Visual inspection by trained personnel for remote identity proofing;\n Automated document validation processes using appropriate technologies; and\n Cryptographic verification of the source and integrity of digital evidence, or attribute data objects.\n\n\nAttribute Validation\n\nThe CSP SHALL validate all core attributes, as described in Sec. 2.2, whether obtained from identity evidence or self-asserted by the applicant, with an authoritative or credible source, as in Sec. 2.4.2.4.\n\nValidation Sources\n\nThe CSP SHALL use authoritative or credible sources that meet the following criteria.\n\nAn authoritative source is the issuing source of identity evidence or attributes, or has direct access to the information maintained by issuing sources, such as state DMVs for driver’s license data and the Social Security Administration for Social Security Cards and Social Security Numbers. An authoritative source may also be one that provides or enables direct access to issuing sources of evidence or attributes, such as the American Association of Motor Vehicle Administrators’ Driver’s License Data Verification (DLDV) Service.\n\nA credible source is an entity that can provide or validate the accuracy of identity evidence and attribute information. 
In addition to being subject to regulatory oversight (such as the Fair Credit Reporting Act (FCRA)), a credible source has access to attribute information that can be traced to an authoritative source, or maintains identity attribute information obtained from multiple sources that is correlated for accuracy, consistency, and currency. Examples of credible sources are credit bureaus that are subject to the FCRA.\n\nIdentity Verification\n\nThe goal of identity verification is to establish, to a specified level of confidence, the linkage between the claimed validated identity and the real-life applicant engaged in the identity proofing process. In other words, verification provides assurance that the applicant presenting the evidence is the rightful owner of that evidence.\n\nIdentity Verification Methods\n\nThe CSP SHALL verify the linkage of the claimed identity to the applicant engaged in the identity proofing process through one or more of the following methods. Section 4 provides acceptable verification methods at each IAL.\n\n\n Confirmation Code Verification. The individual is able to demonstrate control of a piece of identity evidence through the return of a confirmation code, consistent with the requirements specified in Sec. 3.1.8.\n Authentication and Federation Protocols. The individual is able to demonstrate control of a digital account (e.g., online bank account) or signed digital assertion (e.g., verifiable credentials) through the use of authentication or federation protocols. This may be done in person, through presentation of the credential to a device or reader, but can also be done during remote identity proofing sessions.\n Micro Transaction. An individual is able to demonstrate control of a piece of evidence by returning a value based on a micro transaction made between the CSP and the issuing source of the evidence (e.g., bank account).\n Onsite-In-person (Attended) visual facial image comparison. 
The proofing agent and applicant interact for the identity proofing event. The proofing agent performs a visual comparison of the facial portrait presented on identity evidence to the face of the applicant engaged in the identity proofing event.\n Remote (Attended or Unattended) visual facial image comparison. The proofing agent performs a visual comparison of the facial portrait presented on identity evidence, or stored by the issuing source, to the facial image of the applicant engaged in the identity proofing event. The proofing agent may interact directly with the applicant during some or all of the identity proofing event (attended) or may conduct the comparison at a later time (unattended) using a captured video or photograph and the uploaded copy of the evidence. If the comparison is performed at a later time, steps are taken to ensure the captured video or photograph was taken from the live applicant present during the identity proofing event.\n Automated (Unattended) biometric comparison. Automated biometric comparison, such as facial recognition or other fully automated algorithm-driven biometric comparison, MAY be performed for onsite or remote identity proofing events. The facial image or other biometric characteristic (e.g., fingerprints, palm prints, iris and retina patterns, voiceprints, or vein patterns) on the identity evidence, or stored in authoritative records, is compared to the facial image in a photograph of the live applicant or other biometric live sample collected from the applicant during the identity proofing event.\n\n\nKnowledge-based verification (KBV) or knowledge-based authentication SHALL NOT be used for identity verification.\n"
} ,
{
"title" : "Identity Proofing Requirements",
"category" : "",
"tags" : "",
"url" : "/800-63-4/sp800-63a/ial-general/",
"date" : "2024-08-28 20:39:12 -0500",
"content" : "Identity Proofing Requirements\n\nThis section is normative.\n\nThis section provides requirements for CSPs that operate identity proofing and enrollment services, including requirements for identity proofing at each of the IALs. This section also includes additional requirements for federal agencies regardless of whether they operate their own identity service or use an external CSP.\n\nSections 4.1, 4.2, and 4.3 provide the requirements and guidelines for identity proofing at a specific IAL. Section 4.4 includes a summarized list of these requirements by IAL in Table 1.\n\nGeneral Requirements\n\nThe requirements in this section apply to all CSPs performing identity proofing at any IAL.\n\nIdentity Service Documentation and Records\n\nThe CSP SHALL conduct its operations according to a practice statement that details all identity proofing processes as they are implemented to achieve the defined IAL. The practice statement SHALL include, at a minimum:\n\n\n A complete service description including the particular steps the CSP follows to identity proof applicants at each offered assurance level;\n The CSP’s policy for providing notice to applicants about the types of identity proofing processes available, the evidence and attribute collection requirements for the specified IAL, the purpose of PII collection (per Sec. 3.1.3.2), and for the collection, use, and retention of biometrics (see Sec. 
3.1.11);\n The CSP’s policy for ensuring the identity proofing process is concluded in a timely manner, once the applicant has met all of the requirements;\n Types of identity evidence the CSP accepts to meet the evidence strength requirements;\n The CSP’s policy and process for validating and verifying identity evidence, including training and qualification requirements for personnel who have validation and verification responsibilities, as well as specific technologies the CSP employs for evidence validation and verification;\n Alternative processes for the CSP to complete identity proofing for an individual applicant who does not possess the required identity evidence to complete the identity proofing process1;\n The attributes that the CSP considers to be core attributes, and the authoritative and credible sources it uses for validating those attributes. Core attributes include the minimum set of attributes that the CSP needs to perform identity resolution as well as any additional attributes that the CSP collects and validates for the purposes of identity proofing, fraud mitigation, complying with laws or legal process, or conveying to relying parties (RPs) through attribute assertions;\n The CSP’s policy and process for addressing identity proofing errors;\n The CSP’s policy and process for identifying and remediating suspected or confirmed fraudulent accounts, including communicating with RPs and affected individuals;\n The CSP’s policy for managing and communicating service changes (e.g., changes in data sources, integrated vendors, or biometric algorithms) to RPs;\n The CSP’s policy for any conditions that would require re-verification of the user (e.g., account recovery, account abandonment, regulatory “recertification” requirements);\n The CSP’s policy for conducting privacy risk assessments, including the timing of its periodic reviews and specific conditions that will trigger an updated privacy risk assessment (see Sec. 
3.1.3.1);\n The CSP’s policy for conducting assessments to determine potential equity impacts, including the timing of its periodic reviews and any specific conditions that will trigger an out-of-cycle review (see Sec. 3.1.4); and,\n The CSP’s policy for the collection, purpose, retention, protection, and deletion of all personal and sensitive data, including the treatment of all personal information if the CSP ceases operation or merges or transfers operations to another CSP.\n\n\n\n Note: 800-63C references the use of trust agreements to define requirements between an IdP, CSP, and RP in a federated relationship. CSP practice statements MAY be included directly in these agreements.\n\n\nFraud Management\n\nA critical aspect of the identity proofing process is to mitigate fraudulent attempts to gain access to benefits, services, data, or assets protected by identity management systems. In many instances, resolution, validation, and verification processes can mitigate attacks. However, with the constantly changing threat environment, the layering of additional checks and controls can provide increased confidence in proofed identities and additional protections against attacks intended to defeat other controls. The ability to identify, detect, and resolve instances of potential fraud is a critical functionality for CSPs and RPs alike.\n\nCSP Fraud Management\n\n\n CSPs SHALL establish and maintain a fraud management program that provides fraud identification, detection, investigation, reporting, and resolution capabilities. 
The specific capabilities and details of this program SHALL be documented within their CSP practice statement.\n CSPs SHALL conduct a Privacy Risk Assessment of all fraud checks and fraud mitigation technologies prior to implementation.\n The CSP SHALL establish a self-reporting mechanism and investigation capability for subjects who believe they have been the victim of fraud or an attempt to compromise their involvement in the identity proofing processes.\n The CSP SHALL take measures to prevent unsuccessful applicants from using the identity proofing process to confirm the accuracy of self-asserted information against information held by authoritative or credible sources.\n NOTE: This is often called “data washing” and can be prevented through a number of methods, depending on the interfaces deployed by a CSP. As such, these guidelines do not dictate specific mechanisms to prevent this practice.\n CSPs SHALL implement the following fraud check for all identity proofing processes:\n \n Date of Death Check – Confirm with a credible, authoritative, or issuing source that the applicant is not deceased. Such checks can aid in preventing synthetic identity fraud, the use of stolen identity information, and exploitation by a close associate or relative.\n \n \n CSPs SHOULD implement – but are not limited to – the following fraud checks for their identity proofing processes based on their available identity proofing types, selected technologies, evidence, and user base:\n \n SIM Swap Detection – Confirm that the phone number used in an identity proofing process has not been recently ported to a new user or device. Such checks can provide an indication that a phone or device was compromised by a targeted attack.\n Device or Account Tenure Check – Evaluate the length of time a phone or other account has existed without substantial modifications or changes. 
Such checks can provide additional confidence in the reliability of a device or piece of evidence used in the identity proofing process.\n Transaction Analytics – Evaluate anticipated transaction characteristics – such as IP Addresses, geolocations, and transaction velocities – to identify anomalous behaviors or activities that can indicate a higher risk or a potentially fraudulent event. Such checks can provide protections against scaled and automated attacks, as well as give indications of specific attack patterns being executed on identity systems.\n Fraud Indicator Check – Evaluate records, such as reported, confirmed, or historical fraud events to determine if there is an elevated risk related to a specific applicant, applicant’s data, or device. Such checks can give an indication of identity theft or compromise. Where such information is collected, aggregated, or exchanged across commercial platforms and made available for use to RPs and other CSPs, users SHALL be made aware of this practice. 
This also applies to all websites that report user activity to Federal RPs.\n \n \n CSPs MAY employ knowledge-based verification (KBV) as part of their fraud management program.\n CSPs SHOULD consider the recency of fraud-related data when factoring it into fraud prevention capabilities and decisions.\n For attended proofing processes, CSPs SHALL train proofing agents to detect indicators of fraud and SHALL provide proofing agents and trusted referees with tools to flag suspected fraudulent events for further treatment and investigation.\n CSPs SHALL continuously monitor the performance of their fraud checks and fraud mitigation technologies to identify and remediate issues related to disparate performance across their platforms or between the demographic groups served by their identity service.\n CSPs SHALL establish a technical or process-based mechanism to communicate suspected and confirmed fraudulent events to RPs.\n CSPs SHOULD implement shared signaling, as described in NIST SP 800-63C-4, to communicate fraud events in real time to RPs.\n CSPs MAY implement fraud mitigation measures as compensating controls. 
When this is done, these SHALL be documented as deviations from the normative guidance of these guidelines and SHALL be conveyed to all RPs through a Digital Identity Acceptance Statement prior to integration.\n\n\nRP Fraud Management\n\n\n RPs SHALL establish a point of contact with whom CSPs interact and communicate fraud data.\n Pursuant to a privacy risk assessment, the RP MAY also request additional attributes beyond what a CSP provides as its core attributes to combat fraud or to support other business processes.\n Pursuant to applicable laws and regulations, RPs SHOULD establish a mechanism to communicate the outcomes of fraud reports and investigations, including both positive and negative results, to CSPs and other partners in order to allow them to improve their own fraud identification, mitigation, and reporting capabilities.\n RPs SHOULD establish a fraud management program consistent with their mission, regulatory environment, systems, applications, data, and resources.\n RPs SHALL conduct a privacy risk assessment of any CSP fraud checks and mitigation technologies to identify potential privacy risks or unintended harms. Federal agency RPs SHALL implement this consistent with the requirements contained in Sec. 3.1.7.\n RPs SHALL include any requirements for fraud checks and fraud mitigation technologies in trust agreements with their CSPs.\n RPs SHALL conduct periodic reviews of their CSP’s fraud management program, fraud checks, and fraud technologies to adjust thresholds, review investigations into fraud events, and make determinations about the effectiveness and efficacy of fraud controls.\n RPs SHALL review all fraud mitigation measures that have been deployed as compensating or supplemental controls by CSPs for alignment to their internal risk tolerance and acceptance. 
The RP SHALL record the CSP’s compensating controls in their own Digital Identity Acceptance Statement prior to integration.\n\n\nTreatment of Fraud Check Failures\nThe effectiveness of fraud checks and mitigation technologies will vary based on numerous contributing factors including the data sources used, the technologies used, and – perhaps most importantly – the applicant population. It is therefore critical to have well-structured and documented processes for how to handle failures arising from the fraud management measures. The following requirements apply to handling these failures:\n\n\n CSPs SHALL establish and document thresholds and actions related to each of the fraud checks they implement and provide these thresholds to RPs.\n CSPs SHALL establish procedures for redress to allow applicants to resolve issues associated with fraud checks and mitigation technologies. See Sec. 3.6 of [SP800-63] for more information about redress.\n The CSP SHALL offer trusted referee services to those who fail fraud checks in unattended remote processes. 
Trusted referees SHALL be provided with a summary of the results of the fraud failures to inform their risk-based decisioning processes.\n\n\nGeneral Privacy Requirements\n\nThe following privacy requirements apply to all CSPs providing identity services at any IAL.\n\nPrivacy Risk Assessment\n\n\n The CSP SHALL conduct and document a privacy risk assessment for the processes used for identity proofing and enrollment.2 At a minimum, the privacy risk assessment SHALL assess the risks associated with:\n \n Any processing of PII - including identity attributes, biometrics, images, video, scans, or copies of identity evidence - for the purposes of identity proofing, enrollment, or fraud management;\n Any additional steps that the CSP takes to verify the identity of an applicant beyond the mandatory requirements specified herein;\n Any processing of PII for purposes outside the scope of identity proofing and enrollment, except to comply with law or legal processes;\n The retention schedule for identity records and PII;\n Any non-PII that, when aggregated or processed by an algorithm (e.g., artificial intelligence or machine learning tools), could be used to identify a person; and,\n Any PII that is processed by a third-party service on behalf of the CSP.\n \n \n Based on the results of its privacy risk assessment, the CSP SHALL document the measures it takes to maintain the disassociability, predictability, manageability, confidentiality, integrity, and availability of the PII it processes.
3 In determining such measures, the CSP SHOULD apply relevant guidance and standards, such as the NIST Privacy Framework [NIST-Privacy] and NIST Special Publication [SP800-53].\n The CSP SHALL re-assess privacy risks and update its privacy risk assessment any time it makes changes to its identity service that affect the processing of PII.\n The CSP SHALL review its privacy risk assessment periodically, as documented in its practice statement, to ensure that it accurately reflects the current risks associated with the processing of PII.\n The CSP SHALL make a summary of its privacy risk assessment available to any organizations that use its services. The summary SHALL be in sufficient detail to enable such organizations to conduct a due diligence investigation.\n The CSP SHALL perform a privacy risk assessment for the processing of any personal information maintained in the subscriber account (Sec. 5).\n\n\nAdditional Privacy Protective Measures\n\n\n The processing of PII SHALL be limited to the minimum necessary to validate the existence of the claimed identity, associate the claimed identity with the applicant, mitigate fraud, and provide RPs with attributes that they may use to make authorization decisions.\n The CSP SHALL provide privacy training to all personnel and any third-party service providers who have access to sensitive information associated with the CSP’s identity service.\n The CSP MAY collect the Social Security Number (SSN) as an attribute when necessary for identity resolution, in accordance with the privacy requirements in Sec. 3.1.3. If SSNs are collected, CSPs SHOULD implement privacy protective techniques (e.g., transmitting and accepting derived attribute values rather than full attribute values) to limit the proliferation and retention of SSN data. Knowledge of an SSN is not sufficient to act as evidence of identity nor is it considered an acceptable method of verifying possession of the Social Security Card when used as evidence.
If the SSN is collected on behalf of a federal, state, or local government agency, the CSP SHALL provide notice of the collection to the applicant in accordance with applicable laws.\n At the time of collection, the CSP SHALL provide explicit notice to the applicant regarding the purpose for collecting the PII and attributes necessary for identity proofing, enrollment, and fraud mitigation, including whether such PII and attributes are voluntary or mandatory to complete the identity proofing process, the specific attributes and other sensitive data that the CSP intends to store in the applicant’s subsequent subscriber account, the consequences for not providing the attributes, and the details of any records retention requirement if one is in place.\n\n\nGeneral Equity Requirements\n\nIn support of the goal of improved equity, and as part of their overall risk assessment processes, CSPs assess the elements of their identity services to identify processes and technologies that may result in inequitable access, treatment, or outcomes for members of one group as compared to others. Where risks to equity are identified, CSPs proactively employ mitigations that will reduce or eliminate these discrepancies between demographic groups. Sec. 9 of this document provides a non-exhaustive list of identity proofing processes and technologies that may be subject to inequitable access or outcomes, as well as possible mitigations.\n\nExecutive order [EO13985], Advancing Racial Equity and Support for Underserved Communities Through the Federal Government, requires each federal agency to assess whether, and to what extent, its programs and policies perpetuate systemic barriers to opportunities and benefits for people of color and other underserved groups.
Additionally, executive order [EO13988], Preventing and Combating Discrimination on the Basis of Gender Identity or Sexual Orientation, sets policy that all persons be treated with respect, regardless of gender identity or sexual orientation, and requires agencies to enforce prohibitions on sex discrimination.\n\nCSPs SHOULD review the methods of assessment, data collection, and redress outlined in OMB Report, A Vision for Equitable Data: Recommendations from the Equitable Data Working Group [EO13985-vision] for the development of their equity assessment policies and practices.\n\nThe following requirements apply to all CSPs providing identity services at any IAL:\n\n\n The CSP SHALL assess the elements of its identity proofing process(es) to identify processes or technologies that can result in inequitable access, treatment, or outcomes for members of one group as compared to others.\n Based on the results of its assessment, the CSP SHALL document the measures it takes to mitigate the possibility of inequitable access, treatment, or outcomes.\n The CSP SHALL re-assess the risks to equitable access, treatment, or outcomes periodically and any time the CSP makes changes to its identity service that affect the processes or technologies.\n The CSP SHALL NOT make applicant participation in these risk assessments mandatory.\n The CSP SHALL make the results of its equity assessment, including any associated mitigations, publicly available.\n\n\nGeneral Security Requirements\n\n\n Each online transaction within the identity proofing process, including transactions that involve third parties, SHALL occur over an authenticated protected channel.\n The CSP SHALL implement a means to prevent automated attacks on the identity proofing process.
Acceptable means include, but are not limited to: bot detection, mitigation, and management solutions; behavioral analytics4; web application firewall settings; and network traffic analysis.\n All PII collected as part of the identity proofing process SHALL be protected to maintain the confidentiality and integrity of the information, including the encryption of data at rest and the exchange of information using authenticated protected channels.\n The CSP SHALL assess the risks associated with operating its identity service, according to the NIST Risk Management Framework [NIST-RMF] or equivalent risk management guidelines. At a minimum, the CSP SHALL apply appropriate controls consistent with the [SP800-53] moderate baseline, regardless of IAL.\n The CSP SHOULD assess risks associated with its use of third-party services and apply appropriate controls, as provided in [SP800-161], Cybersecurity Supply Chain Risk Management Practices for Systems and Organizations.\n\n\nRedress Requirements\n\n\n The CSP SHALL provide mechanisms for the redress of applicant complaints and for problems arising from the identity proofing process, including but not limited to: proofing failures, delays, and difficulties.\n These mechanisms SHALL be easy for applicants to find and use.\n The CSP SHALL assess the mechanisms for their efficacy in achieving a resolution of complaints or problems.\n\n\nSee Sec.
3.6 of [SP800-63] for more information about redress.\n\nAdditional Requirements for Federal Agencies\n\nThe following requirements apply to federal agencies, regardless of whether they operate their own identity service or use an external CSP as part of their identity service:\n\n\n The agency SHALL consult with their Senior Agency Official for Privacy (SAOP) to conduct an analysis determining whether the collection of PII, including biometrics, to conduct identity proofing triggers Privacy Act requirements.\n The agency SHALL consult with their SAOP to conduct an analysis determining whether the collection of PII, including biometrics, to conduct identity proofing triggers E-Government Act of 2002 [E-Gov] requirements.\n The agency SHALL publish a System of Records Notice (SORN) to cover such collection, as applicable.5\n The agency SHALL publish a Privacy Impact Assessment (PIA) to cover such collection, as applicable.\n The agency SHALL consult with the senior official, office, or governance body responsible for diversity, equity, inclusion, and accessibility (DEIA) for their agency to determine how the identity proofing service can meet the needs of all served populations.\n The agency SHOULD consult with public affairs and communications professionals within their organization to determine if a communications or public awareness strategy should be developed to accompany the roll-out of any new process, or an update to an existing process, including requirements associated with identity proofing. 
This may include materials detailing information about how to use the technology associated with the service, a Frequently Asked Questions (FAQs) page, prerequisites to participate in the identity proofing process (such as required evidence), webinars or other live or pre-recorded information sessions, or other media to support adoption of the identity service and provide applicants with a mechanism to communicate questions, issues, and feedback.\n If the agency uses a third-party CSP, the agency SHALL conduct its own privacy risk assessment as part of its PIA process, using the CSP’s privacy risk assessment as input to the agency’s assessment.\n If the agency uses a third-party CSP, the agency SHALL incorporate the CSP’s assessment of equity risks into its own assessment of equity risks.\n\n\nRequirements for Confirmation Codes\n\nThis section includes requirements for CSPs that support the use of confirmation codes.\n\nConfirmation codes are used to confirm that an applicant has access to a postal address, email address, or phone number, for the purposes of future communications. They are also used as an identity verification option at IALs 1 and 2, as described in Sec. 2.5.1.\n\nConfirmation codes used for these purposes SHALL include at least 6 decimal digits (or equivalent) from an approved random bit generator (see Sec. 3.2.12 of [SP800-63B]). 
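As a non-normative sketch of the generation requirement above, a 6-decimal-digit code can be drawn uniformly from a CSPRNG; here Python's secrets module stands in for an approved random bit generator, and the function name is illustrative:

```python
import secrets

def generate_confirmation_code(digits: int = 6) -> str:
    """Draw a uniformly distributed decimal confirmation code from a CSPRNG.

    secrets.randbelow() avoids modulo bias, and zfill() preserves leading
    zeros so every code has the full number of digits.
    """
    if digits < 6:
        raise ValueError("confirmation codes require at least 6 decimal digits")
    return str(secrets.randbelow(10 ** digits)).zfill(digits)
```

A production implementation would additionally need to use a validated random bit generator and enforce the expiration and single-use rules described in this section.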
The confirmation code MAY be presented as a numeric or alphanumeric (e.g., Base64) value for manual entry; as a secure (e.g., https) link containing a representation of the confirmation code; or as a machine-readable optical label, such as a QR code, containing the confirmation code.\n\nConfirmation codes SHALL be valid for at most:\n\n\n 21 days, when sent to a validated postal address within the contiguous United States;\n 30 days, when sent to a validated postal address outside the contiguous United States;\n 10 minutes, when sent to a validated telephone number (SMS or voice); or\n 24 hours, when sent to a validated email address.\n\n\nUpon its use, the CSP SHALL invalidate the confirmation code.\n\nRequirements for Continuation Codes\n\nThis section includes requirements for CSPs that support the use of continuation codes.\n\nContinuation codes are used to re-establish an applicant’s linkage to an incomplete identity proofing or enrollment process. CSPs MAY use continuation codes when an applicant is unable to complete all the steps necessary to be successfully identity proofed and enrolled into the CSP’s identity service in a single session, or when switching between different identity proofing types (such as from remote unattended to remote attended). Continuation codes are intended to be maintained offline (e.g., printed or written down) and stored in a secure location by the applicant for use in re-establishing linkage to a previous, incomplete session.\n\nIn order to facilitate the authentication of the applicant to a subsequent session, the CSP SHALL first bind an authenticator to a record or account established for the applicant prior to the cessation of the initial session. Continuation codes SHALL include at least 64 bits from an approved random bit generator (see Sec. 3.2.12 of [SP800-63B]).
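A minimal, non-normative sketch of generating such a code and preparing the hashed form that these guidelines require for storage; secrets stands in for an approved random bit generator, SHA-256 is used as an example of a FIPS-approved one-way function, and the helper names are illustrative:

```python
import hashlib
import secrets

def generate_continuation_code() -> str:
    """Draw 64 bits (8 bytes) from a CSPRNG, hex-encoded so the code can
    be written down, printed, or embedded in a QR code."""
    return secrets.token_hex(8)

def hash_for_storage(code: str) -> str:
    """One-way hash of the continuation code; only this value would be
    stored by the CSP, never the plaintext code itself."""
    return hashlib.sha256(code.encode("utf-8")).hexdigest()
```

On a later session, the CSP would hash the code presented by the applicant and compare it to the stored digest, subject to the throttling and single-use rules described in this section.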
The continuation code MAY be presented as numeric or alphanumeric (e.g., Base64) for manual entry, or as a machine-readable optical label, such as a QR code containing the continuation code.\n\nVerification of continuation codes SHALL be subject to throttling requirements, as provided in Sec. 3.2.2 of [SP800-63B]. Continuation codes SHALL be stored in hashed form, using a FIPS-approved or NIST-recommended one-way function. Upon its use, the CSP SHALL invalidate the continuation code.\n\nRequirements for Notifications of Identity Proofing\n\nNotifications of proofing are sent to the applicant’s validated address to notify them that they have been successfully identity proofed and to provide information about the identity proofing event and subsequent enrollment, including how the recipient can repudiate having been the subject of the identity proofing.\n\nThe following requirements apply to CSPs and RPs that send notifications of proofing at any IAL.\n\nNotifications of proofing:\n\n\n SHALL be sent to a validated postal address, email address, or phone number.\n SHALL include details about the identity proofing event, including the name of the identity service and the date the identity proofing was completed.\n SHALL provide clear instructions, including contact information, on actions for the recipient to take in the case that they repudiate the identity proofing event.\n SHALL provide additional information, such as how the organization or CSP protects the security and privacy of the information it collects and any responsibilities that the recipient has as a subscriber of the identity service.\n SHOULD provide instructions on how to access their subscriber account or information about how the subscriber can update the information contained in that account.\n\n\nIn the event a subscriber repudiates having been identity proofed by the identity service, the CSP or RP SHALL respond in accordance with its fraud management program (Sec.
3.1.2).\n\nRequirements for the Use of Biometrics\n\nBiometrics is the automated recognition of individuals based on their biological and behavioral characteristics such as, but not limited to, fingerprints, voice patterns, or facial features (biological characteristics), and keystroke patterns, angle of holding a smart phone, screen pressure, typing speed, mouse movements, or gait (behavioral characteristics). As used in these guidelines, biometric data refers to any analog or digital representation of biological and behavioral characteristics at any stage of their capture, storage, or processing. This includes live biometric samples from applicants (e.g., facial images, fingerprints), as well as biometric references obtained from evidence (e.g., facial image on a driver’s license, fingerprint minutiae template on identification cards). As applied to the identity proofing process, CSPs can use biometrics to verify that an individual is the rightful subject of identity evidence, to bind an individual to a new piece of identity evidence or credential, or for the purposes of fraud detection.\n\nThe following requirements apply to CSPs that employ biometrics as part of their identity proofing process:\n\n\n CSPs SHALL provide clear, publicly available information about all uses of biometrics, what biometric data is collected, how it is stored, how it is protected, and information on how to remove biometric information consistent with applicable laws and regulations.\n CSPs SHALL collect explicit biometric consent from all applicants before collecting biometric information.\n CSPs SHALL store a record of the subscriber’s consent for biometric use and associate it with the subscriber’s account.\n CSPs SHALL have a documented, and publicly available, deletion process and default retention period for all biometric information.\n CSPs SHALL support the deletion of all of a subscriber’s biometric information upon the subscriber’s request at any time, except where otherwise
restricted by regulation, law, or statute.\n CSPs SHALL have their biometric algorithms periodically tested by independent entities (e.g., accredited laboratories or research institutions) for their performance characteristics, including performance across demographic groups. At a minimum, the CSP SHALL have an algorithm retested after it has been updated.\n CSPs SHALL assess the performance and demographic impacts of employed biometric technologies in conditions that are substantially similar to the operational environment and user base of the system. The user base is defined by both the demographic characteristics of the expected users as well as the devices they are expected to use. When such assessments include real-world users, participation by users SHALL be voluntary.\n CSPs SHALL meet the following minimum performance thresholds for biometric usage in verification scenarios:\n \n False match rate: 1:10,000 or better; and\n False non-match rate: 1:100 or better\n \n \n CSPs MAY use 1:N matching in support of resolution or fraud detection, pursuant to a privacy risk assessment. In 1:N scenarios, CSPs SHALL meet a minimum performance threshold for false positive identifications of 1:1,000. This applies to, and SHALL be tested for, each demographic group. Tests demonstrating this requirement SHALL employ a gallery no smaller than 90% of the current or intended operational size (N).\n CSPs that make use of 1:N biometric matching for either resolution or fraud prevention purposes SHALL NOT decline a user’s enrollment without a manual review by a trained proofing agent or trusted referee to confirm the automated matching results and confirm the results are not a false positive identification (for example, twins submitting for different accounts with the same CSP).\n CSPs SHALL employ biometric technologies that provide similar performance characteristics for applicants of different demographic groups (age, race, sex, etc.). 
If significant performance differences across demographic groups are discovered, CSPs SHALL act expeditiously to provide redress options to affected individuals and to close performance gaps.\n All biometric performance tests SHALL be conformant to ISO/IEC 19795-1:2021 and ISO/IEC 19795-10:2024, including demographics testing.\n CSPs SHALL make performance and operational test results publicly available.\n\n\nThe following requirements apply to CSPs that collect biometric characteristics from applicants:\n\n\n The CSP SHALL collect biometrics in such a way that provides reasonable assurance that the biometric is collected from the applicant, and not another subject.\n When collecting and comparing biometrics remotely, the CSP SHALL implement presentation attack detection (PAD) capabilities that meet an IAPAR performance metric of less than 0.15, to confirm the genuine presence of a live human being and to mitigate spoofing and impersonation attempts.\n When collecting biometrics onsite, the CSP SHALL have the operator view the biometric source (e.g., fingers, face) for the presence of non-natural materials and perform such inspections as part of the proofing process. All biometric presentation attack detection tests SHALL be conformant to ISO/IEC 30107-3:2023.\n\n\nRequirements for Evidence Validation Processes (Authenticity Checks)\nEvidence validation can be conducted by remote optical capture and inspection (often called document authentication or doc auth) or by visual inspection by a trained proofing agent or trusted referee. CSPs may employ either or both processes for evaluating the authenticity of identity evidence.\n\nThe following requirements apply to CSPs that employ optical capture and inspection for the purposes of determining document authenticity:\n\n\n Automated evidence validation technology SHALL meet the following performance measures:\n \n Document false acceptance rate (DFAR) of 0.10 or less. 6\n Document false rejection rate (DFRR) of 0.10 or less.
7\n \n \n If a Machine Readable Zone (MRZ) or barcode is present on the evidence, the optical capture and inspection SHALL compare the MRZ data to the printed data on the evidence for consistency.\n CSPs SHALL implement live capture of documents during the validation process. CSPs SHOULD deploy technology controls to prevent the injection of document images, for example using document presence checks (also called document liveness) or inspecting device characteristics to determine the presence of a virtual camera or device emulator.\n CSPs SHALL assess the performance of employed optical capture and inspection technologies in conditions that are substantially similar to the operational environment and user base of the system. These tests SHALL account for all available identity evidence types that the CSPs allow to be validated using optical capture and inspection technology. Where subscribers’ documents, PII, or images are used as part of the testing, it SHALL be on a voluntary basis and with subscriber notification and consent.\n CSPs SHOULD have their evidence validation technology periodically tested by independent entities (e.g., accredited laboratories or research institutions) for their performance characteristics.\n CSPs SHALL make the results of their testing available publicly.\n\n\n\n Note: These requirements apply to technologies that capture and validate images of physical identity evidence. They do not apply to validation techniques that rely on PKI or other cryptographic technologies that are embedded in the evidence themselves.\n\n\nThe following requirements apply to CSPs that employ visual inspection of evidence by trained proofing agents or trusted referees for the purposes of determining document authenticity:\n\n\n Proofing agents and trusted referees SHALL be trained and provided resources to visually inspect all forms of evidence supported by the CSP. 
This training SHALL include:\n \n Authentic layouts and topography of evidence types\n Physical security features (e.g., raised letters, holographic features, microprinting)\n Techniques for assessing features (e.g., tools to be used, where tactile inspection is needed, manipulation to view specific features)\n Common indications of tampering (e.g., damage to the lamination, image modification)\n \n \n When the setting allows for it (e.g., onsite attended proofing events), proofing agents and trusted referees SHALL be provided with specialized tools and equipment to support the visual inspection of evidence (e.g., magnifiers, ultraviolet lights, barcode readers).\n Proofing agents and trusted referees who conduct visual inspection via remote means SHALL be provided with devices and internet connections that support sufficiently high-quality imagery to be able to effectively inspect presented evidence. In these instances, the visual validation SHOULD be supported by automated document validation technologies that provide additional confidence in the authenticity of the evidence (e.g., submitting and validating evidence in advance of an attended remote session)\n Proofing agents and trusted referees SHALL be reviewed regarding their ability to visually inspect evidence on an ongoing basis, and be assessed and certified with at least annual evaluations.\n\n\n\n Note: Due to the potential number and permutations of identity evidence, these guidelines do not attempt to provide a comprehensive list of security features. CSPs need to provide evidence validation training specific to the types of identity evidence they accept.\n\n\nException and Error Handling\n\nThroughout the identity proofing process there are many points where errors or failures may occur. 
Such exceptions to a standard identity proofing workflow include: process failures, such as when a user does not possess the required evidence; technical failures, such as when an integrated service is not available; and failures due to user error, such as when an applicant is unable to capture a clear image of their identity evidence when using remote validation tools.\n\nIn order to increase the accessibility, usability, and equity of their identity proofing services, CSPs SHALL document their operational processes for dealing with errors and handling exceptions. These documented processes SHALL include providing trusted referees to support those applicants who are otherwise unable to meet the requirements of IALs 1 and 2. Additionally, CSPs SHOULD support the use of applicant references who can vouch for an applicant’s attributes, conditions, or identity.\n\nTrusted Referees Requirements\n\nTo increase accessibility and promote equal access to online government services, CSPs provide trusted referees. Trusted referees are used to facilitate the identity proofing and enrollment of individuals who are otherwise unable to meet the requirements for identity proofing to a specific IAL. A non-exhaustive list of examples of such individuals and demographic groups includes: individuals who do not possess and cannot obtain the required identity evidence; persons with disabilities; older individuals; persons experiencing homelessness; individuals with little or no access to online services or computing devices; persons without a bank account or with limited credit history; victims of identity theft; individuals displaced or affected by natural disasters; and children under 18. 
The following requirements apply to the use of trusted referees:\n\n\n The CSP SHALL provide notification to the public of the availability of trusted referee services and how such services are obtained.\n The CSP SHALL establish written policies and procedures for the use of trusted referees as part of its practice statement, as specified in Sec. 3.1.1.\n The CSP SHALL train and certify its trusted referees to make risk-based decisions that allow applicants to be successfully identity proofed based on their unique circumstances. At a minimum such training SHALL include:\n \n Document identification and validation, such as common templates, security features, layouts, and topography.\n Indicators of fraudulent documents, such as damage, tampering, modification, and material types\n Facial and image comparisons to conduct verification of applicants against presented documents\n Indicators of social engineering - such as distress, confusion, or coercion - exhibited by an applicant\n Annual recertification of the trusted referee’s capabilities\n \n \n The CSP SHALL establish a record of any identity proofing session that involves a trusted referee, to include: what evidence was presented; which processes were completed (e.g., validation or verification); and, the reason(s) why a trusted referee was used (e.g., automated process failure, applicant request, established exception policy).\n The CSP MAY offer trusted referee services for either onsite-attended or remote-attended sessions. These sessions SHALL be consistent with the requirements of these proofing types based on the IAL of the proofing event.\n\n\nTrusted Referee Uses\n\nTrusted referees offer a critical path for those who are unable to complete identity proofing by other means. However, given the number of possible failures that may occur within the proofing process, it is essential for CSPs to define the uses for which a trusted referee can be applied within their own service offerings. 
The following requirements apply to defining the integration of trusted referees into the identity proofing process:\n\n\n CSPs SHALL document which types of exceptions and failures are eligible for the use of a trusted referee.\n CSPs SHALL offer trusted referee services for failures of automated verification processes (e.g., biometric comparisons).\n CSPs SHOULD offer trusted referee services for failures in completing automated validation processes, such as in cases of mismatched core attributes or the absence of the applicant in a record source.\n \n CSPs SHALL provide a policy for additional evidence types that may be used to corroborate core attributes or changes in core attributes.\n Trusted referees SHALL review additional evidence types for authenticity to the greatest degree allowed by the evidence.\n If no authoritative or credible records are available to support validation, the trusted referee MAY compare the attributes on additional pieces of evidence with the strongest piece of evidence available to corroborate the consistency of core attributes.\n If there is a partial mismatch of core attributes to authoritative records, the trusted referee SHALL review evidence that supports the legitimacy of the asserted attribute value (e.g., recent move or change of name).\n \n \n\n\nApplicant Reference Requirements\n\nApplicant references are individuals who participate in the identity proofing of an applicant in order to vouch for the applicant’s identity, attributes, or circumstances related to the applicant’s ability to complete identity proofing. 
Applicant references are not agents of the CSP, but instead are representatives of the applicant who have sufficient knowledge to aid in the completion of identity proofing when other forms of evidence, validation, and verification are not available.\n\nThe following requirements apply to the use of applicant references at IAL1 or IAL2:\n\n\n The CSP SHALL provide notification to the public of the allowability of applicant references and any requirements for the relationship between the reference and an applicant.\n The CSP SHALL establish written policies and procedures for the use of applicant references as part of its practice statement, as specified in Sec. 3.1.1.\n The CSP SHALL identity proof an applicant reference to the same or a higher IAL than that intended for the applicant. The CSP SHALL include the information collected, recorded, and retained for identity proofing the applicant references in its privacy risk assessment for identity proofing applicants, as required in Sec. 3.1.3.\n The CSP SHALL record the use of an applicant reference in the subscriber account as well as maintain a record of the applicant reference and their relationship to the applicant.\n The RP SHALL conduct a risk assessment to determine the applicability, business requirements, and potential risks associated with excluding or including applicant references for proofing events.\n\n\nUses of Applicant References\n\nApplicant references may take several different actions to support an applicant in the identity proofing process. CSPs and RPs SHALL establish all acceptable uses for applicant references in their Trust Agreements.
These MAY include the following:\n\n\n The applicant reference MAY vouch for one or more claimed core attributes relative to the applicant as part of the evidence and attribute validation process.\n The applicant reference MAY vouch for a specific condition or status of an applicant relative to the identity proofing process (e.g., homelessness, disaster scenarios).\n \n Note: This information is intended to support risk determinations relative to the identity proofing event. Use of applicant reference statements relative to eligibility for status or benefits is outside the scope of these guidelines.\n \n \n The applicant reference MAY vouch for the identity of the applicant in the absence of sufficient identity evidence.\n\n\nIn all instances, the CSP SHALL establish a record of the role the applicant reference played in the process and document these actions sufficiently to support any applicable legal and regulatory requirements. This MAY include:\n\n\n Capturing and recording the statements and assertions made by the applicant reference;\n Capturing an electronic, digital, or physical signature of the applicant reference; or,\n Capturing consent and acknowledgement relative to the legal and liability impacts of the applicant reference’s statements.\n\n\nCSPs SHALL make available to the applicant reference clear and understandable information relative to the legal and liability impacts that may result from their participation as an applicant reference.\n\nEstablishing Applicant Reference Relationships\n\nIn many cases, there will be business, legal, or fraud prevention reasons to confirm the relationship between the applicant and an applicant reference.
Where such steps are deemed necessary by a risk assessment, the following requirements SHALL apply:\n\n\n The CSP and RP SHALL establish requirements for applicant reference relationship confirmation processes and document this in any Trust Agreements.\n The CSP SHALL make a list of acceptable evidence of relationship available to the applicant reference prior to initiating the relationship confirmation process.\n The CSP SHALL request evidence of the applicant reference’s relationship to the applicant (e.g., notarized power of attorney, a professional certification).\n Upon successfully identity proofing an applicant, the CSP SHALL record the evidence used to confirm the applicant reference’s relationship to the applicant in the subscriber account.\n\n\nRequirements for Interacting with Minors\n\nThe following requirements apply to all CSPs providing identity proofing services to minors at any IAL.\n\n\n The CSP SHALL establish written policy and procedures as part of its practice statement for identity proofing minors who may not be able to meet the evidence requirements for a given IAL.\n When interacting with persons under the age of 13, the CSP SHALL ensure compliance with the Children’s Online Privacy Protection Act of 1998 [COPPA], or other laws and regulations dealing with the protection of minors, as applicable.\n CSPs SHALL support the use of applicant references when interacting with individuals under the age of 18.\n\n\nElevating Subscriber IALs\n\nCSPs SHOULD allow subscribers to elevate identity assurance levels related to their subscriber accounts to support higher assurance transactions with RPs. 
For CSPs supporting these functions, the following requirements apply:\n\n\n CSPs SHALL document their approved approaches for elevating assurance levels in their practice statements.\n CSPs SHALL require subscribers to authenticate at the highest AAL available on their account prior to initiating the upgrade process.\n CSPs SHALL collect, validate, and verify additional evidence, as mandated to achieve the higher IAL.\n CSPs SHOULD avoid collecting, validating, and verifying previously processed evidence, though they MAY do so based on the age of the account, indicators of fraud, or if evidence has become invalidated since the original proofing event.\n\n\n\n \n \n Options include using a trusted referee, with or without an applicant representative. ↩\n \n \n For more information about privacy risk assessments, refer to the NIST Privacy Framework: A Tool for Improving Privacy through Enterprise Risk Management at https://nvlpubs.nist.gov/nistpubs/CSWP/NIST.CSWP.01162020.pdf. ↩\n \n \n [NISTIR8062] provides an overview of predictability and manageability including examples of how these objectives can be met. ↩\n \n \n Behavioral analytics in this context are used to determine if an interaction is indicative of an automated attack and not an effort to identify or authenticate a specific user based on a captured reference template for that user. ↩\n \n \n For more information about SORNs, see OPM’s System of Records Notice (SORN) Guide (https://www.opm.gov/information-management/privacy-policy/privacy-references/sornguide.pdf). ↩\n \n \n For the purposes of this document, DFAR is defined as the number of fraudulent documents processed that were deemed valid by a document validation system divided by the number of processed fraudulent documents. 
↩\n \n \n For the purposes of this document, DFRR is defined as the number of genuine documents processed that were deemed invalid by a document validation system divided by the number of processed genuine documents. ↩\n \n \n\n"
} ,
{
"title" : "Identity Proofing Requirements",
"category" : "",
"tags" : "",
"url" : "/800-63-4/sp800-63a/ial/",
"date" : "2024-08-28 20:39:12 -0500",
"content" : "Identity Assurance Level Requirements\n\nThis section is normative.\n\nIdentity Assurance Level 1 Requirements\n\nIdentity proofing processes at IAL1 allow for a range of acceptable techniques in order to detect fraudulent claims to identities by malicious actors, while facilitating user adoption, minimizing the rejection of legitimate users, and reducing application departures. The use of biometric matching, such as the automated comparison of a facial portrait to supplied evidence, at IAL1 is optional, providing pathways to proofing and enrollment where such collection may not be viable.\n\nProofing Types\n\n\n IAL1 Identity Proofing MAY be delivered through any proofing type, as described in Sec. 2.1.3.\n CSPs SHALL offer Unattended Remote identity proofing as an option.\n CSPs SHALL offer at least one method of Attended (Remote or Onsite) identity proofing as an option.\n CSPs MAY combine proofing types and their stated requirements to create hybrid processes. For example, a CSP might leverage remote unattended identity proofing validation processes in advance of a remote attended session where verification will take place. Where such steps are combined, CSPs SHALL document their processes in alignment with the requirements of each proofing type that is applied.\n\n\nEvidence Collection\n\nFor each identity proofing type, the CSP SHALL collect the following:\n\n\n One piece of FAIR or better (i.e., STRONG or SUPERIOR) evidence\n\n\nFor onsite attended identity proofing at IAL1, organizations SHOULD prioritize the use of evidence (FAIR or STRONG) that contains a facial portrait, which can be used for verification purposes. While forms of evidence that do not contain a facial portrait MAY be used for such sessions, the associated verification requirements (e.g., returning a confirmation code) may result in additional burden on applicants.\n\nAttribute Collection\n\nThe CSP SHALL collect all Core Attributes. 
Validated evidence is the preferred source of identity attributes. If the presented identity evidence does not provide all the attributes that the CSP considers core attributes, it MAY collect attributes that are self-asserted by the applicant.\n\nEvidence Validation\n\nEach piece of evidence presented SHALL be validated using one of the following techniques:\n\n\n Confirming the authenticity of digital evidence through interrogation of digital security features (e.g., signatures on assertions or data).\n Confirming the authenticity of physical evidence using automated scanning technology able to detect physical security features.\n Confirming the integrity of physical security features of presented evidence through visual inspection by a proofing agent using real-time or asynchronous processes (e.g., offline manual review).\n Confirming the integrity of physical security features through physical and tactile inspection of security features by a proofing agent at an onsite location.\n\n\nAttribute Validation\n\n\n The CSP SHALL validate all core attributes and the government identifier against an authoritative or credible source to determine accuracy.\n CSPs SHOULD correlate the data on evidence, self-asserted, and as presented by credible and authoritative sources for consistency.\n CSPs SHOULD validate the reference numbers of presented identity evidence if available.\n\n\nVerification Requirements\n\nThe CSP SHALL verify the applicant’s ownership of one piece of evidence using one of the following processes:\n\n\n Confirming the applicant’s ability to return a confirmation code delivered to a validated address associated with the evidence;\n Confirming the applicant’s ability to return a micro-transaction value delivered to a validated financial or similar account;\n Confirming the applicant’s ability to successfully complete an authentication and federation protocol equivalent to AAL2/FAL2, or higher, to access an account related to the identity evidence;\n 
Comparing the applicant’s facial image to a facial portrait on evidence via an automated comparison.\n Visually comparing the applicant’s facial image to a facial portrait on evidence, or in records associated with the evidence, during either an onsite attended session (in-person with a proofing agent), a remote attended session (live video with a proofing agent), or an asynchronous process (i.e., visual comparison made by a proofing agent at a different time).\n Comparing a stored biometric on identity evidence, or in authoritative records related to the evidence, to a sample provided by the applicant.\n\n\nRemote Attended Requirements\n\n\n All video sessions SHALL take place using a service that allows for the exchange of information over an authenticated protected channel.\n During the video session, the applicant SHALL remain in view of the proofing agent during each step of the proofing process.\n The video quality SHALL be sufficient to support the necessary steps in the validation and verification processes, such as inspecting evidence and making visual comparisons of the user to the evidence.\n The proofing agent SHALL be trained to identify signs of manipulation, coercion, or social engineering occurring during the recorded session.\n CSPs MAY record and maintain video sessions for fraud prevention and prosecution purposes pursuant to a privacy risk assessment, as defined in Sec. 3.1.3.1. 
If the CSP records the session, the following further requirements apply:\n \n The CSP SHALL notify the applicant of the recording prior to initiating a recorded session.\n The CSP SHALL gain consent from the applicant prior to initiating a recorded session.\n The CSP SHALL publish their retention schedule and deletion processes for all video records.\n \n \n The CSP SHOULD introduce challenge and response features into their video sessions that are randomized or periodically changed to deter deep fakes and pre-recorded materials from being used to defeat the proofing process. These MAY be shifting questions, changes to the order of sessions, or physical cues that would be hard for attackers to predict.\n The CSP SHALL provide proofing agents with a method or mechanism to flag events for potential fraud.\n\n\nOnsite Attended Requirements\n\n\n The CSP SHALL provide a physical setting in which onsite identity proofing sessions are conducted.\n The CSP SHALL ensure all information systems and technology leveraged by proofing agents and trusted referees are protected consistent with FISMA Moderate or comparable levels of controls.\n CSP proofing agents SHALL be trained to identify signs of manipulation, coercion, or social engineering occurring during the onsite session.\n CSPs MAY record and maintain video sessions for fraud prevention and prosecution purposes pursuant to a privacy risk assessment, as defined in Sec. 3.1.3.1. 
If the CSP records the session, the following further requirements apply:\n \n The CSP SHALL notify the applicant of the recording prior to initiating a recorded session.\n The CSP SHALL gain consent from the applicant prior to initiating a recorded session.\n The CSP SHALL publish their retention schedule and deletion processes for all video records.\n \n \n The CSP SHALL provide proofing agents with a method or mechanism to safely flag events for potential fraud.\n\n\nOnsite Unattended Requirements (Devices & Kiosks)\n\n\n All devices SHALL be safeguarded from tampering through either observation by CSP representatives or through physical and digital tamper prevention features.\n All devices SHALL be protected by appropriate baseline security features comparable to FISMA Moderate controls – including malware protection, admin-specific access controls, and software update processes.\n All devices SHALL be inspected periodically by trained technicians to deter tampering, modification, or damage.\n\n\nInitial Authenticator Binding\n\nUpon the successful completion of the identity proofing process, a unique subscriber account is established and maintained for the applicant (now subscriber) in the CSP’s identity system. One or more authenticators can be associated (bound) to the subscriber’s account, either at the time of identity proofing or at a later time. See Sec. 5 for more information about subscriber accounts.\n\n\n The CSP SHALL provide the ability for the applicant to bind an authenticator using one of the following methods:\n \n The remote enrollment of a subscriber-provided authenticator, consistent with the requirements for the authenticator type as defined in Sec. 
4.1.3 of [SP800-63B].\n Distribution of a physical authenticator to a validated address of record.\n Distribution or onsite enrollment of an authenticator.\n \n \n Where authenticators are bound outside of a single protected session with the user, the CSP SHALL confirm the presence of the intended subscriber through one of the following methods:\n \n Return of a confirmation code, or\n Comparison against a biometric collected at the time of proofing.\n \n \n\n\nNotification of Proofing\n\nUpon the successful completion of identity proofing at IAL1, the CSP SHALL send a notification of proofing to a validated address for the applicant, as specified in Sec. 3.1.10.\n\nIdentity Assurance Level 2 Requirements\n\nIAL2 identity proofing includes additional evidence, validation, and verification requirements in order to provide increased mitigation against impersonation attacks and other identity proofing errors relative to IAL1. IAL2 can be achieved through a number of different types of proofing (e.g., remote unattended, remote attended) and identity verification at IAL2 can be accomplished with or without the use of biometrics. To provide clear options for achieving IAL2, this section presents three different pathways to achieving alignment with IAL2 outcomes and requirements: IAL2 Verification - Non-Biometric Pathway; IAL2 Verification - Biometric Pathway; and IAL2 Verification - Digital Evidence Pathway. These different options do not imply different security or assurance outcomes; instead they present requirements in a manner that allows for clear selection of non-biometric methods that can be used to achieve IAL2.\n\nProofing Types\n\n\n Identity proofing at IAL2 MAY be delivered through any proofing type, as described in Sec. 
2.1.3.\n CSPs SHALL offer Unattended Remote identity proofing as an option.\n CSPs SHALL offer at least one method of Attended (Remote or Onsite) identity proofing as an option.\n CSPs MAY combine elements of different proofing types to create hybrid processes. For example, a CSP might leverage remote unattended identity proofing validation processes in advance of a remote attended session where verification will take place. If a CSP employs a hybrid process, it SHALL document how the process satisfies the requirements of each associated proofing type.\n\n\nEvidence Collection\n\nFor all types of proofing the CSP SHALL collect:\n\n\n One piece of FAIR evidence and one piece of STRONG evidence; or\n One piece of SUPERIOR evidence.\n\n\nAttribute Collection\n\nSame as IAL1.\n\nEvidence Validation\n\n\n Each piece of FAIR or STRONG evidence presented SHALL be validated using one of the following techniques.\n \n Confirming the authenticity of digital evidence through interrogation of digital security features (e.g., signatures on assertions or data).\n Confirming the authenticity of physical evidence using automated scanning technology able to detect physical security features.\n Confirming the integrity of physical security features of presented evidence through visual inspection by a proofing agent using real-time or asynchronous processes (e.g., offline manual review).\n Confirming the integrity of physical security features through physical and tactile inspection of security features by a proofing agent at an onsite location.\n \n \n Each piece of SUPERIOR evidence SHALL be validated through cryptographic verification of the evidence contents and the issuing source, including digital signature verification and the validation of any trust chain back to a trust anchor. 
SUPERIOR evidence unable to be validated using cryptographic verification SHALL be considered STRONG evidence and validated consistent with the requirements above.\n\n\nAttribute Validation\n\n\n The CSP SHALL validate all core attributes by either:\n \n Comparing the government identifier and core attributes against an authoritative or credible source to determine accuracy; or\n Validating the accuracy of digitally signed attributes contained on SUPERIOR evidence through the public key of the issuing source.\n \n \n CSPs SHOULD correlate the attributes collected from evidence, self-assertion, and as presented by credible and authoritative sources for consistency.\n CSPs SHOULD validate the reference numbers of presented identity evidence if available.\n\n\nVerification Requirements\n\nVerification pathways SHOULD be implemented consistent with relevant policy and be responsive to the use cases, populations, and threat environment of the online service being protected. CSPs SHOULD deploy more than one pathway to IAL2 verification and MAY combine pathways in order to achieve desired outcomes.\n\nIAL2 Verification - Non-Biometric Pathway\n\nThe IAL2 Non-Biometric Pathway provides verification methods that do not use automated comparison of biometric samples provided by the applicant. Non-biometric processes will often still involve biometric data being collected and verified - for example, through a visual comparison, performed by a proofing agent, of the applicant to images contained on identity evidence - but comparisons are not done through automated means. Additional verification methods that may not require the use of automated biometric comparison are also included in the IAL2 Verification - Digital Evidence Pathway requirements specified in Sec. 
4.2.6.2.\n\n\n The CSP SHALL verify the applicant’s ownership of all presented identity evidence.\n Approved non-biometric methods for verifying FAIR evidence at IAL2 include:\n \n Confirming the applicant’s ability to return a confirmation code delivered to a validated address associated with the evidence (e.g., postal address, email address, phone number)\n Visually comparing the applicant’s facial image to a facial portrait on evidence, or in records associated with the evidence, during either an onsite attended session (in-person with a proofing agent), a remote attended session (live video with a proofing agent), or an asynchronous process (i.e., visual comparison made by a proofing agent at a different time)\n \n \n Approved non-biometric methods for verifying STRONG and SUPERIOR evidence at IAL2 include:\n \n Confirming the applicant’s ability to return a confirmation code delivered to a physical address (i.e., postal address) that was obtained from the evidence and was validated with an authoritative source\n Visually comparing the applicant’s facial image to a facial portrait on evidence, or in records associated with the evidence, during either an onsite attended session (in-person with a proofing agent), a remote attended session (live video with a proofing agent), or an asynchronous process (i.e., visual comparison made by a proofing agent at a different time)\n \n \n\n\n\\clearpage\n\n\nIAL2 Verification - Digital Evidence Pathway\n\nThe IAL2 Digital Evidence Pathway provides a means of allowing individuals to make use of digital forms of evidence, such as digital credentials (sometimes referred to as digital identity documents) or digital accounts as part of the verification process. 
This pathway achieves verification by confirming the individual’s ability to access evidence through digital means.\n\n\n The CSP SHALL verify the applicant’s ownership of all pieces of presented identity evidence.\n Approved digital evidence verification methods for FAIR evidence at IAL2 include:\n \n Confirming the applicant’s ability to return a micro-transaction value delivered to a validated account (e.g., a checking account)\n Confirming the applicant’s ability to return a confirmation code delivered to a validated digital address associated with the digital evidence (e.g., MNO/Phone account)\n Confirming the applicant’s ability to successfully complete an authentication and federation protocol equivalent to AAL2/FAL2 to access an account related to the identity evidence\n \n \n Approved digital evidence verification methods for STRONG evidence at IAL2 include:\n \n Confirming the applicant’s ability to successfully complete an authentication and federation protocol equivalent to AAL2/FAL2, or higher, to access an account related to the identity evidence\n \n \n Approved digital evidence verification methods for SUPERIOR evidence at IAL2 include:\n \n Confirming the applicant’s ability to successfully complete an authentication and federation protocol equivalent to AAL3/FAL2, or higher, to access an account related to the identity evidence\n \n \n\n\nIAL2 Verification - Biometric Pathway\n\nThe IAL2 Biometric Pathway provides verification methods that support automated comparison of biometric samples provided by the applicant.\n\n\n The CSP SHALL verify the applicant’s ownership of all pieces of presented identity evidence.\n Approved biometric methods for verifying FAIR evidence at IAL2 include:\n \n Comparing the applicant’s facial image to a facial portrait on evidence via an automated comparison\n \n \n Approved methods for verifying STRONG and SUPERIOR evidence for use in the IAL2 Biometric Pathway include:\n \n Comparing the applicant’s facial image to 
a facial portrait on evidence via an automated comparison\n Comparing, via automated means, a non-facial portrait biometric stored on identity evidence, or in records associated with the evidence, to a live sample provided by the applicant\n \n \n\n\nRemote Attended Requirements\n\nSame as IAL1.\n\nOnsite Attended Requirements\n\nSame as IAL1.\n\nOnsite Unattended Requirements (Devices & Kiosks)\n\nSame as IAL1.\n\nNotification of Proofing\n\nSame as IAL1.\n\nInitial Authenticator Binding\n\nSame as IAL1.\n\nIdentity Assurance Level 3\n\nIAL3 adds additional rigor to the steps required at IAL2 and is subject to additional and specific processes (including the use of biometric information comparison, collection, and retention) to further protect the identity and RP from impersonation and other forms of identity fraud. In addition, identity proofing at IAL3 must be attended by a CSP proofing agent, as described in Sec. 2.1.2.\n\nProofing Types\n\nIAL3 Identity Proofing SHALL only be delivered as Onsite Attended. The Proofing Agent MAY be collocated or attend the proofing session remotely via a CSP-controlled kiosk or device.\n\nEvidence Collection\n\n\n For all types of IAL3 identity proofing the CSP SHALL collect either:\n \n One piece of STRONG and one piece of FAIR (or better), or\n One piece of SUPERIOR\n \n \n\n\nAttribute Requirements\n\n\n The CSP SHALL collect all core attributes. Validated evidence is the preferred source of identity attributes. If the presented identity evidence does not provide all the attributes that the CSP considers core attributes, it MAY collect attributes that are self-asserted by the applicant.\n The CSP SHALL collect and retain a biometric sample from the applicant during the identity proofing process to support account recovery, non-repudiation, and establish a high level of confidence that the same participant is present in the proofing and issuance processes (if done separately). 
CSPs MAY choose to periodically re-enroll user biometrics based on the modalities they use and the likelihood that subscriber accounts will persist long enough to warrant such a refresh.\n\n\nEvidence Validation\n\n\n Each piece of FAIR or STRONG evidence presented SHALL be validated using one of the following techniques.\n \n Confirming the authenticity of digital evidence through interrogation of digital security features (e.g., signatures on assertions or data).\n Confirming the authenticity of physical evidence using automated scanning technology able to detect physical security features.\n Confirming the integrity of physical security features of presented evidence through physical inspection by a proofing agent using real-time or asynchronous processes (e.g., offline manual review).\n Confirming the integrity of physical security features through physical and tactile inspection of security features by a proofing agent at an onsite location.\n \n \n Each piece of SUPERIOR evidence SHALL be validated through cryptographic verification of the evidence contents and the issuing source, including digital signature verification and the validation of any trust chain back to a trust anchor.\n\n\nAttribute Validation\n\n\n The CSP SHALL validate all core attributes by either:\n \n Comparing the core attributes against an authoritative or credible source to determine accuracy; or\n Validating the accuracy of digitally signed attributes contained on SUPERIOR evidence through the public key of the issuing source.\n \n \n CSPs SHOULD correlate the attributes collected from evidence, self-assertion, and as presented by credible and authoritative sources for consistency.\n CSPs SHOULD validate the reference numbers of presented identity evidence if available.\n\n\nVerification Requirements\n\n\n The CSP SHALL verify the applicant’s ownership of the strongest piece of evidence (STRONG or SUPERIOR) by one of the following methods:\n \n Confirming the applicant’s ability to 
successfully authenticate to a physical device or application (for example a mobile driver’s license) and comparing a digitally protected and transmitted facial portrait to the applicant.\n Comparing the applicant’s facial image to the facial portrait on evidence via an automated comparison.\n Visually comparing the applicant’s facial image to the facial portrait on evidence, either during an onsite attended session or a remote attended session (live video).\n Comparing a stored biometric on identity evidence, or in authoritative records associated with the evidence, to a sample provided by the applicant.\n \n \n\n\nOnsite Attended Requirements (Locally Attended)\n\n\n The CSP SHALL provide a secure, physical setting in which onsite identity proofing sessions are conducted.\n The CSP SHALL provide sensors and capture devices for the collection of biometrics from the applicant.\n The CSP SHALL have the proofing agent view the source of the collected biometric for the presence of any non-natural materials.\n The CSP SHALL have the proofing agent collect the biometric samples in such a way that ensures the sample was collected from the applicant and no other source.\n The CSP SHALL ensure all information systems and technology leveraged by proofing agents and trusted referees are protected consistent with FISMA Moderate or comparable levels of controls to include physical controls for the proofing facility.\n CSP proofing agents SHALL be trained to identify signs of manipulation, coercion, or social engineering occurring during the onsite session.\n CSPs MAY record and maintain video sessions for fraud prevention and prosecution purposes pursuant to a privacy risk assessment, as defined in Sec. 3.1.3.1. 
If the CSP records the session, the following further requirements apply:\n \n The CSP SHALL notify the applicant of the recording prior to initiating a recorded session.\n The CSP SHALL gain consent from the applicant prior to initiating a recorded session.\n The CSP SHALL publish their retention schedule and deletion processes for all video records.\n \n \n The CSP SHALL provide proofing agents with a method or mechanism to safely flag events for potential fraud.\n\n\nOnsite Attended Requirements (Remotely Attended - Formerly Supervised Remote Identity Proofing)\n\n\n The CSP MAY offer a remote means of interacting with a proofing agent whereby the agent and the applicant do not have to be at the same facility. In this scenario, the following requirements apply:\n \n The CSP SHALL monitor the entire identity proofing session through a high-resolution video transmission with the applicant.\n The CSP SHALL have a live proofing agent participate remotely with the applicant for the evidence collection, evidence validation, and verification steps of the identity proofing process. 
Data entry of attributes for resolution and enrollment MAY be done without the presence of a live proofing agent.\n The CSP SHALL require all actions taken by the applicant during the evidence collection, evidence validation, and verification steps to be clearly visible to the remote proofing agent.\n The CSP SHALL require that all digital validation and verification of evidence (e.g., via chip or wireless technologies) be performed by integrated scanners and sensors (e.g., embedded fingerprint reader).\n All devices used to support interaction between the proofing agent and the applicant SHALL be safeguarded from tampering through observation by CSP representatives or monitoring devices (e.g., cameras) and through physical and digital tamper prevention features.\n All devices used to support interaction between the proofing agent and the applicant SHALL be protected by appropriate baseline security features comparable to FISMA Moderate controls, including malware protection, admin-specific access controls, and software update processes.\n All devices used to support interaction between the proofing agent and the applicant SHALL be inspected periodically by trained technicians to deter tampering, modification, or damage.\n \n \n\n\nNotification of Proofing\n\nSame as IAL1.\n\nInitial Authenticator Binding\n\n\n The CSP SHALL distribute or enroll the applicant’s initial authenticator during an onsite attended interaction with a proofing agent.\n If the CSP distributes or enrolls the initial authenticator outside of a single, protected session with the user, the CSP SHALL compare a biometric sample collected from the applicant to the one collected at the time of proofing, prior to issuance of the authenticator.\n The CSP MAY request that the applicant bring the identity evidence used during the proofing process to the issuance event to further strengthen the process of binding the authenticators to the applicant.\n\n\n\\clearpage\n\n\nSummary of Requirements\n\nTable 1 
summarizes the requirements for each of the identity assurance levels:\n\nTable 1. IAL Requirements Summary\n\n\n\n\n \n \n Process\n IAL1\n IAL2\n IAL3\n \n \n \n \n Proofing Types\n Remote Unattended Remote Attended Onsite Unattended Onsite Attended\n Same as IAL1\n Onsite Attended\n \n \n Evidence Collection\n Unattended: –1 FAIR or –1 STRONG Attended: –1 FAIR w/ image or –1 STRONG\n For all proofing types: –1 FAIR and 1 STRONG or –1 SUPERIOR\n –1 STRONG + 1 FAIR or –1 SUPERIOR\n \n \n Attribute Collection\n All Core Attributes\n All Core Attributes\n All Core Attributes + Biometric Sample\n \n \n Evidence Validation\n Physical Evidence: –automated doc auth. –visual inspection Digital Evidence: –interrogation of digital security features\n Physical Evidence: –automated doc. auth. –visual inspection –physical/tactile inspection Digital Evidence: –interrogation of digital security features SUPERIOR Evidence: –Dig. sig. verification\n Physical Evidence: –automated doc. auth. –physical inspection –physical/tactile inspection Digital Evidence: –interrogation of digital security features SUPERIOR Evidence: –Dig. sig. verification\n \n \n Attribute Validation\n Confirmation of core attributes against authoritative or credible sources.\n Confirmation of core attributes against authoritative or credible sources. Confirmation of digitally signed attributes through signature verification.\n Confirmation of core attributes against authoritative or credible sources. Confirmation of digitally signed attributes through digital signature verification.\n \n \n Verification\n Verify applicant’s ownership of either the FAIR evidence or the STRONG evidence per 4.1.6\n Verify applicant’s ownership of all presented evidence using methods provided in 4.2.6\n Verify applicant’s ownership of all presented evidence using methods provided in 4.3.6\n \n \n\n\n\n\n"
} ,
{
"title" : "Subscriber Accounts",
"category" : "",
"tags" : "",
"url" : "/800-63-4/sp800-63a/accounts/",
"date" : "2024-08-28 20:39:12 -0500",
"content" : "Subscriber Accounts\n\nThis section is normative.\n\nSubscriber Accounts\n\nThe CSP SHALL establish and maintain a unique subscriber account for each active subscriber in the CSP identity system from the time of enrollment to the time of account closure. The CSP establishes a subscriber account to record each subscriber as a unique identity within its identity service and to maintain a record of all authenticators associated with that account.\n\nThe CSP SHALL assign a unique identifier to each subscriber account. The identifier SHOULD be randomly generated by the CSP system and of sufficient length and entropy to ensure uniqueness within its user population and to support federation with RPs, where applicable. The identifier MAY be used as a subject identifier in the generation of assertions, consistent with [SP800-63C].\n\nAt a minimum, the CSP SHALL include the following information in each subscriber account:\n\n\n The unique identifier associated with the subscriber account\n Any subject identifiers established for the subscriber, including any RP specific subject identifiers\n A record of the identity proofing steps completed for the subscriber, including:\n \n The type and issuer of identity evidence\n The type of proofing (Remote Unattended, Remote Attended, Onsite Attended, Onsite Unattended)\n The validation and verification methods used\n The use of a trusted referee or other exception handling process\n The use of an applicant reference, including a unique identifier for the applicant reference\n \n \n Maximum IAL successfully achieved for the identity proofing of the subscriber\n Records of any applicant consent agreements related to the collection and processing of information about the applicant, including biometrics, throughout the subscriber account lifecycle\n All authenticators currently bound to the subscriber account, whether registered at enrollment or subsequent to enrollment\n Attributes that were validated during the identity 
proofing process or in subsequent transactions to support RP access\n\n\nSubscriber Account Access\n\nThe CSP SHALL provide the capability for subscribers to authenticate and access information in their subscriber account.\n\nFor subscriber accounts that contain PII, this capability SHALL be accomplished through AAL2 or AAL3 authentication processes using authenticators registered to the subscriber account.\n\nSubscriber Account Maintenance and Updates\n\nThe CSP SHALL provide the capability for a subscriber to request the CSP to update information contained in their subscriber account. The CSP MAY provide a mechanism for subscribers to update any non-core attributes directly.\n\nThe CSP SHALL validate any changes to core attribute information maintained in the subscriber account.\n\nThe CSP SHALL provide notice to the subscriber of any updates made to information in the subscriber account.\n\nThe CSP SHALL provide the capability for the subscriber to report any unauthorized access or potential compromise to information in their subscriber account.\n\nSubscriber Account Suspension or Termination\n\nThe CSP SHALL promptly suspend or terminate the subscriber account when one of the following occurs:\n\n\n The subscriber elects to terminate their subscriber account with the CSP.\n The CSP determines that the subscriber account has been compromised.\n The CSP determines that the subscriber has violated the policies or rules for participation in the CSP identity service.\n The CSP determines that the subscriber account is inactive in accordance with the policies or rules established by the CSP.\n The CSP receives notification of a subscriber’s death from an authoritative source.\n The CSP receives a legal instrument from a court to terminate a subscriber’s account.\n The CSP ceases identity system and services operations.\n\n\nThe CSP SHALL provide notification to the subscriber that their subscriber account has been suspended or terminated. 
Such notices SHALL include information about why the account was suspended or terminated, reactivation or renewal options, and any options for redress if the subscriber thinks the account was suspended or terminated in error.\n\nThe CSP SHALL delete any personal or sensitive information from the subscriber account records following account termination in accordance with the record retention and disposal requirements, as documented in its practices statement (see Sec. 3.1.1).\n"
} ,
{
"title" : "Security",
"category" : "",
"tags" : "",
"url" : "/800-63-4/sp800-63a/security/",
"date" : "2024-08-28 20:39:12 -0500",
"content" : "Threats and Security Considerations\n\nThis section is informative.\n\nEffective protection of identity proofing processes requires the layering of security controls and processes throughout a transaction with a given applicant. To achieve this, it is necessary to understand where and how threats can arise and compromise enrollments. There are three general categories of threats to the identity proofing process:\n\n\n \n Impersonation: where an attacker attempts to pose as another, legitimate, individual (e.g., identity theft)\n \n \n False or Fraudulent Representation: where an attacker may create a false identity or false claims about an identity (e.g., synthetic identity fraud)\n \n \n Infrastructure: where attackers may seek to compromise confidentiality, availability, and integrity of the infrastructure, data, software, or people supporting the CSP’s identity proofing process (e.g., distributed denial of service, insider threats)\n \n\n\nThis section focuses on impersonation and false or fraudulent representation threats, as infrastructure threats are addressed by traditional computer security controls (e.g., intrusion protection, record keeping, independent audits) and are outside the scope of this document. For more information on security controls, see [SP800-53], Security and Privacy Controls for Information Systems and Organizations.\n\nTable 2. 
Identity Proofing and Enrollment Threats\n\n\n \n \n Attack/Threat\n Description\n Example\n \n \n \n \n Automated Enrollment Attempts\n Attackers leverage scripts and automated processes to rapidly generate large volumes of enrollments\n Bots leverage stolen data to submit benefits claims.\n \n \n Evidence Falsification\n Attacker creates or modifies evidence in order to claim an identity\n A fake driver’s license is used as evidence.\n \n \n Synthetic Identity Fraud\n Attacker fabricates evidence of identity that is not associated with a real person\n Opening a credit card in a fake name to create a credit file.\n \n \n Fraudulent Use of Identity (Identity Theft)\n Attacker fraudulently uses another individual’s identity or identity evidence\n An individual uses a stolen passport.\n \n \n Social Engineering\n Attacker convinces a legitimate applicant to provide identity evidence or complete the identity proofing process under false pretenses\n An individual submits their identity evidence to an attacker posing as a potential employer.\n \n \n False Claims\n Attacker associates false attributes or information with a legitimate identity\n An individual falsely claims residence in a state in order to obtain a benefit that is available only to state residents.\n \n \n Video or Image Injection Attack\n Attacker creates a fake video feed of an individual associated with a real person\n A deepfake video is used to impersonate an individual portrayed on a stolen driver’s license.\n \n \n\n\nThreat Mitigation Strategies\n\nThreats to the enrollment and identity proofing process are summarized in Table 2. Related mechanisms that assist in mitigating the threats identified above are summarized in Table 3. These mitigations should not be considered comprehensive, but rather a summary of mitigations detailed more thoroughly at each Identity Assurance Level and applied based on the risk assessment processes detailed in Sec. 3 of [SP800-63].\n\nTable 3. 
Identity Proofing and Enrollment Threat Mitigation Strategies\n\n\n\n\n \n \n Threat/Attack\n Mitigation Strategies\n Normative Reference(s)\n \n \n \n \n Automated Enrollment Attempts\n Web Application Firewall (WAF) controls and bot detection technology. Out-of-band engagement (e.g., confirmation codes). Biometric verification and liveness detection mechanisms. Traffic and network analysis capabilities to identify indications of malicious traffic.\n 3.1.5, 3.1.8, 3.1.11\n \n \n Evidence Falsification\n Validation of core attributes with authoritative or credible sources. Validation of physical or digital security features of the presented evidence.\n 4.1.4 & 4.1.5 (IAL1), 4.2.4 & 4.2.5 (IAL2), 4.3.4 & 4.3.5 (IAL3)\n \n \n Synthetic Identity Fraud\n Collection of identity evidence. Validation of core attributes with authoritative or credible sources. Biometric comparison of the applicant to validated identity evidence or biometric data. Checks against vital statistics repositories (e.g., Death Master File).\n 3.1.2.1, 4.1.2, 4.1.5, & 4.1.6 (IAL1), 4.2.2, 4.2.5, & 4.2.6 (IAL2), 4.3.2, 4.3.5, & 4.3.6 (IAL3)\n \n \n Fraudulent Use of Identity (Identity Theft)\n Biometric comparison of the applicant to validated identity evidence or biometric data. Presentation attack detection measures to confirm the genuine presence of the applicant. Out-of-band engagement (e.g., confirmation codes) and notice of proofing. Checks against vital statistics repositories (e.g., Death Master File). Fraud, transaction, and behavioral analysis capabilities to identify indicators of potentially malicious account establishment.\n 3.1.2.1, 3.1.8, 3.1.10, 3.1.11, 4.1.6 (IAL1), 4.2.6 (IAL2), 4.3.6 (IAL3)\n \n \n Social Engineering\n Training of trusted referees to identify indications of coercion or distress. Out-of-band engagement and notice of proofing to validated address. Provide information and communication to end users on common threats and schemes. 
Offer onsite in-person attended identity proofing option.\n 2.1.3, 3.1.8, 3.1.10, 3.1.13.1, 8.4\n \n \n False Claims\n Geographic restrictions on traffic. Validation of core attributes with authoritative or credible sources.\n 3.1.2.1, 4.1.5 (IAL1), 4.2.5 (IAL2), 4.3.5 (IAL3)\n \n \n Video or Image Injection Attack\n Use of a combination of active and passive PAD. Use of authenticated protected channels for communications between devices and servers running matching. Authentication of biometric sensors where feasible. Monitoring and analysis of incoming video and image files to detect signs of injection.\n 3.1.8\n \n \n\n\n\n\nCollaboration with Adjacent Programs\n\nIdentity proofing services typically serve as the front door for critical business or service functions. Accordingly, these services should not operate in a vacuum. A close coordination of identity proofing and CSP functions with cybersecurity teams, threat intelligence teams, and program integrity teams can enable a more complete protection of business capabilities while constantly improving identity proofing capabilities. For example, payment fraud data collected by program integrity teams could provide indicators of compromised subscriber accounts and potential weaknesses in identity proofing implementations. Similarly, threat intelligence teams may receive indications of new tactics, techniques, and procedures that may impact identity proofing processes. CSPs and RPs should seek to establish consistent mechanisms for the exchange of information between critical security and fraud stakeholders. Where the CSP is external, this may be complicated, but should be addressed through contractual and legal mechanisms, to include technical and interoperability considerations. All data collected, transmitted, or shared should be minimized and subject to a detailed privacy and legal assessment.\n"
} ,
{
"title" : "Privacy",
"category" : "",
"tags" : "",
"url" : "/800-63-4/sp800-63a/privacy/",
"date" : "2024-08-28 20:39:12 -0500",
"content" : "Privacy Considerations\n\nThis section is informative.\n\nThese privacy considerations provide additional information for implementing the requirements set forth in Sec. 3.1.3 and are intended to guide CSPs and RPs in designing identity systems that prioritize protecting their users’ privacy.\n\nCollection and Data Minimization\n\nThese guidelines permit the collection and processing of only the PII necessary to validate the claimed identity, associate the claimed identity to the applicant, mitigate fraud, and to provide RPs with attributes they may use to make authorization decisions. Collecting unnecessary PII can create confusion regarding why information that is not being used for the identity proofing service is being collected. This leads to invasiveness or overreach concerns, which can lead to a loss of applicant trust. Further, retained PII can become vulnerable to unauthorized access or use. Data minimization reduces the amount of PII vulnerable to unauthorized access or use, and encourages trust in the identity proofing process.\n\nSocial Security Numbers\n\nThese guidelines permit the CSP collection of the SSN as an attribute for use in identity resolution. However, over-reliance on the SSN can contribute to misuse and place the applicant at risk of harm, such as through identity theft. Nonetheless, the SSN may facilitate identity resolution for CSPs, in particular federal agencies that use the SSN to correlate an applicant to agency records. This document recognizes the role of the SSN as an attribute and makes appropriate allowance for its use. Knowledge of the SSN is not sufficient to serve as identity evidence.\n\nWhere possible, CSPs and agencies should consider mechanisms to limit the proliferation and exposure of SSNs during the identity proofing process. This is particularly pertinent where the SSN is communicated to third party providers during attribute validation processes. 
To the extent possible, privacy protective techniques and technologies should be applied to reduce the risk of an individual’s SSN being exposed, stored, or maintained by third party systems. Examples of this could be the use of attribute claims (e.g., yes/no responses from a validator) to confirm the validity of a SSN without requiring it to be unnecessarily transmitted by the third party. As with all attributes in the identity proofing process, the value and risk of each attribute being processed is subject to a privacy risk assessment, and federal agencies may address it further in their associated PIA and SORN documentation. The SSN should only be collected where it is necessary to support identity resolution associated with the application’s assurance and risk levels.\n\nNotice and Consent\n\nThe guidelines require the CSP to provide explicit notice to the applicant at the time of collection regarding the purpose for collecting and maintaining a record of the attributes necessary for identity proofing, including whether such attributes are voluntary or mandatory in order to complete the identity proofing transactions, and the consequences for not providing the attributes.\n\nAn effective notice will take into account user experience design standards and research, and an assessment of privacy risks that may arise from the collection. Various factors should be considered, including incorrectly inferring that applicants understand why attributes are collected, that collected information may be combined with other data sources, etc. 
An effective notice is never only a pointer leading to a complex, legalistic privacy policy or general terms and conditions that applicants are unlikely to read or understand.\n\nIn addition, RPs should provide additional guidance to applicants for available choices for the selection of CSPs, identity document requirements, related privacy notices, and alternative means of accessing services.\n\nUse Limitation\n\nThe guidelines require CSPs to use measures to maintain the objectives of predictability (enabling reliable assumptions by individuals, owners, and operators about PII and its processing by an information system) and manageability (providing the capability for the granular administration of PII, including alteration, deletion, and selective disclosure) commensurate with privacy risks that can arise from the processing of attributes for purposes other than identity proofing, authentication, authorization, or attribute assertion, related fraud mitigation, or to comply with law or legal process. A framework for managing these risks and supporting privacy risk management principles can be found in [NISTIR8062].\n\nCSPs may have various business purposes for processing attributes, including providing non-identity services to subscribers. However, processing attributes for other purposes than those disclosed to a subject can create additional privacy risks. CSPs can determine appropriate measures commensurate with the privacy risk arising from the additional processing. For example, absent applicable law, regulation, or policy, it may not be necessary to obtain consent when processing attributes to provide non-identity services requested by subscribers, although notices may help the subscribers maintain reliable assumptions about the processing (predictability). 
Other processing of attributes may carry different privacy risks which may call for obtaining consent or allowing subscribers more control over the use or disclosure of specific attributes (manageability). Subscriber consent needs to be meaningful; therefore, when CSPs do use consent measures, they cannot make acceptance by the subscriber of additional uses a condition of providing the identity service.\n\nFederal agencies should consult their SAOP if there are questions about whether the proposed processing falls outside the scope of the permitted processing or the appropriate privacy risk mitigation measures.\n\nRedress\n\nThe guidelines require the CSP to provide effective mechanisms for redressing applicant complaints or problems arising from the identity proofing, and make the mechanisms easy for applicants to find and access.\n\nThe Privacy Act requires federal CSPs that maintain a system of records to follow procedures to enable applicants to access and, if incorrect, amend their records. Any Privacy Act Statement should include a reference to the applicable SORN(s) (see Sec. 3.1.3), which provide the applicant with instructions on how to make a request for access or correction. 
Non-federal CSPs should have comparable procedures, including contact information for any third parties if they are the source of the information.\n\nIn the event an applicant is unable to establish their identity and complete the online enrollment process, CSPs should make the availability of alternative methods for completing the process clear to applicants (e.g., in person at a customer service center).\n\n\n Note: If the identity proofing process is not successful, CSPs should inform the applicant of the procedures to address the issue but should not inform the applicant of the specifics of why the registration failed (e.g., do not inform the applicant, “Your SSN did not match the one that we have on record for you”), as doing so could allow fraudulent applicants to gain more knowledge about the accuracy of the PII.\n\n\nPrivacy Risk Assessment\n\nThe guidelines require the CSP to conduct a privacy risk assessment. In conducting a privacy risk assessment, CSPs should consider:\n\n\n The likelihood that an action it takes (e.g., additional verification steps or records retention) could create a problem for the applicant, such as invasiveness or unauthorized access to the information; and\n The impact on the applicant should a problem occur. CSPs should be able to justify any response they take to identified privacy risks, including accepting the risk, mitigating the risk, and sharing the risk. Applicant consent is considered to be a form of sharing the risk and, therefore, should only be used when an applicant could reasonably be expected to have the capacity to assess and accept this shared risk.\n\n\nAgency-Specific Privacy Compliance\n\nThe guidelines cover specific compliance obligations for federal CSPs. 
It is critical to involve an agency’s SAOP in the earliest stages of identity service development to assess and mitigate privacy risks and advise the agency on compliance requirements, such as whether or not the PII collection to conduct identity proofing triggers the Privacy Act of 1974 [PrivacyAct] or the E-Government Act of 2002 [E-Gov] requirement to conduct a Privacy Impact Assessment. For example, with respect to identity proofing, it is likely that the Privacy Act requirements will be triggered and require coverage by either a new or existing Privacy Act system of records notice (SORN) due to the collection and maintenance of PII or other attributes necessary to conduct identity proofing.\n\nThe SAOP can similarly assist the agency in determining whether a PIA is required. These considerations should not be read as a requirement to develop a Privacy Act SORN or PIA for identity proofing alone; in many cases it will make the most sense to draft a PIA and SORN that encompasses the entire digital identity lifecycle or includes the identity proofing process as part of a larger, programmatic PIA that discusses the program or benefit to which the agency is establishing online access.\n\nDue to the many components of the digital identity lifecycle, it is important for the SAOP to have an awareness and understanding of each individual component. For example, other privacy artifacts may be applicable to an agency offering or using proofing services such as Data Use Agreements, Computer Matching Agreements, etc. The SAOP can assist the agency in determining what additional requirements apply. Moreover, a thorough understanding of the individual components of digital authentication will enable the SAOP to thoroughly assess and mitigate privacy risks either through compliance processes or by other means.\n\n"
} ,
{
"title" : "Usability",
"category" : "",
"tags" : "",
"url" : "/800-63-4/sp800-63a/usability/",
"date" : "2024-08-28 20:39:12 -0500",
"content" : "Usability Considerations\nThis section is informative.\n\nIn order to align with the standard terminology of user-centered design and usability, the term “user” is used throughout this section to refer to the human party. In most cases, the user in question will be the subject (in the role of applicant, claimant, or subscriber) as described elsewhere in these guidelines.\n\nThis section is intended to raise implementers’ awareness of the usability considerations associated with identity proofing and enrollment (for usability considerations for typical authenticator usage and intermittent events, see Sec. 8 of [SP800-63B]).\n\n[ISO/IEC9241-11] defines usability as the “extent to which a system, product, or service can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use.” This definition focuses on users, goals, and context of use as the necessary elements for achieving effectiveness, efficiency, and satisfaction. A holistic approach considering these key elements is necessary to achieve usability.\n\nThe overarching goal of usability for identity proofing and enrollment is to promote a smooth, positive enrollment process for users by minimizing user burden (e.g., time and frustration) and enrollment friction (e.g., the number of steps to complete and the amount of information to track). To achieve this goal, organizations have to first familiarize themselves with their users.\n\nThe identity proofing and enrollment process sets the stage for a user’s interactions with a given CSP and the online services that the user will access; as negative first impressions can influence user perception of subsequent interactions, organizations need to promote a positive user experience throughout the process.\n\nUsability cannot be achieved in a piecemeal manner. Performing a usability evaluation on the enrollment and identity proofing process is critical. 
It is important to conduct a usability evaluation with representative users, realistic goals and tasks, and appropriate contexts of use. The enrollment and identity proofing process should be designed and implemented so that it is easy for users to do the right thing, hard for them to do the wrong thing, and easy for them to recover when the wrong thing happens. [ISO/IEC9241-11], [ISO16982], and [ISO25060] provide guidance on how to evaluate the overall usability of an identity service and additional considerations for increasing usability.\n\nFrom the user’s perspective, the three main steps of identity proofing and enrollment are pre-enrollment preparation, the enrollment and proofing session, and post-enrollment actions. These steps may occur in a single session or there could be significant time elapsed between each one (e.g., days or weeks).\n\nGeneral and step-specific usability considerations are described in sub-sections below.\n\nGuidelines and considerations are described from the users’ perspective.\n\nSection 508 of the Rehabilitation Act of 1973 [Section508] was enacted to eliminate barriers in information technology and require federal agencies to make electronic and information technology accessible to people with disabilities. While these guidelines do not directly assert requirements from Section 508, identity service providers are expected to comply with Section 508 provisions. Beyond compliance with Section 508, Federal Agencies and their service providers are generally expected to design services and systems with the experiences of people with disabilities in mind to ensure that accessibility is prioritized throughout identity system lifecycles.\n\nGeneral User Considerations During Identity Proofing and Enrollment\n\nThis sub-section provides usability considerations that are applicable across all steps of the enrollment process. Usability considerations specific to each step are detailed in Sec. 8.2, Sec. 8.3, and Sec. 
8.4.\n\n\n \n To avoid user frustration, streamline the process required for enrollment to make each step as clear and easy as possible.\n \n \n Clearly communicate how and where to acquire technical assistance. For example, provide helpful information such as a link to an online self-service portal, chat sessions, and a phone number for help desk support. Ideally, sufficient information should be provided to enable users to answer their own enrollment preparation questions without outside intervention.\n \n \n Clearly explain what personal data is being collected and whether collecting the data is optional or not. Additionally, provide information indicating with whom the data will be shared, where it will be stored, and how it will be protected.\n \n Ensure that all information presented is usable.\n \n Follow good information design practice for all user-facing materials (e.g., data collection notices and fillable forms).\n Write materials in plain language and avoid technical jargon. If appropriate, tailor that language to the literacy level of the intended population. Use active voice and conversational style; logically sequence main points; use the same word consistently rather than synonyms to avoid confusion; and use bullets, numbers, and formatting where appropriate to aid readability.\n Consider text legibility, such as font style, size, color, and contrast with the surrounding background. The highest contrast is black on white. Text legibility is important because users have different levels of visual acuity. Illegible text will contribute to user comprehension errors or user entry errors (e.g., when completing fillable forms). Use sans serif font styles for electronic materials and serif fonts for paper materials. When possible, avoid fonts that do not clearly distinguish between easily confusable characters (such as the letter “O” and the number “0”). This is especially important for confirmation codes. 
Use a minimum font size of 12 points, as long as the text fits the display.\n \n \n Perform a usability evaluation for each step with representative users. Establish realistic goals and tasks, and appropriate contexts of use for the usability evaluation.\n\n\nPre-Enrollment Preparation\n\nThis section describes an effective approach to facilitate sufficient pre-enrollment preparation so users can avoid challenging, frustrating enrollment sessions. Ensuring that users are as prepared as possible for their enrollment sessions is critical to the overall success and usability of the identity proofing and enrollment process.\n\nSuch preparation is only possible if users receive the necessary information (e.g., the required documentation) in a usable format in an appropriate timeframe. This includes making users aware of exactly what identity evidence will be required. Users do not need to know anything about IALs or whether the identity evidence required is scored as “fair,” “strong,” or “superior,” whereas organizations need to know what IAL is required for access to a particular system.\n\nTo ensure users are equipped to make informed decisions about whether to proceed with the enrollment process, and what will be needed for their session, provide users:\n\n\n Information about the entire process, such as what to expect in each step.\n \n Clear explanations of the expected timeframes to allow users to plan accordingly.\n \n \n \n Explanation of the need for — and benefits of — identity proofing to allow users to understand the value proposition.\n \n \n Identity evidence requirements for the intended IAL and a list of acceptable evidence documents, with information about how they will be validated.\n \n \n Whether there is an enrollment fee and, if so, the amount and acceptable forms of payment. 
Offering a variety of acceptable forms of payment allows users to choose their preferred payment option.\n \n Information on whether the user’s enrollment session will be in-person or in-person over remote channels, and whether a user can choose. Only provide information relevant to the allowable session option(s).\n \n Information on the location(s), whether a user can choose their preferred location, and necessary logistical information for in-person or in-person over remote channels session. Note that users may be reluctant to bring identity evidence to certain public places (a supermarket versus a bank), as it increases exposure to loss or theft.\n Information on the technical requirements (e.g., requirements for internet access) for remote sessions.\n An option to set an appointment for in-person or in-person over remote channels identity proofing sessions to minimize wait times. If walk-ins are allowed, make it clear to users that their wait times may be greater without an appointment.\n \n Provide clear instructions for setting up an enrollment session appointment, reminders, and how to reschedule existing appointments.\n Offer appointment reminders and allow users to specify their preferred appointment reminder method(s) (e.g., postal mail, voicemail, email, text message). Users need information such as the date, time, and location, and a description of the required identity evidence.\n \n \n \n \n Information on the allowed and required identity evidence and attributes, whether each piece is voluntary or mandatory, and the consequences for not providing the complete set of identity evidence. Users need to know the specific combinations of identity evidence, including requirements specific to a piece of identity evidence (e.g., a raised seal on a birth certificate). 
This is especially important due to potential difficulties procuring the necessary identity evidence.\n \n Where possible, implement tools to make it easier to obtain the necessary identity evidence.\n Inform users of any special requirements for minors or people with unique needs. For example, provide users with the information on whether applicant reference and/or trusted referee processes are available and the information necessary to use those processes (see Sec. 3.1.13).\n If forms are required:\n \n Provide fillable forms before and at the enrollment session. Do not require users to have access to a printer.\n Minimize the amount of information that users must enter on a form, as users are easily frustrated and more error-prone with longer forms. Where possible, pre-populate forms.\n \n \n \n \n\n\nIdentity Proofing and Enrollment\n\nUsability considerations specific to identity proofing and enrollment include:\n\n\n At the start of the identity proofing session, remind users of the procedure. Do not expect them to remember the procedures described during the pre-enrollment preparation step. If the enrollment session does not immediately follow pre-enrollment preparation, it is especially important to clearly remind users of the typical timeframe to complete the proofing and enrollment phase.\n Depending on the identity proofing method (e.g., Remote or Onsite Unattended), provide a separate video window that provides a step-by-step tutorial of the identity proofing process. When these types of tutorials or examples are provided, service providers should provide a range of support options to cover a broad set of users. 
Alternatives to a video window include verbal or written instructions.\n Provide options for the user to reschedule the time or type of their identity proofing appointment, if needed.\n Provide a checklist with the allowed and required identity evidence to ensure that users have the requisite identity evidence to proceed with the enrollment session, including enrollment codes, if applicable. If users do not have the complete set of identity evidence, they must be informed regarding whether they can complete a partial identity proofing session or use exception processing through a trusted referee or, as appropriate, applicant references for identity proofing exception processing. This also would apply to international users where the types of identity evidence and access to data, services, and validation sources may not be easily or readily available to achieve IAL identity proofing requirements. Trusted referees and applicant references are intended to provide capabilities for alternative identity proofing workflows and risk-based decisions for such types of users needing exception processing.\n Notify users regarding what information will be destroyed, what, if any, information will be retained for future follow-up sessions, and what identity evidence they will need to bring to complete a future session. 
Ideally, users can choose whether they would like to complete a partial identity proofing session.\n Set user expectations regarding the outcome of the enrollment session, as prior identity verification experiences may drive their expectations (e.g., receiving a driver’s license in person, receiving a passport in the mail).\n \n Clearly indicate whether 1) users will receive an authenticator immediately at the end of a successful enrollment session; 2) they will have to schedule a follow-up appointment to pick up an authenticator in person; or 3) they will receive the authenticator in the mail and, if so, when they can expect to receive it.\n \n \n During the enrollment session, there are several requirements to provide users with explicit notice at the time of identity proofing, such as what data will be collected and processed by the CSP. (See Sec. 3.1 and Sec. 7 for detailed requirements on notices.) CSPs should be aware that seeking consent from users for the use of their attributes for purposes other than identity proofing, authentication, authorization, or attribute assertions may make them uncomfortable. If users do not perceive how they benefit from the additional collection or uses, they may be unwilling or hesitant to provide consent or continue the process. It is recommended, then, that CSPs provide users with a thorough explanation of how they might benefit from the additional processing of their personal information, and steps the CSP takes to mitigate the risks associated with such processing. 
Additionally, CSPs should provide users with the ability to opt out of the additional processing.\n \n If a confirmation code is issued:\n \n Notify users in advance that they will receive a confirmation code, when to expect it, the length of time for which the code is valid, and how it will arrive (e.g., physical mail, SMS, landline telephone, or email).\n When a confirmation code is delivered to a user, remind the users which service they are enrolling in and include instructions on how to use the code and the length of time for which the code is valid. This is especially important given the short validity timeframes specified in Sec. 3.1.8.\n If issuing a machine-readable optical label, such as a QR Code (see Sec. 3.1.8), provide users with information on how to obtain QR code scanning capabilities (e.g., acceptable QR code applications).\n Inform users that they will be required to repeat the enrollment process if enrollment codes expire or are lost before use.\n Provide users with alternative options, as not all users are able to access and use technology equitably. For example, users may not have the technology needed for this approach to be feasible.\n \n \n At the end of the enrollment session:\n \n If enrollment is successful, send subscribers a notification of proofing confirming successful identity proofing and enrollment (see Sec. 
3.1.8) and directions on next steps they need to take (e.g., when and where to pick up their authenticator, when it will arrive in the mail).\n If enrollment is partially complete (due to users not having the complete set of identity evidence, users choosing to stop the process, or session timeouts), communicate to users:\n \n what information will be destroyed;\n what, if any, information will be retained for future follow-up sessions;\n how long the information will be retained; and\n what identity evidence they will need to bring to a future session.\n \n \n If enrollment is not successful, provide users with clear instructions for alternative identity proofing and enrollment options, for example, in-person proofing for users who cannot complete remote proofing.\n \n \n \n If users receive the authenticator during the enrollment session, provide users with instructions on the use and maintenance of the authenticator. For example, information could include instructions for use (especially if there are different requirements for first-time use or initialization), information on authenticator expiration, how to protect the authenticator, and what to do if the authenticator is lost or stolen.\n \n For both in-person and remote identity proofing, additional usability considerations apply:\n \n At the start of the enrollment session, operators or attendants need to explain their role to users (e.g., whether operators or attendants will walk users through the enrollment session or observe silently and only interact as needed).\n At the start of the enrollment session, inform users that they must not depart during the session, and that their actions must be visible throughout the session.\n When biometrics are collected during the enrollment session, provide users with clear instructions on how to complete the collection process. The instructions are best given just prior to the process. 
Verbal instructions with guidance from a live operator are the most effective (e.g., instructing users where the biometric sensor is, when to start, how to interact with the sensor, and when the biometric collection is completed).\n \n \n Since remote identity proofing is conducted online, follow general web usability principles. For example:\n \n Design the user interface to walk users through the enrollment process.\n Reduce users’ memory load.\n Make the interface consistent.\n Clearly label sequential steps.\n Make the starting point clear.\n Design to support multiple platforms and device sizes.\n Make the navigation consistent, easy to find, and easy to follow.\n \n \n\n\nPost-Enrollment\nPost-enrollment refers to the step immediately following enrollment but prior to the first use of an authenticator (for usability considerations for typical authenticator usage and intermittent events, see [SP800-63B], Sec. 10). As described above, users have already been informed at the end of their enrollment session regarding the expected delivery (or pick-up) mechanism by which they will receive their authenticator.\n\nUsability considerations for post-enrollment include:\n\n\n \n Minimize the amount of time that users wait for their authenticator to arrive. Shorter wait times will allow users to access information systems and services more quickly.\n \n \n Inform users whether they need to go to a physical location to pick up their authenticators. 
The previously identified usability considerations for appointments and reminders still apply.\n \n \n Along with the authenticator, give users information relevant to the use and maintenance of the authenticator; this may include instructions for use, especially if there are different requirements for first-time use or initialization, information on authenticator expiration, and what to do if the authenticator is lost or stolen.\n \n \n Provide information to users about how to protect themselves from common threats to their identity accounts and associated authenticators, such as social engineering and phishing attacks.\n \n\n"
} ,
{
"title" : "Equity",
"category" : "",
"tags" : "",
"url" : "/800-63-4/sp800-63a/equity/",
"date" : "2024-08-28 20:39:12 -0500",
"content" : "Equity Considerations\n\nThis section is informative.\n\nThis section is intended to provide guidance to CSPs and RPs for assessing the risks associated with inequitable access, treatment, or outcomes for individuals using its identity services, as required in Sec. 3.1.4. It provides a non-exhaustive list of potential areas in the identity proofing process that may be subject to inequities, as well as possible mitigations that can be applied. CSPs and RPs can use this section as a starting point for considering where the risks for inequitable access, treatment, or outcomes exist within its identity service. It is not intended that the below guidance be considered a definitive, all-inclusive list of associated equity risks to identity services.\n\nIn assessing equity risks, CSPs and RPs start by considering the overall user population served by its online service. Additionally, CSPs and RPs further identify groups of users within the population whose shared characteristic(s) can cause them to be subject to inequitable access, treatment, or outcomes when using that service. CSPs and RPs are encouraged to assess the effectiveness of any mitigations by evaluating their impacts on the affected user group(s). The usability considerations provided in Sec. 
8 should also be considered when applying equity risk mitigations to help improve the overall usability and equity for all persons using an identity service.\n\nPursuant to Executive Order 13985 [EO13985], Advancing Racial Equity and Support for Underserved Communities Through the Federal Government, OMB published its Study to Identify Methods to Assess Equity: Report to the President [OMB-Equity], which identified “the best methods, consistent with applicable law, to assist agencies in assessing equity with respect to race, ethnicity, religion, income, geography, gender identity, sexual orientation, and disability.” CSPs and RPs are encouraged to consult this study when determining which approaches and methods they will use to assess the equity of their identity services.\n\nIt is intended that remote identity proofing can broaden usability and accessibility for enrollment into online identity services. The following subsections present considerations for some identity proofing processes that may create risks of inequitable treatment for some groups and individuals and present the use of trusted referees to help mitigate such risks associated with remote identity proofing. However, it is important that the use of trusted referees does not create additional risks of exclusion among groups who may lack internet access or who do not have easy access to smartphones or computing devices. 
Providing in-person options for trusted referees can help ensure that those impacted by the digital divide are still able to access services offered by the CSP or RP.\n\nAdditionally, CSPs and RPs should assess whether implementing these considerations could introduce delays to the identity proofing process and employ appropriate methods, such as online scheduling tools or additional staffing for peak demand times, to mitigate these delays.\n\nIt is also intended that the considerations and mitigations provided in this section will be proactively employed and result in a more equitable identity proofing experience for the population served by the identity service. CSPs are expected to continuously monitor the performance of their service and to make remedial updates, as appropriate. This includes policies and processes for redressing user reports of inequitable access, treatment, or outcomes of the service.\n\nIdentity Resolution and Equity\n\nIdentity resolution involves collecting the minimum set of attributes to be able to distinguish the claimed identity as a single, unique individual within the population served by the identity service. 
Attributes are obtained from the presented identity evidence, applicant self-assertion, and/or back-end attribute providers.\n\nThis section provides a set of possible problems and mitigations with the inequitable access, treatment, or outcomes associated with the identity resolution process:\n\nDescription: The identity service design requires an applicant to enter their name using a Western name format (e.g., first name, last name, optional middle name).\n\nPossible mitigations include:\n\n\n Analyzing possible name configurations and determining how all names can be accurately accommodated using the name fields\n Providing easy-to-find, easy-to-use guidance to users on how to enter all names using the name fields\n Accepting reasonable name variations (for example, to allow for differences in name order, multiple surnames, etc.)\n Providing the option for applicants to switch to an attended (onsite or remote) workflow\n\n\nDescription: The identity service cannot accommodate applicants whose name, gender, or other attributes have changed and are not consistently reflected on the presented identity evidence or match what is in the attribute verifier’s records.\n\nPossible mitigations include:\n\n\n Providing trusted referees (Sec. 3.1.13.1) who can make risk-based decisions based on the specific applicant circumstances\n Allowing for the use of applicant references (Sec. 3.1.13.3) who can vouch for the differences in attributes\n Providing an easily accessible list of acceptable evidence in support of the updated attribute, such as a marriage certificate\n Accepting reasonable name variations (for example, to allow for differences in name order, multiple surnames, hyphenation, or recent name changes)\n\n\nIdentity Validation and Equity\n\nIdentity evidence and core attribute validation involves confirming the genuineness, currency, and accuracy of the presented identity evidence and the accuracy of any additional attributes. 
These outcomes are accomplished by comparison of the evidence and attributes against data held by authoritative or credible sources. When considered together with the identity resolution phase, the result of a successful validation phase is the confirmation, to some level of confidence, that the claimed identity exists in the real world.\n\nThis section provides a set of possible problems and mitigations with the inequitable access, treatment, or outcomes associated with the evidence and attribute validation process:\n\nDescription: Certain user groups do not possess the necessary minimum evidence to meet the requirements of a given IAL.\n\nPossible mitigations include:\n\n\n Providing trusted referees (Sec. 3.1.13.1) who can make risk-based decisions based on the specific applicant circumstances\n Allowing for the use of applicant references (Sec. 3.1.13.3), such as the parent of a minor child, who can vouch for the applicant\n Ensuring that the selected IAL is not higher than necessary to be commensurate with the risk of the digital service offering\n RPs offering a limited set of functionality or options for users identity proofed at lower IALs\n\n\nDescription: Records held by authoritative and credible sources are insufficient to support the validation of core attributes or presented evidence for applicants belonging to certain user groups, such as those who self-exclude from programs and services due to fears of surveillance or other concerns that might result in a record of their association.\n\nPossible mitigations include:\n\n\n Providing trusted referees (Sec. 3.1.13.1) who can make risk-based decisions based on the specific applicant circumstances\n Allowing the use of applicant references (Sec. 
3.1.13.3) who can vouch for the difference in attributes\n Employing multiple authoritative or credible sources\n\n\nDescription: Records held by authoritative and credible sources may include inaccurate or false information about persons who are the victims of identity fraud.\n\nPossible mitigations include:\n\n\n Providing trusted referees (Sec. 3.1.13.1) who can make risk-based decisions based on the specific applicant circumstances\n Allowing the use of applicant references (Sec. 3.1.13.3) who can vouch for the difference in attributes\n Employing multiple authoritative or credible sources\n\n\nIdentity Verification and Equity\n\nIdentity verification involves proving the binding between the applicant undergoing the identity proofing process and the validated, real-world identity established through the identity resolution and validation steps. It most often involves collecting a picture (facial image capture) of the applicant taken during the identity proofing event and comparing it to a photograph contained on a presented and validated piece of identity evidence.\n\nThis section provides a set of possible problems and mitigations with the inequitable treatment or outcomes associated with the identity verification phase:\n\nDescription: Facial image capture technologies lack the ability to capture certain skin tones or facial features of sufficient quality to perform a comparison.\n\nPossible mitigations include:\n\n\n Employing robust image capture technologies, with high performing algorithms, which have been demonstrated to accommodate different skin tones, facial features, and lighting situations\n Conducting operational testing of image capture technologies to determine if they function equitably across ethnicity, race, sex assigned at birth, and other demographic factors and upgrading, as needed, to correct for inequities\n Providing guidance to the applicant about how to improve the lighting or conditions for image capture\n Providing risk-based 
alternative processes, such as Trusted Referees (Sec. 3.1.13.1), that compensate for residual bias and technological limitations\n Providing the option for applicants to use CSP-controlled kiosks, which employ state-of-the-art facial and biometric capture technologies\n Providing the option for applicants to switch to an attended workflow option\n\n\nDescription: For biometric comparison involving facial images, facial coverings worn for religious purposes may impede the ability to capture a facial image of an applicant. For biometric comparison involving other biometric characteristics, demographic factors may impede the ability to capture a usable biometric sample, such as age affecting the capability to collect a usable fingerprint.\n\nPossible mitigations include:\n\n\n Providing trusted referees (Sec. 3.1.13.1) who can make risk-based decisions based on the specific applicant circumstances.\n Providing alternative ways to accomplish identity verification, such as an in-person proofing.\n Offering alternative biometric collection and comparison capabilities.\n\n\nDescription: When using 1:1 facial image comparison technologies, biased facial comparison algorithms may result in false non-matches.\n\nPossible mitigations include:\n\n\n Using algorithms that are independently tested for consistent performance across demographic groups and image types\n Supporting alternative processes to compensate for residual bias and technological limitations\n Conducting ongoing quality monitoring and operational testing to identify performance variances across demographic groups and implementing corrective actions as needed (e.g., updated algorithms, machine learning, etc.)\n\n\nDescription: When employing visual facial image comparison performed by agents of the CSP (proofing agents or trusted referees), human biases and inconsistencies in making facial comparisons may result in false non-matches.\n\nPossible mitigations include:\n\n\n Defining policy and procedures aimed at 
reducing/eliminating the inequitable treatment of applicants by CSP agents\n Rigorously training and certifying CSP agents\n Conducting ongoing quality monitoring and taking corrective actions when biases, inequitable treatments, or outcomes are identified\n\n\nUser Experience and Equity\n\nThe Usability Considerations section of this document (Sec. 8) provides CSPs with guidance on how to provide applicants with a smooth, positive identity proofing experience. In addition to the specific considerations provided in Sec. 8, this section provides CSPs with additional considerations for assessing the equity of their user experience.\n\nDescription: Lack of access to the needed technology (e.g., a connected mobile device or computer), or difficulties in using the required technologies, unduly burdens some user groups.\n\nPossible mitigations include:\n\n\n Allowing the use of process assistants who assist applicants, who are otherwise able to meet the identity proofing requirements, in the use of the required technologies and activities\n Allowing the use of publicly available devices (e.g., computers or tablets) and providing online help resources for completing the identity proofing process on a non-applicant-owned computer or device\n Providing in-person proofing options\n Employing technologies, such as auto capture, that simplify the uploading of identity evidence and facial images\n\n\nDescription: The remote or in-person identity proofing process presents challenges for persons with disabilities.\n\nPossible mitigations for remote identity proofing include:\n\n\n Providing trusted referees (Sec. 3.1.13.1) who are trained to communicate and assist people with a variety of needs or disabilities (e.g., fluent in sign language)\n Allowing for the use of applicant references (Sec. 
3.1.13.3)\n Supporting the use of accessibility and other technologies, such as audible instructions, screen readers, and voice recognition technologies\n Allowing the use of process assistants to assist applicants, who are otherwise able to meet the identity proofing requirements, in the use of the required technologies and activities\n\n\nPossible mitigations for in-person identity proofing include:\n\n\n Providing operators who are trained to communicate and assist people with a variety of needs or disabilities (e.g., fluent in sign language)\n Choosing equipment and workstations that can be adjusted to different heights and angles\n Selecting locations that are convenient and comply with ADA accessibility guidelines\n\n\n"
} ,
{
"title" : "Identity Evidence Examples",
"category" : "",
"tags" : "",
"url" : "/800-63-4/sp800-63a/evidence/",
"date" : "2024-08-28 20:39:12 -0500",
"content" : "Identity Evidence Examples by Strength\n\nThis appendix is informative.\n\nThis appendix provides a non-exhaustive list of types of identity evidence, grouped by strength.\n\nFair Evidence Examples\n\nTable 4. Fair Evidence Examples\n\n\n \n \n Evidence\n Proofing\n Validation\n Verification\n \n \n \n \n Financial Account\n KYC/CIP requirements\n Confirm signature on assertion is from intended origin.\n *Demonstrated possession via an AAL2 authentication event and FAL 2 federated assertion. *User input of a micro deposit event of sufficient entropy.\n \n \n Phone Account\n Established and documented account opening practices.\n *Confirm presence of user account with MNO. *Confirm signature on assertion is from intended origin.\n *Demonstrated possession through enrollment code. *Demonstrated possession via and AAL2 authentication event and FAL2 federated assertion.\n \n \n Student ID Card\n Student registration and enrollment practices.\n *Confirm signature on assertion is from intended origin; or *Confirm physical security features and evaluate for tampering.\n *Demonstrated possession via and AAL2 authentication event and FAL2 federated assertion. *Physical comparison to image on the ID. *Biometric Comparison to image on the ID.\n \n \n Corporate ID Card\n Onboarding and background screening practices.\n *Confirm signature on assertion is from intended origin; or *Confirm physical security features and evaluate for tampering.\n *Demonstrated possession via and AAL2 authentication event and FAL2 federated assertion. *Physical comparison to image on the ID. *Biometric Comparison to image on the ID.\n \n \n Veteran Health ID card\n VA identity verification, issuance and eligibility process\n *Confirm signature on assertion is from intended origin; or *Confirm physical security features and evaluate for tampering\n *Demonstrated possession via and AAL2 authentication event and FAL2 federated assertion. *Physical comparison to image on the ID. 
*Biometric Comparison to image on the ID.\n \n \n Credit or Debit Card\n KYC/CIP Account Opening Practices.\n *Confirm physical security features, physical signature.\n *Demonstrated ability to authenticate to the card using a PIN or other activation factor (if available). *Physical inspection of the card. Must be presented with other evidence containing a photo.\n \n \n SNAP Card\n State-defined eligibility and enrollment requirements.\n *Confirm physical security features, physical signature.\n *Visual inspection of the card. Must be presented with other evidence containing a photo (if there is no image on the card).\n \n \n Social Security Card\n SSN application process.\n *Confirm physical security features, inspect for tampering.\n *Visual inspection of the card. Must be presented with other evidence containing a photo.\n \n \n\n\n\\clearpage\n\n\nStrong Evidence Examples\n\nTable 5. Strong Evidence Examples\n\n\n \n \n Evidence\n Proofing\n Validation\n Verification\n \n \n \n \n Driver’s License or State ID\n State issuance processes, REAL ID Act\n Confirm physical security features through inspection.\n *Physical comparison of image on ID. *Biometric Comparison of the image on the ID. *Biometric comparison to issuing source records.\n \n \n Permanent Resident Card (issued prior to May 11, 2010)\n DHS issuance and eligibility process\n *Confirm physical security features through inspection.\n *Physical comparison of image on ID. *Biometric Comparison of the image on the ID. *Biometric comparison to issuing source records.\n \n \n U.S. Uniformed Services Privilege and Identification Card\n DoD issuance and eligibility processes\n *Confirm physical security features through inspection.\n *Visual comparison of image on ID. *Biometric Comparison of the image on the ID. 
*Biometric comparison to issuing source records.\n \n \n Native American Tribal Photo Identification Card\n Local issuance and eligibility processes\n *Confirm physical security features through inspection.\n *Visual comparison of image on ID. *Biometric Comparison of the image on the ID. *Biometric comparison to issuing source records.\n \n \n Veteran Health ID Card (VHIC)\n VA identity verification, issuance and eligibility process\n *Confirm physical security features and evaluate for tampering\n *Visual comparison to image on the ID. *Biometric Comparison to image on the ID.\n \n \n\n\n\\clearpage\n\n\nSuperior Evidence Examples\n\nTable 6. Superior Evidence Examples\n\n\n \n \n Evidence\n Proofing\n Validation\n Verification\n \n \n \n \n Personal Identity Verification (PIV) Card\n FIPS 201-3 identity verification and issuance processes\n Validation of stored PKI Certificate, CRL check if available\n *Authentication consistent with multi-factor cryptographic authenticators per NIST SP 800-63B. *Biometric comparison to image stored on ID or biometric stored on ID. *Visual comparison of image on ID.\n \n \n Personal Identity Verification-Interoperable (PIV-I) Card\n FIPS 201-3 identity verification and issuance processes\n Validation of stored PKI Certificate, CRL check if available\n *Authentication consistent with multi-factor cryptographic authenticators per NIST SP 800-63B. *Biometric comparison to image stored on ID or biometric stored on ID. *Visual comparison of image on ID.\n \n \n Common Access Card (CAC)\n DoD identity verification and issuance process\n Validation of stored PKI Certificate, CRL check if available\n *Authentication consistent with multi-factor cryptographic authenticators per NIST SP 800-63B. *Biometric comparison to image stored on ID or biometric stored on ID. 
*Visual comparison of image on ID.\n \n \n US Passport\n State Department passport issuance process\n Validation of stored PKI certificate, CRL check if available.\n *Visual comparison of image on ID or stored in ID. *Biometric comparison to image on ID or stored in ID. *Biometric comparison to issuing source records.\n \n \n International e-Passports\n ICAO compliant and/or State Department approved\n Validation of stored PKI certificate, CRL check if available.\n *Visual comparison of image on ID or stored in ID. *Biometric comparison to image on ID or stored in ID. *Biometric comparison to issuing source records.\n \n \n Mobile Driver’s License (MDL)\n State issuance processes, AAMVA guidance, and REAL ID Act\n Validation of Mobile Security Object, revocation check if available\n *Authentication consistent with multi-factor cryptographic authenticators per NIST SP 800-63B.\n \n \n Digital Permanent Resident Card (Verifiable Credential)\n DHS issuance and eligibility process\n Validation of stored verifiable credential, revocation check if available\n *Authentication consistent with multi-factor cryptographic authenticators per NIST SP 800-63B.\n \n \n European Digital Identity Wallet (EUDI Wallet) Personal Identification (PID) Element\n EC-defined identity verification and issuance process; qualified issuer certified\n Validation of stored verifiable credential or Mobile Security Object, revocation check if available\n *Authentication consistent with multi-factor cryptographic authenticators per NIST SP 800-63B.\n \n \n\n"
} ,
{
"title" : "List of Symbols, Abbreviations, and Acronyms",
"category" : "",
"tags" : "",
"url" : "/800-63-4/sp800-63a/abbreviations/",
"date" : "2024-08-28 20:39:12 -0500",
"content" : "List of Symbols, Abbreviations, and Acronyms\n\n\n 1:1 Comparison\n One-to-One Comparison\n AAL\n Authentication Assurance Level\n CSP\n Credential Service Provider\n DNS\n Domain Name System\n FACT Act\n Fair and Accurate Credit Transaction Act of 2003\n FAL\n Federation Assurance Level\n FEDRAMP\n Federal Risk and Authorization Management Program\n FMR\n False Match Rate\n FNMR\n False Non-Match Rate\n IAL\n Identity Assurance Level\n IdP\n Identity Provider\n KBA\n Knowledge-Based Authentication\n KBV\n Knowledge-Based Verification\n MFA\n Multi-Factor Authentication\n NARA\n National Archives and Records Administration\n PAD\n Presentation Attack Detection\n PIA\n Privacy Impact Assessment\n PII\n Personally Identifiable Information\n PKI\n Public Key Infrastructure\n RMF\n Risk Management Framework\n RP\n Relying Party\n SMS\n Short Message Service\n SORN\n System of Records Notice\n\n"
} ,
{
"title" : "SP 800-63A",
"category" : "",
"tags" : "",
"url" : "/800-63-4/sp800-63a/abstract/",
"date" : "2024-08-28 20:39:12 -0500",
"content" : "\nABSTRACT\n\nThis guideline focuses on the enrollment and verification of an identity for use in digital authentication. Central to this is a process known as identity proofing in which an applicant provides evidence to a credential service provider (CSP) reliably identifying themselves, thereby allowing the CSP to assert that identification at a useful identity assurance level. This document defines technical requirements for each of three identity assurance levels. The guidelines are not intended to constrain the development or use of standards outside of this purpose. This publication supersedes NIST Special Publication (SP) 800-63A.\n\nKeywords\n\nauthentication; credential service provider; electronic authentication; digital authentication; electronic credentials; digital credentials; identity proofing; federation.\n"
} ,
{
"title" : "Change Log",
"category" : "",
"tags" : "",
"url" : "/800-63-4/sp800-63a/changelog/",
"date" : "2024-08-28 20:39:12 -0500",
"content" : "Change Log\n\nThis appendix is informative.\n\nThis appendix provides a high-level overview of the changes to SP 800-63A since its initial release.\n\n\n Reorganizes the sections to introduce general identity proofing requirements before providing specific requirements\n Separates global requirements from IAL-specific requirements to facilitate the design of identity services, regardless of assurance level\n Provides requirements for lower-risk applications, through an updated IAL1\n Introduces fraud mitigation guidance and requirements\n Adds requirements for CSP-specific privacy and equity risk assessments and considerations for integrating the results into agency assessment processes\n Introduces the concept of core attributes\n Decouples the collection of identity attributes from the collection of identity evidence\n Adjusts evidence collection requirements for IALs 1 and 2\n Expands acceptable evidence and attribute validation sources to include credible sources\n Provides non-biometric options for identity verification at IALs 1 and 2\n Adds new guidance and requirements for subscriber accounts\n Adds new guidance and requirements for the consideration of equity risks associated with identity proofing processes\n Introduces exception handling concepts and requirements, including requirements for the use of trusted referees and applicant references\n\n\n"
} ,
{
"title" : "Glossary",
"category" : "",
"tags" : "",
"url" : "/800-63-4/sp800-63a/glossary/",
"date" : "2024-08-28 20:39:12 -0500",
"content" : "Glossary\n\nThis section is informative.\n\nA wide variety of terms are used in the realm of digital identity. While many definitions are consistent with earlier versions of SP 800-63, some have changed in this revision. Many of these terms lack a single, consistent definition, warranting careful attention to how the terms are defined here.\n\n\n applicant\n A subject undergoing the processes of identity proofing and enrollment.\n applicant reference\n A representative of the applicant who can vouch for the identity of the applicant, specific attributes related to the applicant, or conditions relative to the context of the individual (e.g., emergency status, homelessness).\n approved cryptography\n An encryption algorithm, hash function, random bit generator, or similar technique that is Federal Information Processing Standard (FIPS)-approved or NIST-recommended. Approved algorithms and techniques are either specified or adopted in a FIPS or NIST recommendation.\n assertion\n A statement from an IdP to an RP that contains information about an authentication event for a subscriber. Assertions can also contain identity attributes for the subscriber.\n attribute\n A quality or characteristic ascribed to someone or something. An identity attribute is an attribute about the identity of a subscriber.\n attribute validation\n The process or act of confirming that a set of attributes are accurate and associated with a real-life identity. 
See validation.\n authenticate\n See authentication.\n authentication\n The process by which a claimant proves possession and control of one or more authenticators bound to a subscriber account to demonstrate that they are the subscriber associated with that account.\n Authentication Assurance Level (AAL)\n A category that describes the strength of the authentication process.\n authenticator\n Something that the subscriber possesses and controls (e.g., a cryptographic module or password) and that is used to authenticate a claimant’s identity. See authenticator type and multi-factor authenticator.\n authenticity\n The property that data originated from its purported source.\n authoritative source\n An entity that has access to or verified copies of accurate information from an issuing source such that a CSP has high confidence that the source can confirm the validity of the identity attributes or evidence supplied by an applicant during identity proofing. An issuing source may also be an authoritative source. Often, authoritative sources are determined by a policy decision of the agency or CSP before they can be used in the identity proofing validation phase.\n authorize\n A decision to grant access, typically automated by evaluating a subject’s attributes.\n biometric reference\n One or more stored biometric samples, templates, or models attributed to an individual and used as the object of biometric comparison in a database, such as a facial image stored digitally on a passport, fingerprint minutiae template on a National ID card or Gaussian Mixture Model for speaker recognition.\n biometric sample\n An analog or digital representation of biometric characteristics prior to biometric feature extraction, such as a record that contains a fingerprint image.\n biometrics\n Automated recognition of individuals based on their biological or behavioral characteristics. 
Biological characteristics include but are not limited to fingerprints, palm prints, facial features, iris and retina patterns, voiceprints, and vein patterns. Behavioral characteristics include but are not limited to keystrokes, angle of holding a smart phone, screen pressure, typing speed, mouse or mobile phone movements, and gyroscope position.\n claimant\n A subject whose identity is to be verified using one or more authentication protocols.\n claimed address\n The physical location asserted by a subject where they can be reached. It includes the individual’s residential street address and may also include their mailing address.\n claimed identity\n An applicant’s declaration of unvalidated and unverified personal attributes.\n core attributes\n The set of identity attributes that the CSP has determined and documented to be required for identity proofing.\n credential service provider (CSP)\n A trusted entity whose functions include identity proofing applicants to the identity service and registering authenticators to subscriber accounts. A CSP may be an independent third party.\n credible source\n An entity that can provide or validate the accuracy of identity evidence and attribute information. A credible source has access to attribute information that was validated through an identity proofing process or that can be traced to an authoritative source, or it maintains identity attribute information obtained from multiple sources that is checked for data correlation for accuracy, consistency, and currency.\n digital identity\n An attribute or set of attributes that uniquely describes a subject within a given context.\n digital signature\n An asymmetric key operation in which the private key is used to digitally sign data and the public key is used to verify the signature. 
Digital signatures provide authenticity protection, integrity protection, and non-repudiation support but not confidentiality or replay attack protection.\n disassociability\n Enabling the processing of PII or events without association to individuals or devices beyond the operational requirements of the system. [NISTIR8062]\n electronic authentication (e-authentication)\n See digital authentication.\n enrollment\n The process through which a CSP/IdP provides a successfully identity-proofed applicant with a subscriber account and binds authenticators to grant persistent access.\n entropy\n The amount of uncertainty that an attacker faces to determine the value of a secret. Entropy is usually stated in bits. A value with n bits of entropy has the same degree of uncertainty as a uniformly distributed n-bit random value.\n equity\n The consistent and systematic fair, just, and impartial treatment of all individuals, including individuals who belong to underserved communities that have been denied such treatment, such as Black, Latino, and Indigenous and Native American persons, Asian Americans and Pacific Islanders, and other persons of color; members of religious minorities; lesbian, gay, bisexual, transgender, and queer (LGBTQ+) persons; persons with disabilities; persons who live in rural areas; and persons otherwise adversely affected by persistent poverty or inequality. [EO13985]\n Federal Information Processing Standard (FIPS)\n Under the Information Technology Management Reform Act (Public Law 104-106), the Secretary of Commerce approves the standards and guidelines that the National Institute of Standards and Technology (NIST) develops for federal computer systems. NIST issues these standards and guidelines as Federal Information Processing Standards (FIPS) for government-wide use. NIST develops FIPS when there are compelling federal government requirements, such as for security and interoperability, and there are no acceptable industry standards or solutions. 
See background information for more details.\n\n FIPS documents are available online on the FIPS home page: https://www.nist.gov/itl/fips.cfm\n \n federation\n A process that allows for the conveyance of identity and authentication information across a set of networked systems.\n Federation Assurance Level (FAL)\n A category that describes the process used in a federation transaction to communicate authentication events and subscriber attributes to an RP.\n hash function\n A function that maps a bit string of arbitrary length to a fixed-length bit string. Approved hash functions satisfy the following properties:\n\n \n \n One-way — It is computationally infeasible to find any input that maps to any pre-specified output.\n \n \n Collision-resistant — It is computationally infeasible to find any two distinct inputs that map to the same output.\n \n \n \n identifier\n A data object that is associated with a single, unique entity (e.g., individual, device, or session) within a given context and is never assigned to any other entity within that context.\n identity\n See digital identity\n Identity Assurance Level (IAL)\n A category that conveys the degree of confidence that the subject’s claimed identity is their real identity.\n identity evidence\n Information or documentation that supports the real-world existence of the claimed identity. Identity evidence may be physical (e.g., a driver’s license) or digital (e.g., a mobile driver’s license or digital assertion). 
Evidence must support both validation (i.e., confirming authenticity and accuracy) and verification (i.e., confirming that the applicant is the true owner of the evidence).\n identity proofing\n The processes used to collect, validate, and verify information about a subject in order to establish assurance in the subject’s claimed identity.\n identity provider (IdP)\n The party in a federation transaction that creates an assertion for the subscriber and transmits the assertion to the RP.\n identity resolution\n The process of collecting information about an applicant to uniquely distinguish an individual within the context of the population that the CSP serves.\n identity verification\n See verification\n injection attack\n An attack in which an attacker supplies untrusted input to a program. In the context of federation, the attacker presents an untrusted assertion or assertion reference to the RP in order to create an authenticated session with the RP.\n issuing source\n An authority responsible for the generation of data, digital evidence (i.e., assertions), or physical documents that can be used as identity evidence.\n knowledge-based verification (KBV)\n A process of validating knowledge of personal or private information associated with an individual for the purpose of verifying the claimed identity of an applicant. KBV does not include collecting personal attributes for the purposes of identity resolution.\n legal person\n An individual, organization, or company with legal rights.\n manageability\n Providing the capability for the granular administration of personally identifiable information, including alteration, deletion, and selective disclosure. [NISTIR8062]\n natural person\n A real-life human being, not synthetic or artificial.\n network\n An open communications medium, typically the Internet, used to transport messages between the claimant and other parties. 
Unless otherwise stated, no assumptions are made about the network’s security; it is assumed to be open and subject to active (e.g., impersonation, session hijacking) and passive (e.g., eavesdropping) attacks at any point between the parties (e.g., claimant, verifier, CSP, RP).\n non-repudiation\n The capability to protect against an individual falsely denying having performed a particular transaction.\n offline attack\n An attack in which the attacker obtains some data (typically by eavesdropping on an authentication transaction or by penetrating a system and stealing security files) that the attacker is able to analyze in a system of their own choosing.\n one-to-one (1:1) comparison\n The process in which a biometric sample from an individual is compared to a biometric reference to produce a comparison score.\n online attack\n An attack against an authentication protocol in which the attacker either assumes the role of a claimant with a genuine verifier or actively alters the authentication channel.\n online service\n A service that is accessed remotely via a network, typically the internet.\n personal information\n See personally identifiable information.\n personally identifiable information (PII)\n Information that can be used to distinguish or trace an individual’s identity, either alone or when combined with other information that is linked or linkable to a specific individual. [A-130]\n personally identifiable information processing\n An operation or set of operations performed upon personally identifiable information that can include the collection, retention, logging, generation, transformation, use, disclosure, transfer, or disposal of personally identifiable information.\n practice statement\n A formal statement of the practices followed by the parties to an authentication process (e.g., CSP or verifier). 
It usually describes the parties’ policies and practices and can become legally binding.\n predictability\n Enabling reliable assumptions by individuals, owners, and operators about PII and its processing by an information system. [NISTIR8062]\n private key\n In asymmetric key cryptography, the private key (i.e., a secret key) is a mathematical key used to create digital signatures and, depending on the algorithm, decrypt messages or files that are encrypted with the corresponding public key. In symmetric key cryptography, the same private key is used for both encryption and decryption.\n processing\n Operation or set of operations performed upon PII that can include, but is not limited to, the collection, retention, logging, generation, transformation, use, disclosure, transfer, and disposal of PII. [NISTIR8062]\n presentation attack\n Presentation to the biometric data capture subsystem with the goal of interfering with the operation of the biometric system.\n presentation attack detection (PAD)\n Automated determination of a presentation attack. A subset of presentation attack determination methods, referred to as liveness detection, involves the measurement and analysis of anatomical characteristics or voluntary or involuntary reactions, to determine if a biometric sample is being captured from a living subject that is present at the point of capture.\n process assistant\n An individual who provides support for the proofing process but does not support decision-making or risk-based evaluation (e.g., translation, transcription, or accessibility support).\n proofing agent\n An agent of the CSP who is trained to attend identity proofing sessions and can make limited risk-based decisions – such as physically inspecting identity evidence and making physical comparisons of the applicant to identity evidence.\n Privacy Impact Assessment (PIA)\n A method of analyzing how personally identifiable information (PII) is collected, used, shared, and maintained. 
PIAs are used to identify and mitigate privacy risks throughout the development lifecycle of a program or system. They also help ensure that handling information conforms to legal, regulatory, and policy requirements regarding privacy.\n pseudonym\n A name other than a legal name.\n pseudonymity\n The use of a pseudonym to identify a subject.\n pseudonymous identifier\n A meaningless but unique identifier that does not allow the RP to infer anything regarding the subscriber but that does permit the RP to associate multiple interactions with a single subscriber.\n public key\n The public part of an asymmetric key pair that is used to verify signatures or encrypt data.\n public key certificate\n A digital document issued and digitally signed by the private key of a certificate authority that binds an identifier to a subscriber’s public key. The certificate indicates that the subscriber identified in the certificate has sole control of and access to the private key. See also [RFC5280].\n public key infrastructure (PKI)\n A set of policies, processes, server platforms, software, and workstations used to administer certificates and public-private key pairs, including the ability to issue, maintain, and revoke public key certificates.\n registration\n See enrollment.\n relying party (RP)\n An entity that relies upon a verifier’s assertion of a subscriber’s identity, typically to process a transaction or grant access to information or a system.\n remote\n A process or transaction that is conducted through connected devices over a network, rather than in person.\n resolution\n See identity resolution.\n risk assessment\n The process of identifying, estimating, and prioritizing risks to organizational operations (i.e., mission, functions, image, or reputation), organizational assets, individuals, and other organizations that result from the operation of a system. 
A risk assessment is part of risk management, incorporates threat and vulnerability analyses, and considers mitigations provided by security controls that are planned or in-place. It is synonymous with “risk analysis.”\n risk management\n The program and supporting processes that manage information security risk to organizational operations (including mission, functions, image, reputation), organizational assets, individuals, and other organizations and includes (i) establishing the context for risk-related activities, (ii) assessing risk, (iii) responding to risk once determined, and (iv) monitoring risk over time.\n RP subscriber account\n An account established and managed by the RP in a federated system based on the RP’s view of the subscriber account from the IdP. An RP subscriber account is associated with one or more federated identifiers and allows the subscriber to access the account through a federation transaction with the IdP.\n Senior Agency Official for Privacy (SAOP)\n Person responsible for ensuring that an agency complies with privacy requirements and manages privacy risks. The SAOP is also responsible for ensuring that the agency considers the privacy impacts of all agency actions and policies that involve PII.\n social engineering\n The act of deceiving an individual into revealing sensitive information, obtaining unauthorized access, or committing fraud by associating with the individual to gain confidence and trust.\n subject\n A person, organization, device, hardware, network, software, or service. 
In these guidelines, a subject is a natural person.\n subscriber\n An individual enrolled in the CSP identity service.\n subscriber account\n An account established by the CSP containing information and authenticators registered for each subscriber enrolled in the CSP identity service.\n supplemental controls\n Controls that may be added, in addition to those specified in the organization’s tailored assurance level, in order to address specific threats or attacks.\n synthetic identity fraud\n The use of a combination of personally identifiable information (PII) to fabricate a person or entity in order to commit a dishonest act for personal or financial gain.\n system of record (SOR)\n An SOR is a collection of records that contain information about individuals and are under the control of an agency. The records can be retrieved by the individual’s name or by an identifying number, symbol, or other identifier.\n System of Record Notice (SORN)\n A notice that federal agencies publish in the Federal Register to describe their systems of records.\n transaction\n See digital transaction\n trust agreement\n A set of conditions under which a CSP, IdP, and RP are allowed to participate in a federation transaction for the purposes of establishing an authentication session between the subscriber and the RP.\n trusted referee\n An agent of the CSP who is trained to make risk-based decisions regarding an applicant’s identity proofing case when that applicant is unable to meet the expected requirements of a defined IAL proofing process.\n usability\n The extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use. [ISO/IEC9241-11]\n validation\n The process or act of checking and confirming that the evidence and attributes supplied by an applicant are authentic, accurate and associated with a real-life identity. 
Specifically, evidence validation is the process or act of checking that the presented evidence is authentic, current, and issued from an acceptable source. See also attribute validation.\n verification\n The process or act of confirming that the applicant undergoing identity proofing holds the claimed real-life identity represented by the validated identity attributes and associated evidence. Synonymous with “identity verification.”\n verifier\n An entity that verifies the claimant’s identity by verifying the claimant’s possession and control of one or more authenticators using an authentication protocol. To do this, the verifier needs to confirm the binding of the authenticators with the subscriber account and check that the subscriber account is active.\n\n"
} ,
{
"title" : "Note to Reviewers",
"category" : "",
"tags" : "",
"url" : "/800-63-4/sp800-63a/reviewers/",
"date" : "2024-08-28 20:39:12 -0500",
"content" : "Note to Reviewers\n\nIn December 2022, NIST released the Initial Public Draft (IPD) of SP 800-63, Revision 4. Over the course of a 119-day public comment period, the authors received exceptional feedback from a broad community of interested entities and individuals. The input from nearly 4,000 specific comments has helped advance the improvement of these Digital Identity Guidelines in a manner that supports NIST’s critical goals of providing foundational risk management processes and requirements that enable the implementation of secure, private, equitable, and accessible identity systems. Based on this initial wave of feedback, several substantive changes have been made across all of the volumes. These changes include but are not limited to the following:\n\n\n Updated text and context setting for risk management. Specifically, the authors have modified the process defined in the IPD to include a context-setting step of defining and understanding the online service that the organization is offering and intending to potentially protect with identity systems.\n Added recommended continuous evaluation metrics. The continuous improvement section introduced by the IPD has been expanded to include a set of recommended metrics for holistically evaluating identity solution performance. These are recommended due to the complexities of data streams and variances in solution deployments.\n Expanded fraud requirements and recommendations. Programmatic fraud management requirements for credential service providers and relying parties now address issues and challenges that may result from the implementation of fraud checks.\n Restructured the identity proofing controls. There is a new taxonomy and structure for the requirements at each assurance level based on the means of providing the proofing: Remote Unattended, Remote Attended (e.g., video session), Onsite Unattended (e.g., kiosk), and Onsite Attended (e.g., in-person).\n Integrated syncable authenticators. 
In April 2024, NIST published interim guidance for syncable authenticators. This guidance has been integrated into SP 800-63B as normative text and is provided for public feedback as part of the Revision 4 volume set.\n Added user-controlled wallets to the federation model. Digital wallets and credentials (called “attribute bundles” in SP 800-63C) are seeing increased attention and adoption. At their core, they function like a federated IdP, generating signed assertions about a subject. Specific requirements for this presentation and the emerging context are presented in SP 800-63C-4.\n\n\nThe rapid proliferation of online services over the past few years has heightened the need for reliable, equitable, secure, and privacy-protective digital identity solutions.\nRevision 4 of NIST Special Publication SP 800-63, Digital Identity Guidelines, intends to respond to the changing digital landscape that has emerged since the last major revision of this suite was published in 2017, including the real-world implications of online risks. 
The guidelines present the process and technical requirements for meeting digital identity management assurance levels for identity proofing, authentication, and federation, including requirements for security and privacy as well as considerations for fostering equity and the usability of digital identity solutions and technology.\n\nBased on the feedback provided in response to the June 2020 Pre-Draft Call for Comments, research into real-world implementations of the guidelines, market innovation, and the current threat environment, this draft seeks to:\n\n\n Address comments received in response to the IPD of Revision 4 of SP 800-63\n Clarify the text to address the questions and issues raised in the public comments\n Update all four volumes of SP 800-63 based on current technology and market developments, the changing digital identity threat landscape, and organizational needs for digital identity solutions to address online security, privacy, usability, and equity\n\n\nNIST is specifically interested in comments and recommendations on the following topics:\n\n\n \n Identity Proofing and Enrollment\n\n \n Is the updated structure of the requirements around defined types of proofing sufficiently clear? 
Are the types sufficiently described?\n Are there additional fraud program requirements that need to be introduced as a common baseline for CSPs and other organizations?\n Are the fraud requirements sufficiently described to allow for appropriate balancing of fraud, privacy, and usability trade-offs?\n Are the added identity evidence validation and authenticity requirements and performance metrics realistic and achievable with existing technology capabilities?\n \n \n \n General\n\n \n What specific implementation guidance, reference architectures, metrics, or other supporting resources could enable more rapid adoption and implementation of this and future iterations of the Digital Identity Guidelines?\n What applied research and measurement efforts would provide the greatest impacts on the identity market and advancement of these guidelines?\n \n \n\n\nReviewers are encouraged to comment and suggest changes to the text of all four draft volumes of the SP 800-63-4 suite. NIST requests that all comments be submitted by 11:59pm Eastern Time on October 7th, 2024. Please submit your comments to [email protected]. NIST will review all comments and make them available on the NIST Identity and Access Management website. Commenters are encouraged to use the comment template provided on the NIST Computer Security Resource Center website for responses to these notes to reviewers and for specific comments on the text of the four-volume suite.\n"
} ,
{
"title" : "Purpose",
"category" : "",
"tags" : "",
"url" : "/800-63-4/sp800-63a/preface/",
"date" : "2024-08-28 20:39:12 -0500",
"content" : "Preface\n\nThe purpose of this document, and associated companion volumes [SP800-63], [SP800-63B], and [SP800-63C], is to provide guidance to organizations for the processes and technologies for the management of digital identities at designated levels of assurance.\n\nThis document provides requirements for the identity proofing of individuals at each Identity Assurance Level (IAL) for the purposes of enrolling them into an identity service or providing them access to online resources. It applies to the identity proofing of individuals over a network or in person.\n"
} ,
{
"title" : "References",
"category" : "",
"tags" : "",
"url" : "/800-63-4/sp800-63a/references/",
"date" : "2024-08-28 20:39:12 -0500",
"content" : "References\n\nThis section is informative.\n\n[A-130] Office of Management and Budget (2016) Managing Information as a Strategic Resource. (The White House, Washington, DC), OMB Circular A-130, July 28, 2016. Available at https://obamawhitehouse.archives.gov/sites/default/files/omb/assets/OMB/circulars/a130/a130revised.pdf\n\n[COPPA] Children’s Online Privacy Protection Act of 1998, Pub. L. 105-277 Title XIII, 112 Stat. 2681-728. Available at https://www.govinfo.gov/app/details/PLAW-105publ277\n\n[EO13985] Biden J (2021) Advancing Racial Equity and Support for Underserved Communities Through the Federal Government. (The White House, Washington, DC), Executive Order 13985, January 25, 2021. https://www.federalregister.gov/documents/2021/01/25/2021-01753/advancing-racial-equity-and-support-for-underserved-communities-through-the-federal-government\n\n[EO13988] Biden J (2021) Preventing and Combating Discrimination on the Basis of Gender Identity or Sexual Orientation. (The White House, Washington, DC), Executive Order 13988, January 20, 2021. https://www.federalregister.gov/documents/2021/01/25/2021-01761/preventing-and-combating-discrimination-on-the-basis-of-gender-identity-or-sexual-orientation\n\n[EO13985-vision] Office of Management and Budget (2022) A Vision for Equitable Data: Recommendations from the Equitable Data Working Group. (The White House, Washington, DC), OMB Report Pursuant to Executive Order 13985, April 22, 2022. https://www.whitehouse.gov/wp-content/uploads/2022/04/eo13985-vision-for-equitable-data.pdf\n\n[E-Gov] E-Government Act of 2002, Pub. L. 107-347, 116 Stat. 2899. https://www.govinfo.gov/app/details/PLAW-107publ347\n\n[ISO/IEC9241-11] International Standards Organization (2018) ISO/IEC 9241-11 Ergonomics of human-system interaction – Part 11: Usability: Definitions and concepts (ISO, Geneva, Switzerland). 
Available at https://www.iso.org/standard/63500.html\n\n[ISO16982] International Standards Organization (2002) ISO/TR 16982:2002 Ergonomics of human-system interaction – Usability methods supporting human-centred design (ISO, Geneva, Switzerland). Available at https://www.iso.org/standard/31176.html\n\n[ISO25060] International Standards Organization (2023) ISO/TR 25060:2023 Systems and software engineering – Systems and software Quality Requirements and Evaluation (SQuaRE) – General framework for Common Industry Format (CIF) for usability-related information (ISO, Geneva, Switzerland). Available at https://www.iso.org/standard/83763.html\n\n[NISTIR8062] Brooks SW, Garcia ME, Lefkovitz NB, Lightman S, Nadeau EM (2017) An Introduction to Privacy Engineering and Risk Management in Federal Systems. (National Institute of Standards and Technology, Gaithersburg, MD), NIST Interagency or Internal Report (IR) 8062. https://doi.org/10.6028/NIST.IR.8062\n\n[NIST-Privacy] National Institute of Standards and Technology (2020) NIST Privacy Framework: A Tool for Improving Privacy Through Enterprise Risk Management, Version 1.0. (National Institute of Standards and Technology, Gaithersburg, MD), NIST Cybersecurity White Paper (CSWP) 10. https://doi.org/10.6028/NIST.CSWP.10\n\n[NIST-RMF] Joint Task Force (2018) Risk Management Framework for Information Systems and Organizations: A System Life Cycle Approach for Security and Privacy. (National Institute of Standards and Technology, Gaithersburg, MD), NIST Special Publication (SP) 800-37, Rev. 2. https://doi.org/10.6028/NIST.SP.800-37r2\n\n[OMB-Equity] Office of Management and Budget (July 20th, 2021). Study to Identify Methods to Assess Equity: Report to the President. https://www.whitehouse.gov/wp-content/uploads/2021/08/OMB-Report-on-E013985-Implementation_508-Compliant-Secure-v1.1.pdf\n\n[PrivacyAct] Privacy Act of 1974, Pub. L. 93-579, 5 U.S.C. § 552a, 88 Stat. 1896 (1974). 
https://www.govinfo.gov/content/pkg/USCODE-2020-title5/pdf/USCODE-2020-title5-partI-chap5-subchapII-sec552a.pdf\n\n[Section508] General Services Administration (2022) IT Accessibility Laws and Policies. Available at https://www.section508.gov/manage/laws-and-policies/\n\n[RFC5280] Cooper D, Santesson S, Farrell S, Boeyen S, Housley R, Polk W (2008) Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile. (Internet Engineering Task Force (IETF)), IETF Request for Comments (RFC) 5280. https://doi.org/10.17487/RFC5280\n\n[SP800-53] Joint Task Force (2020) Security and Privacy Controls for Information Systems and Organizations. (National Institute of Standards and Technology, Gaithersburg, MD), NIST Special Publication (SP) 800-53 Rev. 5, Includes updates as of December 10, 2020. https://doi.org/10.6028/NIST.SP.800-53r5\n\n[SP800-63] Temoshok D, Proud-Madruga D, Choong YY, Galluzzo R, Gupta S, LaSalle C, Lefkovitz N, Regenscheid A (2024) Digital Identity Guidelines. (National Institute of Standards and Technology, Gaithersburg, MD), NIST Special Publication (SP) 800-63-4 2pd. https://doi.org/10.6028/NIST.SP.800-63-4.2pd\n\n[SP800-63B] Temoshok D, Fenton JL, Choong YY, Lefkovitz N, Regenscheid A, Galluzzo R, Richer JP (2024) Digital Identity Guidelines: Authentication and Authenticator Management. (National Institute of Standards and Technology, Gaithersburg, MD), NIST Special Publication (SP) 800-63B-4 2pd. https://doi.org/10.6028/NIST.SP.800-63b-4.2pd\n\n[SP800-63C] Temoshok D, Richer JP, Choong YY, Fenton JL, Lefkovitz N, Regenscheid A, Galluzzo R (2024) Digital Identity Guidelines: Federation and Assertions. (National Institute of Standards and Technology, Gaithersburg, MD), NIST Special Publication (SP) 800-63C-4 2pd. https://doi.org/10.6028/NIST.SP.800-63c-4.2pd\n\n[SP800-161] Boyens J, Smith A, Bartol N, Winkler K, Holbrook A (2022) Cybersecurity Supply Chain Risk Management Practices for Systems and Organizations. 
(National Institute of Standards and Technology, Gaithersburg, MD) NIST Special Publication (SP) 800-161r1. https://doi.org/10.6028/NIST.SP.800-161r1\n\n"
} ,
{
"title" : "Introduction",
"category" : "",
"tags" : "",
"url" : "/800-63-4/sp800-63b/introduction/",
"date" : "2024-08-28 20:39:12 -0500",
"content" : "Introduction\n\nThis section is informative.\n\nAuthentication is the process of determining the validity of one or more authenticators used to claim a digital identity by establishing that a subject attempting to access a digital service is in control of the secrets used to authenticate. If return visits are applicable to a service, successful authentication provides reasonable risk-based assurance that the subject accessing the service today is the same as the one who previously accessed the service. One-time services (where the subscriber will only ever access the service once) do not necessarily require the issuance of authenticators to support persistent digital authentication.\n\nThe authentication of claimants is central to the process of associating a subscriber with their online activity as recorded in their subscriber account, which is maintained by a credential service provider (CSP). Authentication is performed by verifying that the claimant controls one or more authenticators (called tokens in some earlier editions of SP 800-63) associated with a given subscriber account. The authentication process is conducted by a verifier, which is a role of the CSP or — in federated authentication — of an identity provider (IdP). Upon successful authentication, the verifier asserts an identifier to the relying party (RP). Optionally, the verifier may assert additional attributes to the RP.\n\nThis document provides recommendations on types of authentication processes, including choices of authenticators, that may be used at various Authentication Assurance Levels (AALs). It also provides recommendations on events that may occur during the lifetime of authenticators, including initial issuance, maintenance, and invalidation in the event of loss or theft of the authenticator.\n\nThis technical guideline applies to the digital authentication of subjects to systems over a network. 
It also requires that verifiers and RPs participating in authentication protocols be authenticated to claimants to assure the identity of the services with which they are authenticating. It does not address the authentication of a person for physical access (e.g., to a building). However, some credentials used for digital access may also be used for physical access authentication as described in [SP800-116].\n\nAn AAL characterizes the strength of an authentication transaction as an ordinal category. Stronger authentication (i.e., a higher AAL) requires malicious actors to have better capabilities and to expend greater resources to successfully subvert the authentication process. Authentication at higher AALs can effectively reduce the risk of attacks. A high-level summary of the technical requirements for each of the AALs is provided below; see Sec. 2 and Sec. 3 of this document for specific normative requirements.\n\nAuthentication Assurance Level 1: AAL1 provides basic confidence that the claimant controls an authenticator bound to the subscriber account being authenticated. AAL1 requires only single-factor authentication using a wide range of available authentication technologies. However, it is recommended that applications assessed at AAL1 offer multi-factor authentication options. Successful authentication requires that the claimant prove possession and control of the authenticator.\n\nAuthentication Assurance Level 2: AAL2 provides high confidence that the claimant controls one or more authenticators bound to the subscriber account being authenticated. Proof of the possession and control of two distinct authentication factors is required. Applications assessed at AAL2 must offer a phishing-resistant authentication option.\n\nAuthentication Assurance Level 3: AAL3 provides very high confidence that the claimant controls one or more authenticators bound to the subscriber account being authenticated. 
Authentication at AAL3 is based on the proof of possession of a key through the use of a public-key cryptographic protocol. AAL3 authentication requires a hardware-based authenticator with a non-exportable private key and a phishing-resistant authenticator (see Sec. 3.2.5); the same device may fulfill both requirements. To authenticate at AAL3, claimants are required to prove possession and control of two distinct authentication factors.\n\nWhen a session has been authenticated at a given AAL and a higher AAL is required, an authentication process may also provide step-up authentication to raise the session’s AAL.\n\nNotations\n\nThis guideline uses the following typographical conventions in text:\n\n\n Specific terms in CAPITALS represent normative requirements. When these same terms are not in CAPITALS, the term does not represent a normative requirement.\n \n The terms “SHALL” and “SHALL NOT” indicate requirements to be strictly followed in order to conform to the publication and from which no deviation is permitted.\n The terms “SHOULD” and “SHOULD NOT” indicate that among several possibilities, one is recommended as particularly suitable without mentioning or excluding others, that a certain course of action is preferred but not necessarily required, or that (in the negative form) a certain possibility or course of action is discouraged but not prohibited.\n The terms “MAY” and “NEED NOT” indicate a course of action that is permissible within the limits of the publication.\n The terms “CAN” and “CANNOT” indicate a material, physical, or causal possibility and capability or — in the negative — the absence of that possibility or capability.\n \n \n\n\nDocument Structure\n\nThis document is organized as follows. Each section is labeled as either normative (i.e., mandatory for compliance) or informative (i.e., not mandatory).\n\n\n Section 1 introduces the document. 
This section is informative.\n Section 2 describes requirements for Authentication Assurance Levels. This section is normative.\n Section 3 describes requirements for authenticator and verifier requirements. This section is normative.\n Section 4 describes requirements for authenticator event management. This section is normative.\n Section 5 describes requirements for session management. This section is normative.\n Section 6 provides security considerations. This section is informative.\n Section 7 provides privacy considerations. This section is informative.\n Section 8 provides usability considerations. This section is informative.\n Section 9 provides equity considerations. This section is informative.\n The References section lists publications that are referred to in this document. This section is informative.\n Appendix A discusses the strength of passwords. This appendix is informative.\n Appendix B discusses syncable authenticators. This appendix is normative.\n Appendix C contains a selected list of abbreviations used in this document. This appendix is informative.\n Appendix D contains a glossary of selected terms used in this document. This appendix is informative.\n Appendix E contains a summarized list of changes in this document’s history. This appendix is informative.\n\n"
} ,
{
"title" : "Authentication Assurance Levels",
"category" : "",
"tags" : "",
"url" : "/800-63-4/sp800-63b/aal/",
"date" : "2024-08-28 20:39:12 -0500",
"content" : "Authentication Assurance Levels\n\nThis section is normative.\n\nTo satisfy the requirements of a given AAL and be recognized as a subscriber, a claimant SHALL authenticate to an RP or IdP as described in [SP800-63C] with a process whose strength is equal to or greater than the requirements at that level. The authentication process results in an identifier that uniquely identifies the subscriber each time they authenticate to that RP. The identifier MAY be pseudonymous. Other attributes that identify the subscriber as a unique subject MAY also be provided.\n\nDetailed normative requirements for authenticators and verifiers at each AAL are provided in Sec. 3. See [SP800-63] Sec. 3 for details on how to choose the most appropriate AAL.\n\nPersonal information collected during and after identity proofing (described in [SP800-63A]) MAY be made available to the subscriber by the digital identity service through the subscriber account. The release or online availability of any personally identifiable information (PII) or other personal information by federal agencies requires multi-factor authentication in accordance with [EO13681]. Therefore, federal agencies SHALL select a minimum of AAL2 when PII or other personal information is made available online.\n\nAt all AALs, pre-authentication checks MAY be used to lower the risk of misauthentication. For example, authentication from an unexpected geolocation or IP address block (e.g., a cloud service) might prompt the use of additional risk-based controls. Where used, CSPs or verifiers SHALL assess their pre-authentication checks for efficacy and to identify and mitigate potential disparate impacts on their user populations. CSPs or verifiers SHALL include pre-authentication checks in the authentication privacy risk assessment. 
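The pre-authentication checks described above might look like the following sketch; all names, address ranges, and signals are illustrative assumptions, not values from these guidelines:

```python
# Illustrative sketch of a pre-authentication risk check: attempts from an
# unexpected geolocation or a known cloud IP range trigger additional
# risk-based controls. The check never changes the AAL of the transaction.

KNOWN_CLOUD_RANGES = {"203.0.113.", "198.51.100."}  # hypothetical prefixes

def pre_authentication_risk(ip_address: str, country: str,
                            usual_countries: set[str]) -> bool:
    """Return True if additional risk-based controls should be applied."""
    if any(ip_address.startswith(p) for p in KNOWN_CLOUD_RANGES):
        return True  # request originates from a cloud service address block
    if country not in usual_countries:
        return True  # unexpected geolocation for this subscriber
    return False
```

As required above, such checks would also be assessed for efficacy and for disparate impacts on user populations.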
Pre-authentication checks do not impact or change the AAL of a transaction or substitute for an authentication factor.\n\nThroughout this document, [FIPS140] requirements are satisfied by the latest edition of FIPS 140. Legacy FIPS 140 certifications MAY also be used while still valid.\n\nAuthentication Assurance Level 1\n\nAAL1 provides basic confidence that the claimant controls an authenticator bound to the subscriber account. AAL1 requires either single-factor or multi-factor authentication using a wide range of available authentication technologies. Verifiers SHOULD make multi-factor authentication options available at AAL1 and encourage their use. Successful authentication requires that the claimant prove possession and control of the authenticator through a secure authentication protocol.\n\nPermitted Authenticator Types\n\nAAL1 authentication SHALL use any of the following authentication types, which are further defined in Sec. 3:\n\n\n Password (Sec. 3.1.1): A memorizable secret typically chosen by the subscriber\n Look-up secret (Sec. 3.1.2): A secret determined by the claimant by looking up a prompted value in a list held by the subscriber\n Out-of-band device (Sec. 3.1.3): A secret sent or received through a separate communication channel with the subscriber\n Single-factor one-time password (OTP) (Sec. 3.1.4): A one-time secret obtained from a device or application held by the subscriber\n Multi-factor OTP (Sec. 3.1.5): A one-time secret obtained from a device or application held by the subscriber that requires activation by a second authentication factor\n Single-factor cryptographic authentication (Sec. 3.1.6): Proof of possession and control via an authentication protocol of a cryptographic key held by the subscriber.\n Multi-Factor cryptographic authentication (Sec. 
3.1.7): Proof of possession and control via an authentication protocol of a cryptographic key held by the subscriber that requires activation by a second authentication factor\n\n\nAuthenticator and Verifier Requirements\n\nAuthenticators used at AAL1 SHALL use approved cryptography. In other words, they must use approved algorithms, but the implementation need not be validated under [FIPS140].\n\nCommunication between the claimant and verifier SHALL occur via one or more authenticated protected channels.\n\nCryptography used by verifiers operated by or on behalf of federal agencies at AAL1 SHALL be validated to meet the requirements of [FIPS140] Level 1.\n\nReauthentication\n\nThese guidelines provide for two types of timeouts, which are further described in Sec. 5.2:\n\n\n An overall timeout limits the duration of an authenticated session to a specified period following authentication or a previous reauthentication.\n An inactivity timeout terminates a session that has not had activity from the subscriber for a specified period.\n\n\nPeriodic reauthentication of subscriber sessions SHALL be performed, as described in Sec. 5.2. A definite reauthentication overall timeout SHALL be established, which SHOULD be no more than 30 days at AAL1. An inactivity timeout MAY be applied but is not required at AAL1.\n\nAuthentication Assurance Level 2\n\nAAL2 provides high confidence that the claimant controls one or more authenticators that are bound to the subscriber account. Proof of possession and control of two distinct authentication factors is required through the use of secure authentication protocols. Approved cryptographic techniques are required.\n\nPermitted Authenticator Types\n\nAt AAL2, authentication SHALL use either a multi-factor authenticator or a combination of two single-factor authenticators. 
A multi-factor authenticator requires two factors to execute a single authentication event, such as a cryptographically secure device with an integrated biometric sensor that is required to activate the device. Authenticator requirements are specified in Sec. 3.\n\nWhen a multi-factor authenticator is used, any of the following MAY be used:\n\n\n Multi-factor Out-of-band authenticator (Sec. 3.1.3.4)\n Multi-factor OTP (Sec. 3.1.5)\n Multi-factor cryptographic authentication (Sec. 3.1.7)\n\n\nWhen a combination of two single-factor authenticators is used, the combination SHALL include a password (Sec. 3.1.1) and one physical authenticator (i.e., “something you have”) from the following list:\n\n\n Look-up secret (Sec. 3.1.2)\n Out-of-band device (Sec. 3.1.3)\n Single-factor OTP (Sec. 3.1.4)\n Single-factor cryptographic authentication (Sec. 3.1.6)\n\n\nA biometric characteristic is not recognized as an authenticator by itself. When biometric authentication meets the requirements in Sec. 3.2.3, a physical authenticator is authenticated along with the biometric. The physical authenticator then serves as “something you have,” while the biometric match serves as “something you are.” When a biometric comparison is used as an activation factor for a multi-factor authenticator, the authenticator itself serves as the physical authenticator.\n\nAuthenticator and Verifier Requirements\n\nAuthenticators used at AAL2 SHALL use approved cryptography. Cryptographic authenticators procured by federal agencies SHALL be validated to meet the requirements of [FIPS140] Level 1. At least one authenticator used at AAL2 SHALL be replay-resistant, as described in Sec. 3.2.7. Authentication at AAL2 SHOULD demonstrate authentication intent from at least one authenticator, as discussed in Sec. 
3.2.8.\n\nCommunication between the claimant and verifier SHALL occur via one or more authenticated protected channels.\n\nCryptography used by verifiers operated by or on behalf of federal agencies at AAL2 SHALL be validated to meet the requirements of [FIPS140] Level 1.\n\nWhen a biometric factor is used in authentication at AAL2, the performance requirements stated in Sec. 3.2.3 SHALL be met, and the verifier SHALL determine that the biometric sensor and subsequent processing meet these requirements.\n\nVerifiers SHALL offer at least one phishing-resistant authentication option at AAL2, as described in Sec. 3.2.5. Federal agencies SHALL require their staff, contractors, and partners to use phishing-resistant authentication to access federal information systems. In all cases, verifiers SHOULD encourage the use of phishing-resistant authentication at AAL2 whenever practical since phishing is a significant threat vector.\n\nReauthentication\n\nPeriodic reauthentication of subscriber sessions SHALL be performed as described in Sec. 5.2. A definite reauthentication overall timeout SHALL be established, which SHOULD be no more than 24 hours at AAL2. The inactivity timeout SHOULD be no more than 1 hour. When the inactivity timeout has occurred but the overall timeout has not yet occurred, the verifier MAY allow the subscriber to reauthenticate using only a successful password or biometric comparison in conjunction with the session secret.\n\nAuthentication Assurance Level 3\n\nAAL3 provides very high confidence that the claimant controls authenticators that are bound to the subscriber account. Authentication at AAL3 is based on the proof of possession of a key through the use of a cryptographic protocol along with either an activation factor or a password. AAL3 authentication requires the use of a hardware-based authenticator that provides phishing resistance. 
Approved cryptographic techniques are required.\n\n\\clearpage\n\n\nPermitted Authenticator Types\n\nAAL3 authentication SHALL require one of the following authenticator combinations:\n\n\n Multi-factor cryptographic authentication (Sec. 3.1.7)\n Single-factor cryptographic authentication (Sec. 3.1.6) used in conjunction with a password (Sec. 3.1.1)\n\n\nAuthenticator and Verifier Requirements\n\nAuthenticators used at AAL3 SHALL use approved cryptography. Communication between the claimant and verifier SHALL occur via one or more authenticated protected channels. The cryptographic authenticator used at AAL3 SHALL be hardware-based and SHALL provide phishing resistance, as described in Sec. 3.2.5. The cryptographic authentication protocol SHALL be replay-resistant as described in Sec. 3.2.7. All authentication and reauthentication processes at AAL3 SHALL demonstrate authentication intent from at least one authenticator as described in Sec. 3.2.8.\n\nMulti-factor authenticators used at AAL3 SHALL be hardware cryptographic modules that are validated at [FIPS140] Level 2 or higher overall with at least [FIPS140] Level 3 physical security. Single-factor cryptographic authenticators used at AAL3 SHALL be validated at [FIPS140] Level 1 or higher overall with at least [FIPS140] Level 3 physical security. AAL3 protects the verifier from compromise through the use of public-key cryptography since the verifier does not possess the private key required to authenticate.\n\nCryptography used by verifiers at AAL3 SHALL be validated at [FIPS140] Level 1 or higher.\n\nHardware-based authenticators and verifiers at AAL3 SHOULD resist relevant side-channel (e.g., timing and power-consumption analysis) attacks.\n\nWhen a biometric factor is used in authentication at AAL3, the verifier SHALL determine that the biometric sensor and subsequent processing meet the performance requirements stated in Sec. 
3.2.3.\n\nReauthentication\n\nPeriodic reauthentication of subscriber sessions SHALL be performed, as described in Sec. 5.2. At AAL3, the overall timeout for reauthentication SHALL be no more than 12 hours. The inactivity timeout SHOULD be no more than 15 minutes. Unlike AAL2, AAL3 reauthentication requirements are the same as for initial authentication at AAL3.\n\n\\clearpage\n\n\nGeneral Requirements\n\nThe following requirements apply to authentication at all AALs.\n\nSecurity Controls\n\nThe CSP SHALL employ appropriately tailored security controls from the moderate baseline security controls defined in [SP800-53] or an equivalent federal (e.g., [FEDRAMP]) or industry standard that the organization has chosen for the information systems, applications, and online services that these guidelines are used to protect. The CSP SHALL ensure that the minimum assurance-related controls for the appropriate system are satisfied.\n\nRecords Retention Policy\n\nThe CSP SHALL comply with its respective records retention policies in accordance with applicable laws, regulations, and policies, including any National Archives and Records Administration (NARA) records retention schedules that may apply. If the CSP opts to retain records in the absence of mandatory requirements, the CSP SHALL conduct a risk management process, including assessments of privacy and security risks, to determine how long records should be retained and SHALL inform the subscriber of that retention policy.\n\nPrivacy Requirements\n\nThe CSP SHALL employ appropriately tailored privacy controls defined in [SP800-53] or an equivalent industry standard.\n\nIf CSPs process attributes for purposes other than identity service (i.e., identity proofing, authentication, or attribute assertions), related fraud mitigation, or compliance with laws or legal process, CSPs SHALL implement measures to maintain predictability and manageability commensurate with the privacy risks that arise from the additional processing. 
Examples of such measures include providing clear notice, obtaining subscriber consent, and enabling the selective use or disclosure of attributes. When CSPs use consent measures, CSPs SHALL NOT make consent for the additional processing a condition of the identity service.\n\nRegardless of whether the CSP is an agency or private-sector provider, the following requirements apply to a federal agency that offers or uses the authentication service:\n\n\n The agency SHALL consult with their Senior Agency Official for Privacy (SAOP) and conduct an analysis to determine whether the collection of PII to issue or maintain authenticators triggers the requirements of the Privacy Act of 1974 [PrivacyAct] (see Sec. 7.4).\n The agency SHALL publish a System of Records Notice (SORN) to cover such collections, as applicable.\n The agency SHALL consult with its SAOP and conduct an analysis to determine whether the collection of PII to issue or maintain authenticators triggers the requirements of the E-Government Act of 2002 [E-Gov].\n The agency SHALL publish a Privacy Impact Assessment (PIA) to cover such collection, as applicable.\n\n\nRedress Requirements\n\nThe CSP and verifier SHALL provide mechanisms for the redress of subscriber complaints and for problems that arise from subscriber authentication processes as described in Sec. 5.6 of SP 800-63. These mechanisms SHALL be easy for subscribers to find and use. The CSP SHALL assess the mechanisms for efficacy in resolving complaints or problems.\n\nSummary of Requirements\n\nFigure 1 provides a non-normative summary of the requirements for each of the AALs.\n\nFig. 1. Summary of requirements by AAL\n\n\n\n"
} ,
{
"title" : "Authenticators",
"category" : "",
"tags" : "",
"url" : "/800-63-4/sp800-63b/authenticators/",
"date" : "2024-08-28 20:39:12 -0500",
"content" : "Authenticator and Verifier Requirements\n\nThis section is normative.\n\nThis section provides detailed requirements that are specific to each type of authenticator. With the exception of the reauthentication requirements specified in Sec. 2 and the requirement for phishing resistance at AAL3 described in Sec. 3.2.5, the technical requirements for each authenticator type are the same regardless of the AAL at which the authenticator is used.\n\nIn many circumstances, users need to share devices that are used in authentication processes, such as a family phone that receives OTPs. In public-facing applications, CSPs SHOULD NOT prevent a device from being registered as an authenticator by multiple subscribers. However, they MAY establish appropriate restrictions to prevent large-scale fraud or misuse.\n\nAuthentication, authenticator binding (see Sec. 4), and session maintenance (see Sec. 5) are based on proof of possession of one or more types of secrets, as shown in Table 1.\n\nTable 1. 
Summary of secrets (non-normative)\n\n\n \n \n Type of Secret\n Purpose\n Reference Section\n \n \n \n \n Password\n A subscriber-chosen secret used as an authentication factor\n 3.1.1\n \n \n Look-up secret\n A secret issued by a verifier and used only once to prove possession of the secret\n 3.1.2\n \n \n Out-of-band secret\n A short-lived secret generated by a verifier and independently sent to a subscriber’s device to verify its possession\n 3.1.3\n \n \n One-time passcodes (OTP)\n A secret generated by an authenticator and used only once to prove possession of the authenticator\n 3.1.4, 3.1.5\n \n \n Activation secret\n A password that is used locally as an activation factor for a multi-factor authenticator\n 3.2.10\n \n \n Long-term authenticator secret\n A secret embedded in a physical authenticator to allow it to function for authentication\n 4.1\n \n \n Recovery code\n A secret issued to the subscriber to allow them to recover an account at which they are no longer able to authenticate\n 4.2\n \n \n Session secret\n A secret issued by the verifier at authentication and used to establish the continuity of authenticated sessions\n 5.1\n \n \n\n\nRequirements by Authenticator Type\n\nThe following requirements apply to specific authenticator types.\n\nPasswords\n\nA password (sometimes referred to as a passphrase or, if numeric, a PIN) is a secret value intended to be chosen and either memorized or recorded by the subscriber. Passwords must be of sufficient complexity and secrecy that it would be impractical for an attacker to guess or otherwise discover the correct secret value. A password is “something you know”.\n\nThe requirements in this section apply to centrally verified passwords that are used as independent authentication factors and sent over an authenticated protected channel to the verifier of a CSP. Passwords used locally as an activation factor for a multi-factor authenticator are referred to as activation secrets and discussed in Sec. 
3.2.10.\n\nPasswords are not phishing-resistant.\n\nPassword Authenticators\n\nPasswords SHALL either be chosen by the subscriber or assigned randomly by the CSP.\n\nIf the CSP disallows a chosen password because it is on a blocklist of commonly used, expected, or compromised values (see Sec. 3.1.1.2), the subscriber SHALL be required to choose a different password. Other complexity requirements for passwords SHALL NOT be imposed. A rationale for this is presented in Appendix A, Strength of Passwords.\n\nPassword Verifiers\n\nThe following requirements apply to passwords:\n\n\n Verifiers and CSPs SHALL require passwords to be a minimum of eight characters in length and SHOULD require passwords to be a minimum of 15 characters in length.\n Verifiers and CSPs SHOULD permit a maximum password length of at least 64 characters.\n Verifiers and CSPs SHOULD accept all printing ASCII [RFC20] characters and the space character in passwords.\n Verifiers and CSPs SHOULD accept Unicode [ISO/IEC 10646] characters in passwords. Each Unicode code point SHALL be counted as a single character when evaluating password length.\n Verifiers and CSPs SHALL NOT impose other composition rules (e.g., requiring mixtures of different character types) for passwords.\n Verifiers and CSPs SHALL NOT require users to change passwords periodically. 
However, verifiers SHALL force a change if there is evidence of compromise of the authenticator.\n Verifiers and CSPs SHALL NOT permit the subscriber to store a hint that is accessible to an unauthenticated claimant.\n Verifiers and CSPs SHALL NOT prompt subscribers to use knowledge-based authentication (KBA) (e.g., “What was the name of your first pet?”) or security questions when choosing passwords.\n Verifiers SHALL verify the entire submitted password (i.e., not truncate it).\n\n\nIf Unicode characters are accepted in passwords, the verifier SHOULD apply the normalization process for stabilized strings using either the NFKC or NFKD normalization defined in Sec. 12.1 of Unicode Normalization Forms [UAX15]. This process is applied before hashing the byte string that represents the password. Subscribers choosing passwords that contain Unicode characters SHOULD be advised that some endpoints may represent some characters differently, which would affect their ability to authenticate successfully.\n\n\\clearpage\n\n\nWhen processing a request to establish or change a password, verifiers SHALL compare the prospective secret against a blocklist that contains known commonly used, expected, or compromised passwords. The entire password SHALL be subject to comparison, not substrings or words that might be contained therein. For example, the list MAY include but is not limited to:\n\n\n Passwords obtained from previous breach corpuses\n Dictionary words\n Context-specific words, such as the name of the service, the username, and derivatives thereof\n\n\nIf the chosen password is found on the blocklist, the CSP or verifier SHALL require the subscriber to select a different secret and SHALL provide the reason for rejection. 
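A minimal sketch of this blocklist comparison, with illustrative (non-normative) list contents and function names:

```python
# Hypothetical sketch of the blocklist check described above: the entire
# submitted password is compared against the list (no substring matching),
# and a rejected choice is returned together with the reason, as required.

BLOCKLIST = {
    "password", "12345678", "qwerty123",  # from breach corpuses (illustrative)
    "sunshine", "baseball",               # dictionary words (illustrative)
}

def check_new_password(candidate: str, service_name: str, username: str):
    """Return (accepted, reason). Parameter names are assumptions."""
    if len(candidate) < 8:
        return (False, "password must be at least 8 characters")
    # Context-specific words: the service name, the username, and derivatives.
    context = {service_name.lower(), username.lower()}
    if candidate.lower() in BLOCKLIST or candidate.lower() in context:
        return (False, "password is on a list of commonly used, expected, "
                       "or compromised passwords")
    return (True, "ok")
```

A production blocklist would be far larger and sourced from breach corpuses; the case-insensitive comparison here is a design choice, not a requirement of these guidelines.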
Since the blocklist is used to defend against brute-force attacks and unsuccessful attempts are rate-limited, as described below, the blocklist SHOULD be of sufficient size to prevent subscribers from choosing passwords that attackers are likely to guess before reaching the attempt limit.\n\n\n Excessively large blocklists are of little incremental security benefit because the blocklist is used to defend against online attacks, which are already limited by the throttling requirements described in Sec. 3.2.2.\n\n\nVerifiers SHALL offer guidance to the subscriber to assist the user in choosing a strong password. This is particularly important following the rejection of a password on the blocklist as it discourages trivial modification of listed weak passwords [Blocklists].\n\nVerifiers SHALL implement a rate-limiting mechanism that effectively limits the number of failed authentication attempts that can be made on the subscriber account, as described in Sec. 3.2.2.\n\nVerifiers SHALL allow the use of password managers. Verifiers SHOULD permit claimants to use the “paste” functionality when entering a password to facilitate their use. Password managers have been shown to increase the likelihood that users will choose stronger passwords, particularly if the password managers include password generators [Managers].\n\nTo assist the claimant in successfully entering a password, the verifier SHOULD offer an option to display the secret — rather than a series of dots or asterisks — while it is entered and until it is submitted to the verifier. This allows the claimant to confirm their entry if they are in a location where their screen is unlikely to be observed. The verifier MAY also permit the claimant’s device to display individual entered characters for a short time after each character is typed to verify the correct entry. 
This is common on mobile devices.\n\nVerifiers MAY make allowances for mistyping, such as removing leading and trailing whitespace characters before verification or allowing the verification of passwords with differing cases for the leading character, provided that passwords remain at least the required minimum length after such processing.\n\nVerifiers and CSPs SHALL use approved encryption and an authenticated protected channel when requesting passwords.\n\nVerifiers SHALL store passwords in a form that is resistant to offline attacks. Passwords SHALL be salted and hashed using a suitable password hashing scheme. Password hashing schemes take a password, a salt, and a cost factor as inputs and generate a password hash. Their purpose is to make each password guess more expensive for an attacker who has obtained a hashed password file, thereby making the cost of a guessing attack high or prohibitive. The chosen cost factor SHOULD be as high as practical without negatively impacting verifier performance. It SHOULD be increased over time to account for increases in computing performance. An approved password hashing scheme published in the latest revision of [SP800-132] or updated NIST guidelines on password hashing schemes SHOULD be used. The chosen output length of the password verifier, excluding the salt and versioning information, SHOULD be the same as the length of the underlying password hashing scheme output.\n\nThe salt SHALL be at least 32 bits in length and chosen to minimize salt value collisions among stored hashes. Both the salt value and the resulting hash SHALL be stored for each password. A reference to the password hashing scheme used, including the work factor, SHOULD be stored for each password to allow migration to new algorithms and work factors. 
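A minimal sketch of this storage scheme, assuming PBKDF2 with HMAC-SHA-256 as the password hashing scheme and illustrative work-factor parameters (actual cost factors should be as high as practical for the verifier):

```python
import hashlib
import hmac
import os

# Illustrative scheme reference stored alongside each record so that
# records can later be migrated to stronger algorithms or work factors.
SCHEME = {"name": "pbkdf2_sha256", "iterations": 100_000, "dklen": 32}

def hash_password(password: bytes) -> dict:
    salt = os.urandom(16)  # 128-bit salt, well above the 32-bit minimum
    digest = hashlib.pbkdf2_hmac("sha256", password, salt,
                                 SCHEME["iterations"], dklen=SCHEME["dklen"])
    # Store the salt, the hash, and a reference to the scheme used.
    return {"scheme": dict(SCHEME), "salt": salt, "hash": digest}

def verify_password(password: bytes, record: dict) -> bool:
    s = record["scheme"]
    digest = hashlib.pbkdf2_hmac("sha256", password, record["salt"],
                                 s["iterations"], dklen=s["dklen"])
    # Constant-time comparison of the recomputed hash.
    return hmac.compare_digest(digest, record["hash"])
```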
For example, for the Password-Based Key Derivation Function 2 (PBKDF2) [SP800-132], the cost factor is an iteration count: the more times that the PBKDF2 function is iterated, the longer it takes to compute the password hash.\n\nIn addition, verifiers SHOULD perform an additional iteration of a keyed hashing or encryption operation using a secret key known only to the verifier. If used, this key value SHALL be generated by an approved random bit generator, as described in Sec. 3.2.12. The secret key value SHALL be stored separately from the hashed passwords. It SHOULD be stored and used within a hardware-protected area, such as a hardware security module or trusted execution environment (TEE). With this additional iteration, brute-force attacks on the hashed passwords are impractical as long as the secret key value remains secret.\n\nLook-Up Secrets\n\nA look-up secret authenticator is a physical or electronic record that stores a set of secrets shared between the claimant and the CSP. The claimant uses the authenticator to look up the appropriate secrets needed to respond to a prompt from the verifier. For example, the verifier could ask a claimant to provide a specific subset of the numeric or character strings printed on a card in table format. A typical application of look-up secrets is for one-time saved recovery codes (see Sec. 4.2.1.1) that the subscriber stores for use if another authenticator is lost or malfunctions. A look-up secret is “something you have.”\n\nLook-up secrets are not phishing-resistant.\n\nLook-Up Secret Authenticators\nCSPs that create look-up secret authenticators SHALL use an approved random bit generator, as described in Sec. 3.2.12, to generate the list of secrets and SHALL deliver the authenticator list securely to the subscriber (e.g., in an in-person session, via a session authenticated by the subscriber at AAL2 or higher, or through the postal mail to a contact address). 
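Generation of such a list of look-up secrets might be sketched as follows, with Python's `secrets` CSPRNG standing in for an approved random bit generator and illustrative count and length parameters:

```python
import secrets

def generate_lookup_secrets(count: int = 10, digits: int = 8) -> list[str]:
    # Each secret is drawn independently from a CSPRNG (a stand-in for
    # an approved random bit generator) and is at least six decimal
    # digits long, zero-padded to a fixed width.
    return [f"{secrets.randbelow(10**digits):0{digits}d}"
            for _ in range(count)]
```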
Look-up secrets SHALL be at least six decimal digits (or equivalent) in length. Additional requirements described in Sec. 3.1.2.2 may also apply, depending on their length.\n\nLook-up secrets MAY be distributed by the CSP in person, by postal mail to a contact address for the subscriber, or by online distribution. If distributed online, look-up secrets SHALL be distributed over a secure channel in accordance with the post-enrollment binding requirements in Sec. 4.1.2.\n\nLook-Up Secret Verifiers\n\nVerifiers of look-up secrets SHALL prompt the claimant for the next secret from their authenticator or a specific (e.g., numbered) secret. A secret from a look-up secret authenticator SHALL be used successfully only once. If the look-up secret is derived from a grid card, each grid cell SHOULD be used only once, which limits the number of authentications that can be accomplished using look-up secrets. Otherwise, a very long list of secrets is required.\n\nVerifiers SHALL store look-up secrets in a form that is resistant to offline attacks. All look-up secrets SHALL be stored in a hashed form using an approved hashing function.\n\nLook-up secrets SHALL be at least six decimal digits (or equivalent) in length, as specified in Sec. 3.1.2.1. Look-up secrets that are shorter than specified lengths have additional verification requirements as follows:\n\n\n \n Look-up secrets that are shorter than the minimum security strength specified in the latest revision of [SP800-131A] (112 bits as of the date of this publication) SHALL be stored in a salted and hashed form using a suitable password hashing scheme, as described in Sec. 3.1.1.2. The salt value SHALL be at least 32 bits in length and arbitrarily chosen to minimize salt value collisions among stored hashes. Both the salt value and the resulting hash SHALL be stored for each look-up secret. 
Because look-up secrets are generated using a random bit generator, the work factor for the password hashing scheme MAY be small.\n \n \n The verifier SHALL implement a rate-limiting mechanism that effectively limits the number of failed authentication attempts that can be made on the subscriber account, as described in Sec. 3.2.2.\n \n\n\nThe verifier SHALL use approved encryption and an authenticated protected channel when requesting look-up secrets.\n\nOut-of-Band Devices\n\nAn out-of-band authenticator is a physical device that is uniquely addressable and can communicate securely with the verifier over a distinct communications channel, referred to as the secondary channel. The device is possessed and controlled by the claimant and supports private communication over this secondary channel, which is separate from the primary channel for authentication. An out-of-band authenticator is “something you have.”\n\nOut-of-band authentication uses a short-term secret generated by the verifier. The secret securely binds the authentication operation on the primary and secondary channels and establishes the claimant’s control of the out-of-band device.\n\nOut-of-band authentication is not phishing-resistant.\n\nThe out-of-band authenticator can operate in one of the following ways:\n\n\n The claimant transfers a secret received by the out-of-band device via the secondary channel to the verifier using the primary channel. For example, the claimant may receive the secret (typically a 6-digit code) on their mobile device and type it into their authentication session. This method is shown in Fig. 2.\n\n\nFig. 2. Transfer of secret to primary device\n\n\n\n\n The claimant transfers a secret received via the primary channel to the out-of-band device for transmission to the verifier via the secondary channel. 
For example, the claimant may view the secret on their authentication session and either type it into an app on their mobile device or use a technology such as a barcode or QR code to effect the transfer. This method is shown in Fig. 3.\n\n\nFig. 3. Transfer of secret to out-of-band device\n\n\n\n\n A third method of out-of-band authentication compares secrets received from the primary and secondary channels and requests approval on the secondary channel. This method is no longer considered acceptable because it increased the likelihood that the subscriber would approve an authentication request without actually comparing the secrets as required. This has been observed with “authentication fatigue” attacks where an attacker (claimant) would generate many out-of-band authentication requests to the subscriber, who might approve one to eliminate the annoyance. For this reason, an authenticator that receives a push notification from the verifier and simply asks the claimant to approve the transaction (even if they provide some additional information about the authentication) does not meet the requirements of this section.\n\n\nOut-of-Band Authenticators\n\nThe out-of-band authenticator SHALL establish a separate channel with the verifier to retrieve the out-of-band secret or authentication request. This channel is considered to be out-of-band with respect to the primary communication channel (even if it terminates on the same device), provided that the device does not leak information from one channel to the other without the claimant’s authorization.\n\nThe out-of-band device SHOULD be uniquely addressable by the verifier. Communication over the secondary channel SHALL use approved encryption unless sent via the public switched telephone network (PSTN). For additional authenticator requirements that are specific to using the PSTN for out-of-band authentication, see Sec. 
3.1.3.3.\n\nEmail SHALL NOT be used for out-of-band authentication because it may be vulnerable to:\n\n\n Accessibility using only a password\n Interception in transit or at intermediate mail servers\n Rerouting attacks, such as those caused by DNS spoofing\n\n\nThe out-of-band authenticator SHALL uniquely authenticate itself in one of the following ways when communicating with the verifier:\n\n\n \n Using approved cryptography, establish a mutually authenticated protected channel (e.g., client-authenticated TLS) to the verifier. Communication between the out-of-band authenticator and the verifier MAY use a trusted intermediary service to which each authenticates. The key SHALL be provisioned in a mutually authenticated session during authenticator binding, as described in Sec. 4.1.\n \n \n Authenticate to a public mobile telephone network using a SIM card or equivalent secret that uniquely identifies the subscriber. This method SHALL only be used if a secret is sent from the verifier to the out-of-band device via the PSTN (SMS or voice) or an encrypted instant messaging service.\n \n \n Use a wired connection to the PSTN that the verifier can call and dictate the out-of-band secret. For purposes of this definition, “wired connection” includes services such as cable providers that offer PSTN services through other wired media and fiber via analog telephone adapters.\n \n\n\nFor single-factor out-of-band authenticators, if a secret is sent by the verifier to the out-of-band device, the device SHOULD NOT display the authentication secret while it is locked by the owner (i.e., the device SHOULD require the presentation and verification of a PIN, passcode, or biometric characteristic to view). 
However, authenticators SHOULD indicate the receipt of an authentication secret on a locked device.\n\nIf the out-of-band authenticator requests approval over the secondary communication channel rather than by presenting a secret that the claimant transfers to the primary communication channel, it SHALL accept a transfer of the secret from the primary channel and send it to the verifier over the secondary channel to associate the approval with the authentication transaction. The claimant MAY perform the transfer manually and with the assistance of a representation, such as a barcode or QR code.\n\nOut-of-Band Verifiers\n\nFor additional verification requirements that are specific to the PSTN, see Sec. 3.1.3.3.\n\nThe verifier waits for an authenticated protected channel to be established with the out-of-band authenticator and verifies its identifying key. The verifier SHALL NOT store the identifying key itself but SHALL use a verification method (e.g., an approved hash function or proof of possession of the identifying key) to uniquely identify the authenticator. Once authenticated, the verifier transmits the authentication secret to the authenticator. The connection with the out-of-band authenticator MAY be either manually initiated or prompted by a mechanism such as a push notification.\n\nDepending on the type of out-of-band authenticator, one of the following SHALL take place:\n\n\n Transfer of the secret from the secondary to the primary channel\n As shown in Fig. 2, the verifier MAY signal the device that contains the subscriber’s authenticator to indicate a readiness to authenticate. It SHALL then transmit a random secret to the out-of-band authenticator and wait for the secret to be returned via the primary communication channel.\n\n\n Transfer of the secret from the primary to the secondary channel\n As shown in Fig. 3, the verifier SHALL display a random authentication secret to the claimant via the primary channel.
It SHALL then wait for the secret to be returned via the secondary channel from the claimant’s out-of-band authenticator. The verifier MAY additionally display an address, such as a phone number or VoIP address, for the claimant to use in addressing its response to the verifier.\n\n\nIn all cases, the authentication SHALL be considered invalid unless completed within 10 minutes. Verifiers SHALL accept a given authentication secret as valid only once during the validity period to provide replay resistance, as described in Sec. 3.2.7.\n\nThe verifier SHALL generate random authentication secrets that are at least six decimal digits (or equivalent) in length using an approved random bit generator as described in Sec. 3.2.12. If the authentication secret is less than 64 bits long, the verifier SHALL implement a rate-limiting mechanism that effectively limits the total number of consecutive failed authentication attempts that can be made on the subscriber account as described in Sec. 3.2.2. Generating a new authentication secret SHALL NOT reset the failed authentication count.\n\nOut-of-band verifiers that send a push notification to a subscriber device SHOULD implement a reasonable limit on the rate or total number of push notifications that will be sent since the last successful authentication.\n\nAuthentication Using the Public Switched Telephone Network\n\nUse of the PSTN for out-of-band verification is restricted as described in this section and Sec. 3.2.9. Setting or changing the pre-registered telephone number is considered to be the binding of a new authenticator and SHALL only occur as described in Sec. 4.1.2.\n\nSome subscribers may be unable to use PSTN to deliver out-of-band authentication secrets in areas with limited telephone coverage (particularly without mobile phone service). 
Accordingly, verifiers SHALL ensure that alternative authenticator types are available to all subscribers and SHOULD remind subscribers of this limitation of PSTN out-of-band authenticators before binding one or more devices controlled by the subscriber.\n\nVerifiers SHOULD consider risk indicators (e.g., device swap, SIM change, number porting, or other abnormal behavior) before using the PSTN to deliver an out-of-band authentication secret.\n\n\n Consistent with the restriction of authenticators in Sec. 3.2.9, NIST may adjust the restricted status of out-of-band authentication using the PSTN based on the evolution of the threat landscape and the technical operation of the PSTN.\n\n\nMulti-Factor Out-of-Band Authenticators\n\nMulti-factor out-of-band authenticators operate similarly to single-factor out-of-band authenticators (see Sec. 3.1.3.1). However, they require the presentation and verification of an activation factor (i.e., a password or a biometric characteristic) before allowing the claimant to complete the authentication transaction (i.e., before accessing or entering the authentication secret as appropriate for the authentication flow being used). Each use of the authenticator SHALL require the presentation of the activation factor.\n\nAuthenticator activation secrets SHALL meet the requirements of Sec. 3.2.10. A biometric activation factor SHALL meet the requirements of Sec. 3.2.3, including limits on the number of consecutive authentication failures. The password or biometric sample used for activation and any biometric data derived from the biometric sample (e.g., a probe produced through signal processing) SHALL be zeroized (erased) immediately after an authentication operation.\n\nSingle-Factor OTP\n\nA single-factor OTP generates one-time passwords (OTPs). This category includes hardware devices and software-based OTP generators that are installed on devices such as mobile phones. 
These authenticators have an embedded secret that is used as the seed for generating OTPs and do not require activation through a second factor. The OTP is displayed on the authenticator and manually input for transmission to the verifier, thereby proving possession and control of the authenticator. A single-factor OTP authenticator is something you have.\n\nSingle-factor OTPs are similar to look-up secret authenticators except that the secrets are cryptographically and independently generated by the authenticator and the verifier and compared by the verifier. The secret is computed based on a nonce that may be time-based or from a counter on the authenticator and verifier.\n\nOTP authentication is not phishing-resistant. [FIPS140] validation of OTP authenticators and verifiers is not required.\n\nSingle-Factor OTP Authenticators\n\nSingle-factor OTP authenticators and verifiers contain two persistent values: 1) a symmetric key that persists for the authenticator’s lifetime and 2) a nonce that is either changed each time the authenticator is used or is based on a real-time clock.\n\nThe secret key and its algorithm SHALL provide at least the minimum security strength specified in the latest revision of [SP800-131A] (112 bits as of the date of this publication). The nonce SHALL be of sufficient length to ensure that it is unique for each operation of the authenticator over its lifetime. If a subscriber needs to change the device on which a software-based OTP authenticator resides, they SHOULD bind the authenticator application on the new device to their subscriber account, as described in Sec. 4.1.2, and invalidate the authenticator application that will no longer be used.\n\nThe authenticator output is obtained using an approved block cipher or hash function to securely combine the key and nonce. 
In coordination with the verifier, the authenticator MAY truncate its output to as few as six decimal digits or equivalent.\n\nIf the nonce used to generate the authenticator output is based on a real-time clock, the nonce SHALL be changed at least once every two minutes.\n\nSingle-Factor OTP Verifiers\n\nSingle-factor OTP verifiers effectively duplicate the process of generating the OTP used by the authenticator. As such, the symmetric keys used by authenticators are also present in the verifier and SHALL be strongly protected against unauthorized disclosure by access controls that limit access to the keys to only those software components that require access.\n\nWhen binding a single-factor OTP authenticator to a subscriber account, the verifier or associated CSP SHALL use approved cryptography for key establishment to generate and exchange keys or to obtain the secrets required to duplicate the authenticator output.\n\nThe verifier SHALL use approved encryption and an authenticated protected channel when collecting the OTP. Verifiers SHALL accept a given OTP only once while it is valid to provide replay resistance as described in Sec. 3.2.7. If a claimant’s authentication is denied due to the duplicate use of an OTP, verifiers MAY warn the claimant in case an attacker has been able to authenticate in advance. Verifiers MAY also warn a subscriber in an existing session of the attempted duplicate use of an OTP.\n\nThe verifier SHOULD implement or, if the authenticator output is less than 64 bits in length, SHALL implement a rate-limiting mechanism that effectively limits the number of failed authentication attempts that can be made on the subscriber account, as described in Sec. 3.2.2.\n\nMulti-Factor OTPs\n\nA multi-factor OTP generates one-time passwords for authentication following the input of an activation factor. This includes hardware devices and software-based OTP generators that are installed on mobile phones and similar devices.
The second authentication factor may be provided through an integral entry pad, an integral biometric (e.g., fingerprint) reader, or a direct computer interface (e.g., USB port). The OTP is displayed on the authenticator and manually input for transmission to the verifier. The multi-factor OTP authenticator is “something you have” activated by either “something you know” or “something you are.”\n\nOTP authentication is not phishing-resistant. [FIPS140] validation of OTP authenticators and verifiers is not required.\n\nMulti-Factor OTP Authenticators\n\nMulti-factor OTP authenticators operate similarly to single-factor OTP authenticators (see Sec. 3.1.4.1), except they require the presentation and verification of either a password or a biometric characteristic to obtain the OTP from the authenticator. Each use of the authenticator SHALL require the input of the activation factor.\n\nIn addition to activation information, multi-factor OTP authenticators and verifiers contain two persistent values: 1) a symmetric key that persists for the authenticator’s lifetime and 2) a nonce that is either changed each time the authenticator is used or based on a real-time clock.\n\nThe secret key and its algorithm SHALL provide at least the minimum security strength specified in the latest revision of [SP800-131A] (112 bits as of the date of this publication). The nonce SHALL be of sufficient length to ensure that it is unique for each operation of the authenticator over its lifetime. If a subscriber needs to change the device on which a software-based OTP authenticator resides, they SHOULD bind the authenticator application on the new device to their subscriber account, as described in Sec. 4.1.2, and invalidate the authenticator application that will no longer be used.\n\nThe authenticator output is obtained using an approved block cipher or hash function to securely combine the key and nonce. 
In coordination with the verifier, the authenticator MAY truncate its output to as few as six decimal digits or equivalent.\n\nIf the nonce used to generate the authenticator output is based on a real-time clock, the nonce SHALL be changed at least once every two minutes.\n\nAuthenticator activation secrets SHALL meet the requirements of Sec. 3.2.10. A biometric activation factor SHALL meet the requirements of Sec. 3.2.3, including limits on the number of consecutive authentication failures. The unencrypted key and activation secret or biometric sample and any biometric data derived from the biometric sample (e.g., a probe produced through signal processing) SHALL be zeroized (erased) immediately after an OTP has been generated.\n\nMulti-Factor OTP Verifiers\n\nMulti-factor OTP verifiers effectively duplicate the process of generating the OTP used by the authenticator without requiring a second authentication factor. As such, the symmetric keys used by authenticators SHALL be strongly protected against unauthorized disclosure by access controls that limit access to the keys to only those software components that require access.\n\nWhen binding a multi-factor OTP authenticator to a subscriber account, the verifier or associated CSP SHALL use approved cryptography for key establishment to generate and exchange keys or to obtain the secrets required to duplicate the authenticator output.\n\nThe verifier SHALL use approved encryption and an authenticated protected channel when collecting the OTP. Verifiers SHALL accept a given OTP only once while it is valid to provide replay resistance as described in Sec. 3.2.7. If a claimant’s authentication is denied due to the duplicate use of an OTP, verifiers MAY warn the claimant in case an attacker has been able to authenticate in advance.
Verifiers MAY also warn a subscriber in an existing session of the attempted duplicate use of an OTP.\n\nTime-based OTPs [TOTP] SHALL have a defined lifetime that is determined by the expected clock drift in either direction of the authenticator over its lifetime plus an allowance for network delay and user entry of the OTP.\n\nThe verifier SHALL implement a rate-limiting mechanism that effectively limits the number of consecutive failed authentication attempts that can be made on the subscriber account, as required by Sec. 3.2.10.\n\nSingle-Factor Cryptographic Authentication\n\nSingle-factor cryptographic authentication is accomplished by proving the possession and control of a cryptographic key via an authentication protocol. Depending on the strength of authentication required, the private or symmetric key may be stored in a manner that is accessible to the endpoint being authenticated or in a separate, directly connected processor or device from which the key cannot be exported. The authenticator output is highly dependent on the specific cryptographic protocol used but is generally some type of signed message. A single-factor cryptographic authenticator is “something you have.”\n\nCryptographic authentication is phishing-resistant if it meets the additional requirements in Sec. 3.2.5.\n\nSingle-Factor Cryptographic Authenticators\n\nSingle-factor cryptographic authenticators encapsulate one or more private or symmetric keys. The key SHOULD be stored in appropriate storage available to the authenticator (e.g., keychain storage), or if the key is to be non-exportable, it SHALL be stored in an isolated execution environment protected by hardware or in a separate processor with a controlled interface to the central processing unit of the user endpoint. 
If they are accessible to the endpoint being authenticated, the private or symmetric keys SHALL be strongly protected against unauthorized disclosure by using access controls that limit access to the key to only those software components that require access.\n\nExternal (i.e., non-embedded) cryptographic authenticators SHALL meet the requirements for connected authenticators in Sec. 3.2.11.\n\nAs required by Sec. 2.3.2, single-factor cryptographic authenticators that are being used at AAL3 must meet the authentication intent requirements of Sec. 3.2.8.\n\nSingle-Factor Cryptographic Verifiers\n\nSingle-factor cryptographic verifiers generate a challenge nonce, send it to the corresponding authenticator, and use the authenticator output to verify possession of the authenticator. The authenticator output is highly dependent on the specific cryptographic authenticator and protocol used but is generally some type of signed message.\n\nThe verifier has either a symmetric or an asymmetric public cryptographic key that corresponds to each authenticator. While both types of keys SHALL be protected against modification, symmetric keys SHALL additionally be protected against unauthorized disclosure by access controls that limit access to the key to only those software components that require access.\n\nThe secret or symmetric key and its algorithm SHALL provide at least the minimum security strength specified in the latest revision of [SP800-131A] (112 bits as of the date of this publication). The challenge nonce SHALL be at least 64 bits in length and SHALL either be unique over the authenticator’s lifetime or statistically unique (i.e., generated using an approved random bit generator, as described in Sec. 3.2.12). 
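For the symmetric-key case, the challenge-response exchange might be sketched as follows. HMAC-SHA-256 stands in for the approved cryptographic operation, and Python's `secrets` CSPRNG for an approved random bit generator; function names are illustrative.

```python
import hashlib
import hmac
import secrets

def issue_challenge() -> bytes:
    # Statistically unique nonce, well above the 64-bit minimum,
    # from a CSPRNG (stand-in for an approved random bit generator).
    return secrets.token_bytes(16)

def authenticator_respond(key: bytes, nonce: bytes) -> bytes:
    # The authenticator combines its symmetric key with the challenge
    # to produce the authenticator output.
    return hmac.new(key, nonce, hashlib.sha256).digest()

def verifier_check(key: bytes, nonce: bytes, output: bytes) -> bool:
    # The verifier holds the same symmetric key and recomputes the
    # expected output, comparing in constant time.
    expected = hmac.new(key, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, output)
```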
The verification operation SHALL use approved cryptography.\n\nMulti-Factor Cryptographic Authentication\n\nMulti-factor cryptographic authentication uses an authentication protocol to prove possession and control of a cryptographic key that requires activation through a second authentication factor. Depending on the strength of authentication needed, the private or symmetric key may be stored in a manner accessible to the endpoint being authenticated or in a separate, directly connected processor or device from which the key cannot be exported. The authenticator output is highly dependent on the specific cryptographic protocol used but is generally some type of signed message. A multi-factor cryptographic authenticator is “something you have” and is activated by an activation factor representing either “something you know” or “something you are.”\n\nCryptographic authentication is phishing-resistant if it meets the additional requirements in Sec. 3.2.5.\n\nMulti-Factor Cryptographic Authenticators\n\nMulti-factor cryptographic authenticators encapsulate one or more private or symmetric keys that SHALL only be accessible through the presentation and verification of an activation factor (i.e., a password or a biometric characteristic). The key SHOULD be stored in appropriate storage available to the authenticator (e.g., keychain storage), or if the key is to be non-exportable, it SHALL be stored in an isolated execution environment protected by hardware or in a separate processor with a controlled interface to the central processing unit of the user endpoint. If accessible to the endpoint being authenticated, the key SHALL be strongly protected against unauthorized disclosure by using access controls that limit access to the key to only those software components that require access.\n\nSome cryptographic authenticators, referred to as “syncable authenticators,” can manage their private keys using a sync fabric (cloud provider). 
Additional requirements for using syncable authenticators are in Appendix B.\n\nExternal (non-embedded) cryptographic authenticators SHALL meet the requirements for connected authenticators in Sec. 3.2.11.\n\nEach authentication operation that uses the authenticator SHALL require the activation factor to be input.\n\nAuthenticator activation secrets SHALL meet the requirements of Sec. 3.2.10. A biometric activation factor SHALL meet the requirements of Sec. 3.2.3, including limits on the number of consecutive authentication failures.\n\nThe activation secret or biometric sample and any biometric data derived from the biometric sample (e.g., a probe produced through signal processing) SHALL be zeroized (erased) after an authentication transaction.\n\nMulti-Factor Cryptographic Verifiers\n\nThe requirements for a multi-factor cryptographic verifier are identical to those for a single-factor cryptographic verifier, as described in Sec. 3.1.6.2. Verification of the output from a multi-factor cryptographic authenticator proves that the activation factor was used.\n\nUsage With Subscriber-Controlled Wallets\n\nA special-case usage of multi-factor cryptographic authentication is with subscriber-controlled wallets, described in Sec. 5 of [SP800-63C]. After the claimant first unlocks the wallet using an activation factor, the authentication process uses a federation protocol, as detailed in [SP800-63C]. The assertion contents and presentation requirements of the federation protocol provide the security characteristics required of cryptographic authenticators. As such, subscriber-controlled wallets can be considered multi-factor authenticators through the activation factor and the presentation and validation of an assertion generated by the wallet.\n\nAccess to the private key SHALL require an activation factor. Authenticator activation secrets SHALL meet the requirements of Sec. 3.2.10. Biometric activation factors SHALL meet the requirements of Sec. 
3.2.3, including limits on the number of consecutive authentication failures. The password or biometric sample used for activation and any biometric data derived from the biometric sample SHALL be zeroized (erased) immediately after an authentication transaction.\n\nAuthentication processes using subscriber-controlled wallets SHALL be used with a federation process as detailed in Sec. 5 of [SP800-63C]. Signed audience-restricted assertions generated by subscriber-controlled wallets are considered phishing-resistant because they prevent an assertion presented to an impostor RP from being used by the legitimate one. Assertions that lack a valid signature from the wallet or an audience restriction SHALL NOT be considered phishing-resistant.\n\nSyncable Authenticators\n\nSome multi-factor cryptographic authenticators allow the subscriber to copy (clone) the authentication secret to additional devices, usually via a sync fabric. This eases the burden for subscribers who want to use additional devices to authenticate. Specific requirements for syncable authenticators and the sync fabric are given in Appendix B.\n\nGeneral Authenticator Requirements\n\nThe following requirements apply to all types of authentication.\n\nPhysical Authenticators\n\nCSPs SHALL provide subscriber instructions for appropriately protecting the authenticator against theft or loss. The CSP SHALL provide a mechanism to invalidate the authenticator immediately upon notification from a subscriber that the authenticator’s loss, theft, or compromise is suspected.\n\nPossession and control of a physical authenticator are based on proof of possession of a secret associated with the authenticator. When an embedded secret (typically a certificate and associated private key) is in the endpoint, its “device identity” can be considered a physical authenticator. However, this requires a secure authentication protocol that is appropriate for the AAL being authenticated.
Browser cookies do not satisfy this requirement except at AAL1 or as short-term secrets for session maintenance (not authentication) as described in Sec. 5.1.1.\n\nRate Limiting (Throttling)\n\nWhen required by the authenticator type descriptions in Sec. 3.1, the verifier SHALL implement controls to protect against online guessing attacks. Unless otherwise specified in the description of a given authenticator, the verifier SHALL limit consecutive failed authentication attempts using one or more specific authenticators on a single subscriber account to no more than 100.\n\n\n The limit of 100 attempts is an upper bound; agencies MAY impose lower limits. The limit of 100 was chosen to balance the likelihood of a correct guess (e.g., 100 attempts against a six-digit decimal OTP authenticator output) versus the potential need for account recovery when the limit is exceeded.\n\n\nAdditional techniques MAY be used to reduce the likelihood that an attacker will lock the legitimate claimant out due to rate limiting. 
These include:\n\n\n \n Requiring the claimant to complete a bot-detection and mitigation challenge before attempting authentication\n \n \n Requiring the claimant to wait after a failed attempt for a period of time that increases as the subscriber account approaches its maximum allowance for consecutive failed attempts (e.g., 30 seconds up to an hour)\n \n \n Accepting only authentication requests from IP addresses from which the subscriber has been successfully authenticated before\n \n \n Leveraging other risk-based or adaptive authentication techniques to identify user behavior that falls within or outside typical norms (e.g., the use of the claimant’s IP address, geolocation, timing of request patterns, or browser metadata)\n \n\n\nWhen the subscriber successfully authenticates, the verifier SHOULD disregard any previous failed attempts for the authenticators used in the successful authentication.\n\nFollowing successful authentication at a given AAL, the verifier SHOULD reset the retry count of an authenticator that has been locked out due to excessive retries. If this reset capability is provided, the maximum AAL of the authenticator being reset SHALL NOT exceed the AAL of the session from which it is being reset. If the subscriber cannot authenticate at the required AAL, the account recovery procedures in Sec. 4.2 SHALL be used.\n\nUse of Biometrics\n\nThe use of biometrics (i.e., something you are) in authentication includes both the measurement of physical characteristics (e.g., fingerprint, iris, facial characteristics) and behavioral characteristics (e.g., typing cadence). Both classes are considered biometric modalities, although modalities may differ in the extent to which they establish authentication intent as described in Sec. 3.2.8.\n\nFor a variety of reasons, this document supports only a limited use of biometrics for authentication. 
These reasons include:\n\n\n The biometric false match rate (FMR) does not provide sufficient confidence in the subscriber’s authentication by itself. In addition, FMR does not account for spoofing attacks.\n Biometric comparison is probabilistic, whereas the other authentication factors are deterministic.\n Biometric template protection schemes provide a method for revoking biometric characteristics comparable to other authentication factors (e.g., PKI certificates and passwords). However, the availability of such solutions is limited.\n Biometric characteristics do not constitute secrets. They can often be obtained online or, in the case of a facial image, by taking a picture of someone with or without their knowledge. Latent fingerprints can be lifted from objects someone touches, and iris patterns can be captured with high-resolution images. While presentation attack detection (PAD) technologies can mitigate the risk of these types of attacks, additional trust in the sensor or biometric processing is required to ensure that PAD is operating in accordance with the needs of the CSP and the subscriber.\n\n\nTherefore, the limited use of biometrics for authentication is supported with the following requirements and guidelines.\n\nBiometrics SHALL be used only as part of multi-factor authentication with a physical authenticator (i.e., “something you have”). The biometric characteristic SHALL be presented and compared for each authentication operation. An alternative non-biometric authentication option SHALL always be provided to the subscriber. Biometric data SHALL be treated and secured as sensitive PII.\n\nThe biometric system SHALL operate with an FMR [ISO/IEC2382-37] of one in 10000 or better. This FMR SHALL be achieved under the conditions of a conformant attack (i.e., zero-effort impostor attempt) as defined in [ISO/IEC30107-1]. The biometric system SHOULD demonstrate a false non-match rate (FNMR) of less than 5 %. 
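A quick sketch of why the FMR operating point is paired with strict attempt limits: the cumulative chance of at least one zero-effort false match grows with each allowed attempt. The 1-in-10000 FMR and the attempt counts of 5 and 10 come from this section's requirements; the arithmetic and function name are illustrative only.

```python
# Sketch: cumulative probability that at least one zero-effort impostor
# attempt is falsely matched, given a per-attempt FMR of 1 in 10000.
# FMR value and attempt limits are from the text; the math is illustrative.

FMR = 1 / 10_000  # required operating point or better

def false_match_probability(attempts: int, fmr: float = FMR) -> float:
    """P(at least one false match) = 1 - (1 - fmr)^attempts."""
    return 1 - (1 - fmr) ** attempts

# 5 consecutive attempts (no PAD) vs. 10 (with PAD), per the limits
# given later in this section: both stay around or below 1 in 1000.
assert false_match_probability(5) < 0.001
assert false_match_probability(10) < 0.0011
```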
Biometric performance SHALL be tested in accordance with [ISO/IEC19795-1].\n\nBiometric authentication technologies SHALL provide similar performance for subscribers of different demographic types (e.g., racial background, gender, ethnicity).\n\nThe biometric system SHOULD implement PAD. Testing the biometric system for deployment SHOULD demonstrate an impostor attack presentation accept rate (IAPAR) of less than 0.15. Presentation attack resistance SHALL be tested in accordance with Clause 13 of [ISO/IEC30107-3]. The PAD decision MAY be made either locally on the claimant’s device or by a central verifier.\n\nThe biometric system SHALL allow no more than five consecutive failed authentication attempts or 10 consecutive failed attempts if PAD is implemented and meets the above requirements. Once that limit has been reached, the biometric authenticator SHALL impose a delay of at least 30 seconds before each subsequent attempt, with an overall limit of no more than 50 consecutive failed authentication attempts or 100 if PAD is implemented due to the mitigation of presentation attacks. Once the overall limit is reached, the biometric system SHALL disable biometric user authentication and offer another factor (e.g., a different biometric modality or an activation secret if it is not a required factor) if such an alternative method is already available. These limits are upper bounds, and agencies MAY make risk-based decisions to impose lower limits.\n\nThe verifier SHOULD determine the performance and integrity of the sensor and its associated endpoint. Acceptable methods for making this determination include but are not limited to:\n\n\n Use of a known sensor, as determined by sensor authentication\n First- or third-party testing against biometric performance standards\n Runtime interrogation of signed metadata (e.g., attestation), as described in Sec. 
3.2.4\n\n\nBiometric comparison can be performed locally on a device being used by the claimant or at a central verifier. Since the potential for attacks on a larger scale is greater at central verifiers, comparison SHOULD be performed locally.\n\nThe presentation of a biometric factor for authenticator activation SHALL be a separate operation from unlocking the host device (e.g., smartphone). However, the same activation factor used to unlock the host device MAY be used in the authentication operation. Agencies MAY lower this requirement for authenticators that are managed by or on behalf of the CSP (e.g., via mobile device management) and constrained to have short agency-determined inactivity timeouts and biometric systems that meet the above requirements.\n\nIf the comparison is performed centrally:\n\n\n The sensor or endpoint SHALL be authenticated before capturing the biometric sample from the claimant. The verifier MAY limit the use of the centrally stored biometric template to particular sensors or sensor classes (e.g., sensor manufacturers or models).\n Appropriate controls (e.g., encryption and access controls) for sensitive PII SHALL be implemented.\n An authenticated protected channel between the sensor (or an endpoint containing a sensor that resists sensor replacement) and the verifier SHALL be established. All transmission of biometric information SHALL be conducted over that authenticated protected channel.\n\n\nBiometric samples collected in the authentication process MAY be used to train comparison algorithms (e.g., updating templates to address changes in subscriber characteristics) or — with subscriber consent — for other research purposes. 
Biometric samples and any biometric data derived from the biometric sample SHALL be zeroized (erased) immediately after any training or research data has been derived.\n\nAttestation\n\nThe CSP needs to have a reliable basis for evaluating the characteristics of the authenticator, such as the inclusion of a signed attestation. An attestation is information conveyed to the CSP, generally when an authenticator is bound, regarding a connected authenticator or the endpoint involved in an authentication operation. Information conveyed by attestation MAY include but is not limited to:\n\n\n The provenance (e.g., manufacturer or supplier certification), health, and integrity of the authenticator and endpoint\n Security features of the authenticator\n Security and performance characteristics of biometric sensors\n Sensor modality\n\n\nAttestations SHALL be signed using a digital signature that provides at least the minimum security strength specified in the latest revision of [SP800-131A] (112 bits as of the date of this publication).\n\nVerifiers in federal enterprise systems2 SHOULD use attestation features to verify the capabilities and source of authenticators. In other applications, attestation information MAY be used as part of a verifier’s risk-based authentication decisions.\n\nPhishing (Verifier Impersonation) Resistance\n\nPhishing attacks, previously referred to in SP 800-63B as “verifier impersonation,” are attempts by fraudulent verifiers and RPs to fool an unwary claimant into presenting an authenticator to an impostor. In some prior versions of SP 800-63, protocols resistant to phishing attacks were also referred to as “strongly MitM-resistant.”\n\nThe term phishing is widely used to describe a variety of similar attacks. 
In this document, phishing resistance is the ability of the authentication protocol to prevent the disclosure of authentication secrets and valid authenticator outputs to an impostor verifier without relying on the vigilance of the claimant. How the claimant is directed to the impostor verifier is not relevant. For example, regardless of whether the claimant was directed there via search engine optimization or prompted by email, it is considered to be a phishing attack.\n\nApproved cryptographic algorithms SHALL be used to establish phishing resistance where required. Keys used for this purpose SHALL provide at least the minimum security strength specified in the latest revision of [SP800-131A] (112 bits as of the date of this publication).\n\nPhishing resistance requires single- or multi-factor cryptographic authentication. Authenticators that involve the manual entry of an authenticator output (e.g., out-of-band and OTP authenticators) are not phishing-resistant because the manual entry does not bind the authenticator output to the specific session being authenticated. For example, an impostor verifier could relay an authenticator output to the verifier and successfully authenticate.\n\nTwo methods of phishing resistance are recognized: channel binding and verifier name binding. Channel binding is considered more secure than verifier name binding because it is not vulnerable to the misissuance or misappropriation of verifier certificates, but both methods satisfy the requirements for phishing resistance.\n\nChannel Binding\n\nAn authentication protocol with channel binding SHALL be used to establish an authenticated protected channel with the verifier. The protocol SHALL then strongly and irreversibly bind a channel identifier negotiated in establishing the authenticated protected channel to the authenticator output (e.g., by signing the two values together using a private key controlled by the claimant for which the public key is known to the verifier). 
The verifier SHALL validate the signature or other information used to prove phishing resistance. This prevents an impostor verifier — even one that has obtained a certificate representing the actual verifier — from successfully relaying that authentication on a different authenticated protected channel.\n\nAn example of a phishing-resistant authentication protocol that uses channel binding is client-authenticated TLS [TLS] because the client signs the authenticator output along with earlier messages from the protocol that are unique to the particular TLS connection being negotiated.\n\nVerifier Name Binding\n\nAn authentication protocol with verifier name binding SHALL be used to establish an authenticated protected channel with the verifier. The protocol SHALL then generate an authenticator output that is cryptographically bound to a verifier identifier that is authenticated as part of the protocol. In the case of domain name system (DNS) identifiers, the verifier identifier SHALL be either the authenticated hostname of the verifier or a parent domain that is at least one level below the public suffix [PSL] associated with that hostname. The binding MAY be established by choosing an associated authenticator secret, deriving an authenticator secret using the verifier identifier, cryptographically signing the authenticator output with the verifier identifier, or using similar cryptographically secure means.\n\nW3C WebAuthn [WebAuthn], which is used by authenticators that implement the FIDO2 specifications [FIDO2], is an example of a standard that provides phishing resistance through verifier name binding.\n\nVerifier-CSP Communications\n\nIf the verifier and CSP are separate entities (as shown by the dotted line in Fig. 
3 of [SP800-63]), communications between the verifier and CSP SHALL occur through a mutually authenticated secure channel (e.g., a client-authenticated TLS connection) using approved cryptography.\n\nReplay Resistance\n\nAn authentication process resists replay attacks if it is impractical to achieve a successful authentication by recording and replaying a previous authentication message. Replay resistance is in addition to the replay-resistant nature of authenticated protected channel protocols since the output could be stolen before entry into the protected channel. Protocols that use nonces or challenges to prove the “freshness” of the transaction are resistant to replay attacks since the verifier will easily detect when old protocol messages are replayed because they will not contain the appropriate nonces or timeliness data.\n\nExamples of replay-resistant authenticators include OTP authenticators, cryptographic authenticators, and look-up secrets.\n\nIn contrast, passwords are not considered replay-resistant because the authenticator output — the secret itself — is provided for each authentication.\n\nAuthentication Intent\n\nAn authentication process demonstrates intent if it requires the claimant to respond explicitly to each authentication or reauthentication request. The goal of authentication intent is to make it more difficult for authenticators (e.g., multi-factor cryptographic authenticators) to be used without the claimant’s knowledge, such as by malware on the endpoint. The authenticator itself SHALL establish authentication intent, although multi-factor cryptographic authenticators MAY establish intent by reentry of the activation factor for the authenticator.\n\nAuthentication intent MAY be established in several ways. Authentication processes that require the claimant’s intervention can be used to prove intent (e.g., a claimant entering an authenticator output from an OTP authenticator). 
Cryptographic authenticators that require user action for each authentication or reauthentication operation can also be used to establish intent (e.g., by pushing a button or reinsertion).\n\nThe presentation of biometric characteristics does not always establish authentication intent. For example, using a front-facing camera on a mobile phone to capture a face biometric does not constitute intent, as it can be reasonably expected to capture a face image while the device is used for other non-authentication purposes. In these scenarios, an explicit mechanism (e.g., tapping a software or physical button) SHALL be provided to establish authentication intent.\n\nRestricted Authenticators\n\nAs threats evolve, authenticators’ ability to resist attacks typically degrades. Conversely, the performance of some authenticators may improve, such as when changes to their underlying standards increase their ability to resist particular attacks.\n\nTo account for these changes in authenticator performance, NIST places additional restrictions on authenticator types or specific classes or instantiations of an authenticator type. Although they represent a less secure approach to multi-factor authentication, restricted authenticators remain necessary for some government-to-public applications.\n\nThe acceptance of a restricted authenticator requires the implementing organization to assess, understand, and accept the risks associated with that authenticator and acknowledge that risks will likely increase over time. It is the RP’s responsibility to determine the level of acceptable risk for their systems and associated data, to define any methods for mitigating excessive risks, and to communicate those determinations to the verifier. 
If the RP determines that the risk to any party is unacceptable, the restricted authenticator SHALL NOT be used, and an alternative authenticator type SHALL be used.\n\nFurthermore, the risk of an authentication error is typically borne by multiple parties, including the implementing organization, organizations that rely on the authentication decision, and the subscriber. Because the subscriber may be exposed to additional risks when an organization accepts a restricted authenticator and the subscriber may have a limited understanding of and ability to control that risk, the CSP SHALL do all of the following:\n\n\n \n Offer subscribers at least one alternative authenticator that is not restricted and can be used to authenticate at the required AAL\n \n \n Provide subscribers with meaningful notice regarding the restricted authenticator’s security risks and the availability of unrestricted alternatives\n \n \n Address any additional risks to subscribers and RPs in its risk assessment\n \n \n Develop a migration plan for the possibility that the restricted authenticator is no longer acceptable in the future and include this migration plan in its Digital Identity Acceptance Statement (see Sec. 3.4.4 of [SP800-63])\n \n\n\nActivation Secrets\n\nA password used locally as an activation factor for a multi-factor authenticator is referred to as an activation secret. An activation secret is used to obtain access to a stored authentication key. In all cases, the activation secret SHALL remain within the authenticator and its associated user endpoint.\n\nAuthenticators that use activation secrets SHALL require the secrets to be at least four characters in length and SHOULD require the secrets to be at least six characters in length. Activation secrets MAY be entirely numeric (i.e., a PIN). If alphanumeric values are permitted, all printing ASCII [RFC20] characters and the space character SHOULD be allowed. 
Unicode [ISO/IEC 10646] characters SHOULD also be permitted in alphanumeric secrets. The authenticator or its management tools SHOULD implement a blocklist to discourage users from selecting commonly used activation secrets (e.g., 123456).\n\nThe authenticator or verifier SHALL implement a retry-limiting mechanism that limits the number of consecutive failed activation attempts using the authenticator to no more than 10. If an incorrect activation secret entry causes the authenticator to provide an invalid output to the central verifier, the verifier MAY implement this retry-limiting mechanism. Otherwise, retry limiting SHALL be implemented in the authenticator. Once the limit of attempts is reached, the authenticator SHALL be disabled, and a different authenticator SHALL be required for authentication.\n\nFor authenticators that are usable at AAL3, verification of activation secrets SHALL be performed in a hardware-protected environment (e.g., a secure element, TPM, or TEE). At AAL2, if a hardware-protected environment is not used, the authenticator SHALL use the activation secret to derive a key used to decrypt the authentication key.\n\nSubmitting the activation factor SHALL be a separate operation from unlocking the host device (e.g., smartphone). However, the same activation factor used to unlock the host device MAY be used in the authentication operation. Agencies MAY lower this requirement for authenticators that are managed by or on behalf of the CSP (e.g., via mobile device management) and that are constrained to have short agency-determined inactivity timeouts and device activation factors that meet the corresponding requirements in this section.\n\nConnected Authenticators\n\nCryptographic authenticators require a trustworthy connection between the authenticator and the endpoint being authenticated that provides resistance to eavesdropping, injection, and relay attacks. 
This connection SHALL be made using a wired connection (e.g., USB or direct connection with a smartcard), a wireless technology, or a hybrid of those technologies, including network connections.\n\nApproved cryptography SHALL be used for all cases in which cryptographic operations are required. All communication of authentication data between authenticators and endpoints SHALL occur directly between those devices or through an authenticated protected channel between the authenticator and endpoint.\n\nWired Connections\n\nWired connections, including those with embedded authenticators, MAY be assumed to be trustworthy because their attack surface is minimal. Claimants SHOULD be advised to use trusted hardware (e.g., cables, adapters, etc.) to ensure that they have not been compromised.\n\nWireless and Hybrid Connections\n\nWireless and network-based authenticator connections are potentially vulnerable to threats, including eavesdropping, injection, and relay attacks. The potential for such attacks on wireless connections depends on the technology’s effective range. To minimize the attack surface for threats to the authenticator-endpoint connection, the authentication process SHALL require physical proximity between the authenticator and endpoint by establishing a wireless connection with a range of no more than 200 meters.\n\nWireless and hybrid connections SHALL establish a key for encrypted communication between the authenticator and endpoint in one of the following ways:\n\n\n \n Through a temporary wired connection between the devices.\n \n \n Through an association process (similar to a pairing process but not requiring a persistent relationship between devices) to establish a key for encrypted communication between the authenticator and endpoint. The association process SHALL employ a pairing code3 or other shared secret between the devices. Either the authenticator or endpoint SHALL have a pairing code that MAY be printed on the device. 
The pairing code SHALL be at least six decimal digits (or equivalent) in length. It SHALL be conveyed between the devices by manual entry or using a QR code or similar representation that is optically communicated.\n \n\n\nWhen using a wireless technology with an effective range of less than 1 meter (e.g., NFC), any activation secret transmitted from the endpoint to the authenticator SHALL be encrypted using a key established between the devices. An authenticated connection SHOULD be used. A pairing code SHALL be used if the authenticator is configured to require authenticated pairing.\n\n\n Encrypting only the activation secret and not the entire authentication transaction may expose sensitive information, such as the identity of the RP, although this would require the attacker to be very close to the subscriber. Special care should be taken with authenticators that contain PII and that do not require authenticated pairing. Encryption SHOULD be used to protect that information against “skimming” and eavesdropping attacks.\n\n\nWireless technologies with an effective range of 1 meter or more (e.g., Bluetooth LE) and network connections SHALL use an authenticated encrypted connection between the authenticator and endpoint. The entire authentication transaction SHALL be encrypted. Examples of this include the pairing code used with the virtual contact interface specified in [SP800-73] and the hybrid transport specified by the [CTAP2.2] protocol.\n\nThe key established by the association process may be either temporary (i.e., valid for a limited number of transactions or time-limited) or persistent. A mechanism for endpoints to remove persistent keys SHALL be provided.\n\nRandom Values\n\nRandom values are extensively used in authentication processes, such as nonces and authentication secrets. 
Unless otherwise specified, random values that reference this section SHALL be generated by an approved random bit generator [RBG]4 that provides at least the minimum security strength specified in the latest revision of [SP800-131A] (112 bits as of the date of this publication).\n\nExportability\n\nExportability is the ability of an authenticator to share its authentication secret (either a private or symmetric key) with another endpoint or authenticator. Generally, endpoints with access to the authentication secret are considered exportable since software (perhaps malware) on the endpoint could access and leak the authentication secret. Non-exportable authenticators are considered more secure, and accordingly, a non-exportable cryptographic authenticator is required at AAL3. Syncable authenticators are inherently exportable (see Appendix B).\n\nTo be considered non-exportable, an authenticator SHALL be either a separate piece of hardware or an embedded processor or execution environment (e.g., secure element, TEE, or trusted platform module). These hardware authenticators and embedded processors are separate from a host processor, such as the CPU on a laptop or mobile device. A non-exportable authenticator SHALL be designed to prohibit the export of the authentication secret to the host processor and SHALL NOT be capable of being reprogrammed by the host processor to allow the secret to be extracted. The authenticator is subject to applicable [FIPS140] requirements of the AAL at which the authenticator is being used, including applicable tamper resistance requirements.\n\n\n \n \n Invalidation can take several forms, including revocation of a PKI-based authenticator and removal from the subscriber account. ↩\n \n \n Federal enterprise systems include those considered in scope for PIV guidance, such as government contractors, government employees, and mission partners. It does not include government-to-consumer or public-facing use cases. 
↩\n \n \n As used in this section, the term pairing code does not imply that a persistent pairing process (e.g., Bluetooth) is necessarily used. ↩\n \n \n Detailed information on generating random values may be found in the NIST SP 800-90 document suite comprising [SP800-90A], [SP800-90B], and [SP800-90C]. ↩\n \n \n\n"
} ,
{
"title" : "Authenticator Event Management",
"category" : "",
"tags" : "",
"url" : "/800-63-4/sp800-63b/events/",
"date" : "2024-08-28 20:39:12 -0500",
"content" : "Authenticator Event Management\n\nThis section is normative.\n\nEvents can occur over the lifetime of a subscriber’s authenticator and affect its use. These events include binding, maintenance, loss, theft, compromise, unauthorized duplication, expiration, and revocation. This section describes the actions to be taken in response to those events.\n\nAuthenticator Binding\n\nAuthenticator binding refers to establishing an association between a specific authenticator and a subscriber account to enable the authenticator to authenticate for that subscriber account, possibly in conjunction with other authenticators.\n\nAuthenticators SHALL be bound to subscriber accounts by either:\n\n\n Being issued by the CSP as part of enrollment or\n Using a subscriber-provided authenticator that is acceptable to the CSP.\n\n\nThe SP 800-63 suite of guidelines refers to the binding rather than the issuance of authenticators to accommodate both options.\n\nThroughout the lifetime of a digital identity, CSPs SHALL maintain a record of all authenticators that are or have ever been bound to each subscriber account. The CSP SHALL determine the characteristics of the authenticator being bound (e.g., single-factor versus multi-factor, phishing-resistant or not) so that verifiers can assess compliance with the requirements at each AAL. This determination MAY be based on strong evidence (e.g., authenticator attestation), direct information from having issued the authenticator, or typical characteristics of authenticator implementations (e.g., whether a user verification bit is set).\n\nThe CSP SHALL also maintain other state information required to meet the authenticator verification requirements. For example, the throttling of authentication attempts described in Sec. 
3.2.2 requires the CSP or verifier to maintain state information on recent failed authentication attempts, except for activation factors verified at the authenticator.\n\nThe record created by the CSP SHALL contain the date and time of significant authenticator life cycle events (e.g., binding to the subscriber account, renewal, update, expiration). The record SHOULD include information about the source of the binding (e.g., IP address, device identifier) of any device associated with the event.\n\nAs part of the binding process, the CSP MAY require additional information about the new authenticator or its associated endpoint to determine whether it is suitable for the requested AAL.\n\nBinding at Enrollment\n\nBinding at the time of enrollment is considered to be part of the enrollment process and is discussed in [SP800-63A].\n\nPost-Enrollment Binding\n\nBinding an Additional Authenticator\n\nTo minimize the need for account recovery, CSPs and verifiers SHOULD encourage subscribers to maintain at least two separate means of authentication. For example, a subscriber who usually uses an OTP authenticator as a physical authenticator MAY also be issued look-up secret authenticators or register a device for out-of-band authentication to be used if the physical authenticator is lost, stolen, or damaged. See Sec. 4.2 for more information on replacing passwords.\n\nAccordingly, CSPs SHOULD permit the binding of multiple authenticators to a subscriber account. When any new authenticator is bound to a subscriber account, the CSP SHALL ensure that the process requires authentication at either the maximum AAL currently available in the subscriber account or the maximum AAL at which the new authenticator will be used, whichever is lower. For example, binding an authenticator that is suitable for use at AAL2 requires authentication at AAL2 unless the subscriber account currently has only AAL1 authentication capability. 
When an authenticator is added, the CSP SHALL notify the subscriber via a mechanism independent of the transaction binding the new authenticator, as described in Sec. 4.6.\n\nExternal Authenticator Binding\n\nExternal authenticator binding refers to binding an authenticator to a subscriber account when it is not connected to or embedded in the authenticated endpoint. This process is typically used when adding authenticators that are embedded in a new endpoint or when connectivity limitations prevent the newly bound authenticator from being connected to an authenticated endpoint.\n\nThe binding process SHALL proceed in one of the following ways:\n\n\n \n An endpoint that has authenticated to the CSP requests a binding code from the CSP. The binding code is input into the endpoint associated with the new authenticator and sent to the CSP.\n \n \n The endpoint associated with the new authenticator obtains a binding code from the CSP. The binding code is input to an authenticated endpoint and sent to the CSP.\n \n\n\n\\clearpage\n\n\nIn addition to the requirements in Sec. 4.1.2.1 and Sec. 4.2, the following requirements SHALL apply when binding an external authenticator:\n\n\n \n An authenticated protected channel SHALL be established by the endpoint associated with the new authenticator and the CSP.\n \n \n The subscriber MAY be prompted to enter an identifier by which the CSP knows them on the endpoint associated with the new authenticator.\n \n \n The CSP SHALL generate a binding code using an approved random bit generator as described in Sec. 3.2.12 and send it to either the new authenticator endpoint or the authenticated endpoint approving the binding. The binding code SHALL be at least 40 bits in length if used with an identifier entered in the previous step. Otherwise, a binding code of at least 112 bits in length SHALL be required.\n \n \n The subscriber SHALL transfer the binding code to the other endpoint. 
This transfer SHALL either be manual or via a local out-of-band method (e.g., QR code). The binding code SHALL NOT be communicated over any insecure channel (e.g., email).\n \n \n The binding code SHALL be usable only once and SHALL be valid for a maximum of 10 minutes.\n \n \n Following the binding of the new authenticator (or issuance of a certificate, in the case of PKI-based authenticators), the CSP SHOULD encourage the subscriber to authenticate with the new authenticator to confirm that the process has been completed successfully.\n \n \n The CSP SHALL provide clear instructions on what the subscriber should do in the event of an authenticator binding mishap (e.g., making a button available to be pressed or a contact address to be used to allow a misbound authenticator to be quickly invalidated), as appropriate. This MAY be provided in the authenticated session in addition to the binding notification described in Sec. 4.6.\n \n\n\nThe binding of an external authenticator may introduce risks due to the potential for the subscriber to be tricked into using a binding code by an attacker or supplying a binding code to an attacker. In some cases, representations (e.g., QR codes) obtained from a trusted source (e.g., an authenticated session, especially when that authentication is phishing-resistant) are considered to be more robust against such attacks because they typically contain the URL of the CSP in addition to the binding code. As a result, there is less potential for the subscriber to be fooled into entering a binding code at a phishing site.\n\nBinding to a Subscriber-Provided Authenticator\n\nA subscriber may already possess authenticators that are suitable for authentication at a particular AAL. For example, they may have a multi-factor authenticator from a social network provider, considered AAL2 without identity proofing, and would like to use that authenticator at an RP that requires IAL2. 
This would necessitate identity proofing at IAL2, perhaps by a different CSP, and binding authenticators at enrollment with that CSP.\n\nCSPs SHOULD, where practical, accommodate subscriber-provided authenticators to relieve the burden on the subscriber of managing many authenticators. The binding of these authenticators SHALL be done as described in Sec. 4.1.2. If the authenticator strength is not self-evident (e.g., between single-factor and multi-factor authenticators of a given type), the CSP SHALL assume that the weaker authenticator has been used unless it can establish that the stronger authenticator is being used (e.g., by verification with the issuer or manufacturer of the authenticator).\n\nRenewal\n\nThe subscriber SHOULD bind a new or updated authenticator before an existing authenticator’s expiration. The process for this SHOULD conform closely to the binding process for an additional authenticator described in Sec. 4.1.2. The CSP MAY periodically take other actions (e.g., confirming contact addresses), either as a part of the renewal process or separately. Following the successful use of the replacement authenticator, the CSP SHOULD invalidate the expiring authenticator.\n\nAccount Recovery\n\nAccount recovery is the process by which a subscriber recovers from losing control of the authenticators necessary to authenticate at a desired AAL. This may be accomplished by repeating portions of the identity proofing process or by presenting one or more recovery codes, perhaps in conjunction with using an authenticator bound to their subscriber account that is still available to the subscriber. Once this is completed, the subscriber can bind one or more new authenticators to their subscriber account. An account recovery event always causes one or more notifications to be sent to the subscriber to aid in detecting the fraudulent use of account recovery.\n\nAccount recovery differs from authentication in several ways. 
Since account recovery is rarely expected to be invoked, it is generally less convenient than authentication and — depending on the situation and recovery methods offered by the CSP — may involve extended waiting times.\n\n\\clearpage\n\n\nAccount Recovery Methods\n\nFour general classes of account recovery methods are recognized. CSPs SHALL support one or more of the following:\n\n\n Saved recovery codes\n Issued recovery codes\n Use of recovery contacts\n Repeated identity proofing\n\n\nIn addition to these methods, the CSP MAY support an application-specific method (e.g., interaction with a CSP agent) to recover a subscriber account. The use of alternative methods SHALL be based on a risk analysis and documented by the CSP.\n\nSaved Recovery Codes\n\nAt enrollment, a CSP that supports this recovery option SHOULD issue a recovery code to the subscriber. The recovery code SHALL include at least 64 bits from an approved random bit generator. The saved recovery code may be presented as numeric or alphanumeric (e.g., Base64) for manual entry or as a machine-readable optical label (e.g., QR code) that contains the recovery code. At any point following enrollment, the subscriber MAY request a replacement recovery code. The issuance of a replacement recovery code SHALL result in an account recovery notification, as described in Sec. 4.6.\n\nSaved recovery codes are intended to be maintained offline (e.g., printed or written down) and stored securely by the subscriber for future use. The verification of saved recovery codes SHALL be subject to the throttling requirements in Sec. 3.2.2. Saved recovery codes SHALL be stored in the subscriber account in hashed form using an approved one-way function, as described in Sec. 3.1.1.2. 
Following the use of a saved recovery code, the CSP SHALL invalidate that recovery code and SHALL issue a new saved recovery code to the subscriber.\n\nIssued Recovery Codes\n\nCSPs that support this option allow the subscriber to maintain one or more recovery addresses (e.g., postal, email, text message, or voice). When recovery is required, a recovery code will be sent to a claimant-chosen address. The issued recovery code SHALL include at least six decimal digits (or equivalent) from an approved random bit generator, as described in Sec. 3.2.12. The issued recovery code may be presented as numeric or alphanumeric (e.g., Base64) for manual entry, a secure (e.g., https) link with a representation of the confirmation code, or a machine-readable optical label (e.g., QR code) that contains the recovery code.\n\n\\clearpage\n\n\nIssued recovery codes SHALL be valid for at most:\n\n\n 21 days when sent to a postal address within the contiguous United States,\n 30 days when sent to a postal address outside the contiguous United States,\n 10 minutes when sent via text messaging or voice, or\n 24 hours when sent to an email address.\n\n\nThe verification of issued recovery codes SHALL be subject to the throttling requirements in Sec. 3.2.2.\n\nWhen establishing recovery addresses, the CSP SHALL send a confirmation code with the same characteristics as a recovery code to the newly established recovery address. The recovery address SHALL be established only after the subscriber successfully confirms it. CSPs SHALL allow the subscriber to establish at least two recovery addresses.\n\nRecovery Contacts\n\nCSPs that support the use of recovery contacts SHALL allow the subscriber to specify one or more addresses of trusted associates to receive issued recovery codes. 
The requirements for recovery contacts are very similar to those for issued recovery codes with the following exceptions:\n\n\n The validity time for recovery codes sent to recovery contacts MAY be extended by 24 hours (i.e., valid for no more than 24 hours and 10 minutes if sent via text messaging) to provide additional time for the recovery contact to communicate the recovery code to the subscriber.\n Confirmation of the recovery code address MAY also be extended by 24 hours to allow the recovery contact to send the confirmation code to the subscriber for entry.\n\n\nRepeated Identity Proofing\n\nWhen the subscriber account has been identity proofed at a minimum of IAL1, CSPs SHOULD support account recovery by repeating a portion of the identity proofing process. The CSP SHALL repeat the necessary steps of identity proofing consistent with the level of initial identity proofing and SHALL confirm that the claimant’s identity is consistent with the previously established account. If the CSP has retained a biometric sample from the user or a copy of the evidence used during the initial proofing and it is of sufficient quality and resolution, the CSP MAY repeat only the verification portion of the identity proofing process, as described in [SP800-63A].\n\nRecovery Requirements by IAL/AAL\n\nDifferent recovery methods apply depending on the IAL and the maximum AAL associated with the subscriber account.\n\nRecovery at AAL1\n\nSince identity proofing requires issuing authenticators that are sufficient for multi-factor authentication to allow the subscriber to access personal information about themselves, subscriber accounts at AAL1 are without identity proofing, and therefore, repeated identity proofing is not possible. 
The CSP SHALL require the successful use of a saved recovery code, issued recovery code, or recovery contact.\n\nRecovery at AAL2\n\nTo recover an account that can authenticate at a maximum of AAL2, the CSP SHALL require the subscriber to complete one of the following:\n\n\n Two recovery codes obtained using different methods from the set (saved, issued, and recovery contacts)\n One recovery code from the set (saved, issued, and recovery contacts) plus authentication with a single-factor authenticator bound to the subscriber account\n Repeated identity proofing (provided that the subscriber account has been identity proofed)\n\n\nRecovery at AAL3\n\nIf an account that can authenticate at AAL3 has been identity proofed at IAL1 or IAL2, the requirements are the same as those for recovery at AAL2.\n\nIf an account that can authenticate at AAL3 has been identity proofed at IAL3, the CSP SHALL perform a successful biometric comparison against the biometric characteristic collected during the initial identity proofing session, in an onsite attended identity proofing session, as described in [SP800-63A]. The CSP MAY also require the presentation of evidence used in the initial identity proofing process.\n\nAccount Recovery Notification\n\nIn all cases, account recovery SHALL cause a notification to be sent to the subscriber, as described in Sec. 4.6.\n\nLoss, Theft, Damage, and Compromise\n\nCompromised authenticators include those that have been lost, stolen, or subject to unauthorized duplication or that have activation factors that are no longer in the subscriber’s control. Generally, one must assume that a lost authenticator has been stolen or compromised by someone other than the legitimate holder of the authenticator. Damaged or malfunctioning authenticators are also considered compromised to guard against any possibility of the extraction of the authenticator’s secret. 
One notable exception is a password that has been forgotten without other indications of having been compromised, such as having been obtained by an attacker.\n\nThe CSP SHALL suspend, invalidate, or destroy compromised authenticators from the subscriber’s account promptly following compromise detection. Organizations SHOULD establish time limits for this process.\n\nTo facilitate the secure reporting of an authenticator’s loss, theft, damage, or compromise, the CSP SHOULD provide the subscriber with a method of authenticating using a backup or alternate authenticator. This backup authenticator SHALL be a password or a physical authenticator. Either could be used, but only one authentication factor is required to make this report. Alternatively, the subscriber MAY establish an authenticated protected channel for the CSP to verify the information collected during identity proofing. The CSP MAY choose to verify a contact address (i.e., the email address, telephone number, or postal address) and suspend or invalidate authenticators that are reported to have been compromised.\n\nCSPs MAY support the temporary suspension of authenticators that are suspected of possible compromise. If suspension is supported, it SHOULD be reversed if the subscriber successfully authenticates to the CSP using a valid (i.e., not suspended) authenticator and requests reactivation of the suspended authenticator. The CSP MAY set a time limit after which a suspended authenticator can no longer be reactivated.\n\nExpiration\n\nCSPs MAY issue authenticators that expire. If and when an authenticator expires, it SHALL NOT be usable for authentication. 
When an authentication is attempted using an expired authenticator, the CSP SHOULD indicate to the subscriber that the authentication failure is due to expiration rather than some other cause.\n\nThe CSP SHOULD retrieve any authenticator that contains personal information or provide for its zeroization (erasure) or destruction promptly following expiration.\n\nThe replacement of expired authenticators SHALL conform to the binding process for an additional authenticator, as described in Sec. 4.1.2.\n\nInvalidation\n\nThe invalidation of an authenticator (sometimes referred to as revocation or termination) is the removal of the binding between the authenticator and a subscriber account.\n\nCSPs SHALL promptly invalidate authenticators when a subscriber account ceases to exist (e.g., subscriber’s death, the discovery of a fraudulent subscriber) when requested by the subscriber, when the authenticator is compromised, or when the CSP determines that the subscriber no longer meets its eligibility requirements. The CSP SHALL make a risk-based determination of the authenticity of invalidation requests from the subscriber, noting that the consequences of not invalidating a compromised authenticator are usually more significant than the denial-of-service potential of invalidating one in error.\n\nThe CSP SHOULD retrieve any authenticator that contains personal information or provide for its zeroization (erasure) or destruction promptly following invalidation.\n\nFurther requirements on the invalidation of PIV authenticators are found in [FIPS201].\n\nAccount Notifications\n\nCertain subscriber account events, such as the binding of an authenticator and account recovery, require the subscriber to be independently notified. These notifications help the subscriber detect possible fraud associated with their subscriber account.\n\nEvents that require notification SHALL cause a notification to be sent to the notification addresses stored in the subscriber account. 
Notification addresses may be a:\n\n\n Postal address\n Email address\n Address (e.g., telephone number) to which a text message or voice message is to be sent\n\n\nCSPs SHALL support at least two notification addresses per subscriber account, and at least one SHALL be validated during the identity proofing process. The CSP SHOULD allow subscribers with authentication at AAL2 or higher (or at AAL1 if that is the highest AAL available for the subscriber account) to update their notification addresses. The CSP SHOULD encourage the subscriber to maintain multiple notification addresses.\n\nNotifications SHALL be sent to all notification addresses except postal addresses. However, notifications SHALL be sent to postal addresses if no other form of notification address is stored in the subscriber account or if the notification is for account recovery at AAL3.\n\nThe notification SHALL provide clear instructions, including contact information, in case the recipient repudiates the event associated with the notification.\n\n"
} ,
{
"title" : "Session Management",
"category" : "",
"tags" : "",
"url" : "/800-63-4/sp800-63b/session/",
"date" : "2024-08-28 20:39:12 -0500",
"content" : "Session Management\n\nThis section is normative.\n\nOnce an authentication event has occurred, it is often desirable to allow the subscriber to continue using the application across multiple subsequent interactions without requiring them to repeat the authentication event. This is particularly the case with federation scenarios (described in [SP800-63C]) in which the authentication event necessarily involves the coordination of several components and parties across a network.\n\nTo facilitate this behavior, a session MAY be started in response to an authentication event and continue until it is terminated. The session MAY be terminated for any number of reasons, including but not limited to an inactivity timeout or an explicit logout event. The session MAY be extended through a reauthentication event (described in Sec. 5.2) in which the subscriber repeats some of the initial authentication process or performs a full authentication, thereby reestablishing the authenticated session.\n\nSession management is preferable to the continual presentation of credentials, as the poor usability of continual presentation often creates incentives for workarounds (e.g., caching activation factors), thereby negating authentication intent and obscuring the freshness of the authentication event.\n\nSession Bindings\n\nA session occurs between the software (i.e., the session subject) that a subscriber is running (e.g., browser, application, or operating system) and the RP or CSP that the subscriber is accessing (i.e., the session host). A session secret SHALL be shared between the subscriber’s software and the accessed service. This secret binds the two ends of the session and allows the subscriber to continue using the service over time. 
The secret SHALL be presented directly by the subscriber’s software, or possession of the secret SHALL be proven using a cryptographic mechanism.\n\nThe continuity of authenticated sessions SHALL be based upon the possession of a session secret that is issued by the verifier at the time of authentication and optionally refreshed during the session. The nature of a session depends on the application, such as:\n\n\n A web browser session with a “session” cookie or\n An instance of a mobile application that retains a session secret.\n\n\nSession secrets SHOULD NOT be persistent (i.e., retained across a restart of the associated application or a reboot of the host device) because they are tied to specific sessions that a restart or reboot would end. Cookies and similar “remember my browser” features SHALL NOT be used instead of authentication except as provided for reauthentication at AAL2 in Sec. 2.2.3 when the inactivity limit has been exceeded but the time limit has not.\n\nThe secret used for session binding SHALL be generated by the session host in direct response to an authentication event. A session SHOULD inherit the AAL properties of the authentication event that triggered its creation. A session MAY be considered at a lower AAL than the authentication event but SHALL NOT be considered at a higher AAL than the authentication event.\n\nThe secrets used for session binding SHALL meet all of the following requirements:\n\n\n Secrets are established during or immediately following authentication.\n Secrets are established using input from an approved random bit generator as described in Sec. 
3.2.12, and are at least 64 bits in length.\n Secrets are erased or invalidated by the session subject when the subscriber logs out.\n Secrets are either transferred from the session host to the RP or CSP via an authenticated protected channel or derived from keys that are established as part of establishing a valid, mutually authenticated protected channel.\n Secrets will time out and are not accepted after the times specified in Sec. 2.1.3, Sec. 2.2.3, and Sec. 2.3.3, as appropriate for the AAL.\n Secrets are unavailable to intermediaries between the host and the subscriber’s endpoint.\n\n\nIn addition, secrets used for session binding SHOULD be erased on the subscriber endpoint when they log out or when the secret is deemed to have expired. They SHOULD NOT be placed in insecure locations (e.g., HTML5 Local Storage) due to the potential exposure of local storage to cross-site scripting (XSS) attacks.\n\nFollowing authentication, authenticated sessions SHALL NOT fall back to an insecure transport (e.g., from https to http).\n\nPOST/PUT content SHALL contain a session identifier that the RP SHALL verify to protect against cross-site request forgery.\n\nSeveral mechanisms exist for managing a session over time. The following sections give different examples, additional requirements, and considerations for each example technology. Additional informative guidance is available in the OWASP Session Management Cheat Sheet [OWASP-session].\n\nSessions SHOULD provide a readily accessible mechanism for subscribers to terminate (i.e., log off) their session when their interaction is complete. Session logoff gives the subscriber additional confidence and control over the security of their session, particularly in situations where the endpoint might be accessible to others.\n\nBrowser Cookies\n\nBrowser cookies are the predominant mechanism by which a session is created and tracked when a subscriber accesses a service. 
Cookies are not authenticators but are suitable as short-term secrets for the duration of a session.\n\nCookies used for session maintenance:\n\n\n SHALL be tagged to be accessible only on secure (HTTPS) sessions.\n SHALL be accessible to the minimum practical hostnames and paths.\n SHOULD be tagged as inaccessible via JavaScript (HttpOnly).\n SHOULD be tagged to expire at or soon after the session’s validity period. This requirement is intended to limit the accumulation of cookies but SHALL NOT be relied upon to enforce session timeouts.\n SHOULD have the “__Host-“ prefix and set “Path=/”.\n SHOULD set “SameSite=Lax” or “SameSite=Strict”.\n SHOULD contain only an opaque string (e.g., a session identifier) and SHALL NOT contain cleartext personal information.\n\n\nAccess Tokens\n\nAn access token (e.g., OAuth [RFC6749]) is used to allow an application to access a set of services on a subscriber’s behalf following an authentication event. The RP SHALL NOT interpret the presence of an OAuth access token as an indicator of the subscriber’s presence in the absence of other signals. The OAuth access token and any associated refresh tokens could be valid long after the authentication session has ended and the subscriber has left the application.\n\nReauthentication\n\nPeriodic reauthentication of sessions SHALL be performed to confirm the subscriber’s continued presence at an authenticated session (i.e., that the subscriber has not walked away without logging out).\n\nSession management uses two types of timeouts. An overall timeout limits the duration of an authenticated session to a specific period following authentication or a previous reauthentication. An inactivity timeout terminates a session without activity from the subscriber for a specific period. For both types of timeouts, the RP MAY alert the subscriber that the session is about to be terminated and allow the subscriber to make the session active or reauthenticate as appropriate before the session expires. 
When either timeout expires, the session SHALL be terminated. Session activity SHALL reset the inactivity timeout, and successful reauthentication during a session SHALL reset both timeouts.\n\nThe overall and inactivity timeout expiration limits depend on several factors, including the AAL of the session, the environment in which the session is conducted (e.g., whether the subscriber is in a restricted area), the type of endpoint being used (e.g., mobile application or web-based), whether the endpoint is a managed device1, and the nature of the application itself. Agencies SHALL establish and document the inactivity and overall time limits being enforced in a system security plan such as that described in [SP800-39].\n\nDetailed requirements for each AAL are given in Sec. 2.1.3, Sec. 2.2.3, and Sec. 2.3.3.\n\nWhen using a federation protocol and IdP to authenticate at the RP, as described in [SP800-63C], special considerations apply to session management and reauthentication. The federation protocol communicates an authentication event at the IdP to the RP using an assertion, and the RP then begins an authenticated session based on the successful validation of this assertion. Since the IdP and RP manage sessions separately from each other and the federation protocol does not connect the session management between the IdP and RP, the termination of the subscriber’s sessions at an IdP and an RP are independent of each other. Likewise, the subscriber’s sessions at multiple different RPs are established and terminated independently of each other.\n\nConsequently, when an RP session expires and the RP requires reauthentication, it is possible that the session at the IdP has not expired and that a new assertion could be generated from this session at the IdP without explicitly reauthenticating the subscriber. 
The IdP can communicate the time and details of the authentication event to the RP, but it is up to the RP to determine whether the reauthentication requirements have been met. Section 4.7 of [SP800-63C] provides additional details and requirements for session management within a federation context.\n\n\\clearpage\n\n\nSession Monitoring\n\nSession monitoring (sometimes called continuous authentication) is the ongoing evaluation of session characteristics to detect possible fraud during a session.\n\nSession monitoring MAY be performed by the RP, in coordination with the CSP/verifier, as a risk reduction measure. When potential fraud is detected during a session, the RP SHOULD take action in conjunction with the CSP/verifier, such as to reauthenticate, terminate the session, or notify appropriate support personnel. Session characteristics that MAY be evaluated include:\n\n\n Usage patterns, velocity, and timing\n Behavioral biometric characteristics (e.g., typing cadence)\n Device and browser characteristics\n Geolocation\n IP address characteristics (e.g., whether the IP address is in a block known for abuse)\n\n\nMost of these characteristics have privacy implications. Collection, storage of expected subscriber characteristics, and processing of session characteristics SHALL be included in the privacy risk assessment described in Sec. 7.\n\n\n \n \n Managed devices include personal computers, laptops, mobile devices, virtual machines, or infrastructure components that are equipped with a management agent that allows information technology staff to discover, maintain, and control them. ↩\n \n \n\n"
} ,
{
"title" : "Threats and Security Considerations",
"category" : "",
"tags" : "",
"url" : "/800-63-4/sp800-63b/security/",
"date" : "2024-08-28 20:39:12 -0500",
"content" : "Threats and Security Considerations\n\nThis section is informative.\n\nAuthenticator Threats\n\nAn attacker who can gain control of an authenticator will often be able to masquerade as the authenticator’s owner. Threats to authenticators can be categorized based on attacks on the types of authentication factors that comprise the authenticator:\n\n\n \n “Something you know” may be disclosed to an attacker. For example, the attacker may guess a password. If the authenticator is a shared secret, the attacker could access the CSP or verifier and obtain the secret value or perform a dictionary attack on a hash of that value. An attacker may observe the entry of a PIN or passcode, find a written record or journal entry of a PIN or passcode, or install malicious software (e.g., a keyboard logger) to capture the secret. Additionally, an attacker may determine the secret through offline attacks on a password database maintained by the verifier.\n \n \n “Something you have” may be lost, damaged, stolen from the owner, or cloned by an attacker. For example, an attacker who gains access to the owner’s computer may copy a software authenticator. A hardware authenticator may be stolen, tampered with, or duplicated. Out-of-band secrets may be intercepted by an attacker and used to authenticate their own session. A subscriber may be socially engineered to provide access to secrets without intentional collusion.\n \n \n “Something you are” may be replicated. For example, an attacker may obtain a copy of the subscriber’s fingerprint and construct a replica.\n \n\n\nSubscribers sometimes collude with attackers, and virtually nothing can be done from an authentication perspective to prevent these attacks. With this caveat in mind, threats to the authenticators used for digital authentication are listed in Table 2 along with some examples.\n\nTable 2. 
Authenticator threats\n\n\n \n \n Authenticator Threat/Attack\n Description\n Examples\n \n \n \n \n Theft\n An attacker steals a physical authenticator.\n A hardware cryptographic authenticator is stolen.\n \n \n \n \n An OTP authenticator is stolen.\n \n \n \n \n A look-up secret authenticator is stolen.\n \n \n \n \n A cell phone is stolen.\n \n \n Duplication\n The subscriber’s authenticator has been copied with or without their knowledge.\n Passwords written on paper are disclosed.\n \n \n \n \n Passwords stored in an electronic file are copied.\n \n \n \n \n A vulnerability in an insufficiently secure password manager is exploited.\n \n \n \n \n A software PKI authenticator (i.e., private key) is copied.\n \n \n \n \n A Look-up secret authenticator is copied.\n \n \n \n \n A counterfeit biometric authenticator is manufactured.\n \n \n \n \n Exportable cryptographic keys are obtained from a device or cloud-based sync fabric.\n \n \n Eavesdropping\n The attacker observes the authenticator secret or authenticator output as the subscriber is authenticating.\n Passwords are obtained by watching keyboard entries.\n \n \n \n \n Passwords or authenticator outputs are intercepted by keystroke logging software.\n \n \n \n \n A PIN is captured from a PIN pad device.\n \n \n \n \n A hashed password is obtained and used by an attacker for another authentication (i.e., pass-the-hash attack).\n \n \n \n The attacker intercepts an out-of-band secret by compromising the communication channel.\n An out-of-band secret is transmitted via unencrypted Wi-Fi and received by the attacker.\n \n \n Offline Cracking\n The authenticator is exposed using analytical methods outside of the authentication mechanism.\n A software PKI authenticator is subjected to a dictionary attack to identify the correct password to decrypt the private key.\n \n \n Side-Channel Attack\n The authenticator’s secret is exposed using the physical characteristics of the authenticator.\n A key is extracted by 
differential power analysis on a hardware cryptographic authenticator.\n \n \n \n \n A cryptographic authenticator secret is extracted by analysis of the authenticator’s response time over several attempts.\n \n \n Phishing or Pharming\n The authenticator output is captured by fooling the claimant into thinking that the attacker is a verifier or RP.\n A claimant reveals a password to a website impersonating the verifier.\n \n \n \n \n A password is revealed by a bank subscriber in response to an email inquiry from a phisher pretending to represent the bank.\n \n \n \n \n A password is revealed by the claimant at a bogus verifier website reached through DNS spoofing.\n \n \n Social Engineering\n The attacker establishes a level of trust with a subscriber to convince them to reveal their authenticator secret or authenticator output.\n A password is revealed by the subscriber to an officemate asking for the password on behalf of the subscriber’s boss.\n \n \n \n \n A password is revealed by a subscriber in a telephone inquiry from an attacker masquerading as a system administrator.\n \n \n \n \n An attacker who has convinced the mobile operator to redirect the victim’s mobile phone to them receives an out-of-band secret sent via SMS.\n \n \n \n \n A subscriber erroneously approves a push-based authentication request coming from a repeated “fatigue” attack.\n \n \n Online Guessing\n The attacker connects to the verifier online and attempts to guess a valid authenticator output in the context of that verifier.\n Online dictionary attacks are used to guess passwords.\n \n \n \n \n Online guessing is used to guess authenticator outputs for an OTP authenticator that is registered to a legitimate subscriber.\n \n \n Endpoint Compromise\n Malicious code on the endpoint proxies remote access to a connected authenticator without the subscriber’s consent.\n A cryptographic authenticator connected to the endpoint is used to authenticate remote attackers.\n \n \n \n 
Malicious code on the endpoint causes authentication to a verifier other than the one intended.\n Authentication is performed on behalf of an attacker rather than the subscriber.\n \n \n \n \n A malicious app on the endpoint reads an out-of-band secret sent via SMS, and the attacker uses the secret to authenticate.\n \n \n \n Malicious code on the endpoint compromises a multi-factor software cryptographic authenticator.\n Malicious code proxies authentication or exports authenticator keys from the endpoint.\n \n \n Unauthorized Binding\n An attacker causes an authenticator under their control to be bound to a subscriber account.\n An attacker intercepts an authenticator or provisioning key en route to the subscriber.\n \n \n Latent Keys\n A decommissioned device retains authentication keys.\n A device (e.g., laptop computer) is sold without recognition that device-based authentication keys are present and could be used by a new owner.\n \n \n Proliferation of Keys\n Transferring device-based authentication keys between devices increases the attack surface.\n A subscriber copies authentication keys to many devices, possibly some that are not under their direct control, and loses track of where the keys are stored.\n \n \n Key Transfer Security\n Authentication keys are transferred between devices through an insufficiently secure cloud service.\n Access to a cloud service that stores authentication keys requires only single-factor authentication.\n \n \n \n \n Keys are made available to others through a URL sent via email.\n \n \n Insider Threats\n An insider with access to the CSP (e.g., customer support representative) colludes with an attacker to give access to subscriber accounts.\n \n \n \n\n\nThreat Mitigation Strategies\nTable 3 summarizes related mechanisms that assist in mitigating the threats described in Table 2.\n\nTable 3. 
Mitigating authenticator threats\n\n\n \n \n Authenticator Threat/Attack\n Threat Mitigation Mechanisms\n Normative Reference Sections\n \n \n \n \n Theft\n Use multi-factor authenticators that must be activated through a password or biometric.\n 2.2.1, 2.3.1\n \n \n \n Use a combination of authenticators that includes a password or biometric.\n 2.2.1, 2.3.1\n \n \n Duplication\n Use authenticators from which it is difficult to extract and duplicate long-term authentication secrets.\n 2.2.2, 2.3.2, 3.1.6.1\n \n \n \n Enforce AAL2 requirements for access to sync fabrics that contain exported authentication keys, and only allow them to be imported into trusted devices.\n 3.1.7.1\n \n \n Eavesdropping\n Ensure the endpoint’s security before use, especially with respect to freedom from malware (e.g., key loggers).\n 2.2.2\n \n \n \n Avoid using unauthenticated and unencrypted communication channels to send out-of-band authenticator secrets.\n 3.1.3.1\n \n \n \n Authenticate over authenticated protected channels (e.g., observe the lock icon in the browser window).\n 2.1.2, 2.2.2, 2.3.2\n \n \n \n Use authentication protocols that are resistant to replay attacks (e.g., pass-the-hash).\n 3.2.7\n \n \n \n Use authentication endpoints that employ trusted input and display capabilities.\n 3.1.6.1, 3.1.7.1\n \n \n Offline Cracking\n Use an authenticator with a high entropy authenticator secret.\n 3.1.2.1, 3.1.4.1, 3.1.5.1, 3.1.6.1, 3.1.7.1\n \n \n \n Store centrally verified passwords in a salted, hashed form, including a keyed hash.\n 3.1.1.1.2\n \n \n Side-Channel Attack\n Use authenticator algorithms that maintain constant power consumption and timing regardless of secret values.\n 2.3.2\n \n \n Phishing or Pharming\n Use authenticators that provide phishing resistance.\n 3.2.5\n \n \n Social Engineering\n Avoid using authenticators that present a social engineering risk to third parties (e.g., customer service agents).\n 4.1.2.1, 4.2\n \n \n Online Guessing\n Use 
authenticators that generate high entropy output.\n 3.1.2.1, 3.1.6.1, 3.1.7.1\n \n \n \n Use an authenticator that locks after repeated failed activation attempts.\n 3.2.2\n \n \n Endpoint Compromise\n Use hardware authenticators that require physical action by the claimant.\n 3.2.8\n \n \n \n Maintain software-based keys in restricted-access storage.\n 3.1.3.1, 3.1.6.1, 3.1.7.1, 3.2.13\n \n \n Unauthorized Binding\n Provision authenticators and associated keys using authenticated protected channels or in person.\n 4.1\n \n \n Latent Keys\n Ensure the secure disposal of equipment that contains device-based authentication keys.\n 4.4, 4.5\n \n \n \n In enterprise applications, limit the transfer of keys to organizationally managed or trusted devices.\n B.2\n \n \n Key Transfer Security\n Encourage or require subscribers to use cloud services that have been approved for key storage and transfer.\n B.2\n \n \n\n\nSeveral other strategies may be applied to mitigate the threats described in Table 2:\n\n\n \n Multiple factors make successful attacks more difficult to accomplish. If an attacker must steal a cryptographic authenticator and guess a password, then the work to discover both factors may be too high.\n \n \n Physical security mechanisms may be employed to protect a stolen authenticator from duplication. 
Physical security mechanisms can provide tamper evidence, detection, and response.\n \n \n Requiring long passwords that do not appear in common dictionaries may force attackers to try every possible value.\n \n \n System and network security controls may be employed to prevent an attacker from gaining access to a system or installing malicious software.\n \n \n Periodic training may be performed to ensure that subscribers understand when and how to report a compromise or a suspicion of compromise and to recognize patterns of behavior that may signify that an attacker is attempting to compromise the authentication process.\n \n \n Out-of-band techniques may be employed to verify the proof of possession of registered devices (e.g., cell phones).\n \n\n\nAuthenticator Recovery\n\nThe weak point in many authentication mechanisms is the process followed when a subscriber loses control of one or more authenticators and needs to replace them. In many cases, the options for authenticating the subscriber are limited, and economic concerns (e.g., the cost of maintaining call centers) motivate the use of inexpensive and often less secure backup authentication methods. To the extent that authenticator recovery is human-assisted, there is also a risk of social engineering attacks.\n\nTo maintain the integrity of the authentication factors, it is essential that one authentication factor cannot be leveraged to obtain an authenticator of a different factor. For example, a password must not be usable to obtain a new list of look-up secrets.\n\nSession Attacks\n\nHijacking attacks on the session following an authentication event can have security impacts similar to those of a compromised authentication event. The session management guidelines in Sec. 5 are essential to maintaining session integrity against attacks (e.g., XSS). It is also important to sanitize all information to be displayed [OWASP-XSS-prevention] to ensure that it does not contain executable content. 
These guidelines recommend that session secrets be made inaccessible to mobile code to provide extra protection against the exfiltration of session secrets.\n\nAnother post-authentication threat is cross-site request forgery (CSRF), which takes advantage of users’ tendency to have multiple sessions active simultaneously. It is essential to embed and verify a session identifier for web requests to prevent a valid URL or request from being unintentionally or maliciously activated.\n"
} ,
{
"title" : "Privacy Considerations",
"category" : "",
"tags" : "",
"url" : "/800-63-4/sp800-63b/privacy/",
"date" : "2024-08-28 20:39:12 -0500",
"content" : "Privacy Considerations\n\nThese privacy considerations supplement the guidance in Sec. 4. This section is informative.\n\nPrivacy Risk Assessment\n\nThe authentication requirements in Sec. 2 and the optional session monitoring guidelines in Sec. 5.3 require the CSP to conduct a privacy risk assessment for records retention. Such a privacy risk assessment would include:\n\n\n The likelihood that the records retention could create a problem for the subscriber, such as invasiveness or unauthorized access to the information.\n The impact if such a problem did occur.\n\n\nCSPs should be able to reasonably justify any response to identified privacy risks, including accepting, mitigating, and sharing the risk. Subscriber consent is a form of sharing the risk. It is therefore only appropriate for use when a subscriber could reasonably be expected to have the capacity to assess and accept the shared risk.\n\nPrivacy Controls\n\nSection 2.4.3 requires CSPs to employ appropriately tailored privacy controls. [SP800-53] provides a set of privacy controls for CSPs to consider when deploying authentication mechanisms, including notices, redress, and other important considerations for successful and trustworthy deployments.\n\nUse Limitation\n\nSection 2.4.3 requires CSPs to maintain the objectives of predictability (enabling reliable assumptions by individuals, owners, and operators about PII and its processing by an information system) and manageability (i.e., providing the capability for the granular administration of PII, including alteration, deletion, and selective disclosure) commensurate with privacy risks that can arise from the processing of attributes for purposes other than identity proofing, authentication, authorization, or attribute assertion; related fraud mitigation; or to comply with law or legal process [NISTIR8062].\n\nCSPs may have various business purposes for processing attributes, including providing non-identity services to subscribers. 
However, processing attributes for purposes other than those specified at collection can create privacy risks. CSPs can identify appropriate measures that are commensurate with the privacy risks that arise from additional processing. For example, absent applicable laws, regulations, or policies, obtaining consent may not be necessary when processing attributes to provide non-identity services requested by subscribers. However, notices may help subscribers maintain reliable assumptions about the processing (i.e., predictability). Other processing of attributes may carry different privacy risks that call for obtaining consent or allowing subscribers more control over the use or disclosure of specific attributes (i.e., manageability). Subscriber consent must be meaningful. Therefore, as stated in Sec. 2.4.3, when CSPs use consent measures, the subscriber’s acceptance of additional uses shall not be a condition of providing authentication services.\n\nConsult the agency SAOP if there are questions about whether the proposed processing falls outside of the scope of the permitted processing or appropriate privacy risk mitigation measures.\n\nAgency-Specific Privacy Compliance\n\nSection 2.4.3 describes specific compliance obligations for federal CSPs. It is critical to involve the agency SAOP in the earliest stages of digital authentication system development to assess and mitigate privacy risks and advise the agency on compliance requirements, such as whether or not the collection of PII to issue or maintain authenticators triggers the Privacy Act of 1974 [PrivacyAct] or the E-Government Act of 2002 [E-Gov] requirement to conduct a PIA. For example, concerning the centralized maintenance of biometrics, Privacy Act requirements will likely be triggered and require coverage by a new or existing Privacy Act system of records notice due to the collection and maintenance of PII and any other attributes that are necessary for authentication. 
The SAOP can similarly assist the agency in determining whether a PIA is required.\n\nThese considerations should not be read as a requirement to develop a Privacy Act SORN or PIA for authentication alone. In many instances, a PIA and SORN can encompass the entire digital identity process or include the digital authentication process as part of a larger programmatic PIA that discusses the online services or benefits that the agency is establishing.\n\nDue to the many components of digital authentication, the SAOP needs to be aware of and understand each component. For example, other privacy artifacts may apply to an agency that offers or uses federated CSP or RP services (e.g., Data Use Agreements, Computer Matching Agreements). The SAOP can assist the agency in determining what additional requirements apply. Moreover, a thorough understanding of the individual components of digital authentication will enable the SAOP to assess and mitigate privacy risks through compliance processes or other means.\n"
} ,
{
"title" : "Usability Considerations",
"category" : "",
"tags" : "",
"url" : "/800-63-4/sp800-63b/usability/",
"date" : "2024-08-28 20:39:12 -0500",
"content" : "Usability Considerations\n\nThis section is informative.\n\n\n To align with the standard terminology of user-centered design and usability, the term “user” is used throughout this section to refer to the human party. In most cases, the user in question will be the subject in the role of applicant, claimant, or subscriber, as described elsewhere in these guidelines.\n\n\n[ISO/IEC9241-11] defines usability as the “extent to which a system, product, or service can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use.” This definition focuses on users, their goals, and the contexts of use as the key elements necessary for achieving effectiveness, efficiency, satisfaction, and usability.\n\nA user’s goal when accessing an information system is to perform an intended task. Authentication is the function that enables this goal. However, from the user’s perspective, authentication stands between them and their intended task. Effective design and implementation of the authentication process makes it easy to do the right thing, hard to do the wrong thing, and easy to recover if the wrong thing happens.\n\nOrganizations need to be cognizant of the overall implications of their stakeholders’ entire digital authentication ecosystem. Users often employ multiple authenticators, each for a different RP. They then struggle to remember passwords, recall which authenticator goes with which RP, and carry multiple physical authentication devices. Evaluating the usability of authentication is critical, as poor usability often results in coping mechanisms and unintended workarounds that can ultimately degrade the effectiveness of security controls.\n\nIntegrating usability into the development process can lead to authentication solutions that are secure and usable while still addressing users’ authentication needs and organizations’ business goals. 
The impacts of usability across digital systems need to be considered as part of the risk assessment when deciding on the appropriate AAL. Authenticators with a higher AAL sometimes offer better usability and should be allowed for use with lower AAL applications.\n\nLeveraging federation for authentication can alleviate many usability issues, though such an approach has its tradeoffs, as discussed in [SP800-63C].\n\nThis section provides general usability considerations and possible implementations but does not recommend specific solutions. The implementations mentioned are examples that encourage innovative technological approaches to address specific usability needs. Furthermore, usability considerations and their implementations are sensitive to many factors that prevent a one-size-fits-all solution. For example, a font size that works in a desktop computing environment may force text to scroll off of a small OTP authenticator screen. Performing a usability evaluation on the selected authenticator is a critical component of implementation. It is important to conduct evaluations with representative users, set realistic goals and tasks, and identify appropriate contexts of use.\n\nGuidelines and considerations are described from the users’ perspective.\n\nSection 508 of the Rehabilitation Act of 1973 [Section508] was enacted to eliminate barriers in information technology and require federal agencies to make electronic and information technology accessible to people with disabilities. While these guidelines do not directly assert requirements from Section 508, identity service providers are expected to comply with Section 508 provisions. 
Beyond compliance with Section 508, federal agencies and their service providers are generally expected to design services and systems with the experiences of people with disabilities in mind to ensure that accessibility is prioritized throughout identity system lifecycles.\n\nCommon Usability Considerations for Authenticators\nWhen selecting and implementing an authentication system, consider usability across the entire lifetime of the selected authenticators (e.g., their typical use and intermittent events) while being mindful of users, their goals, and their contexts of use.\n\nA single authenticator type does not usually suffice for the entire user population. Therefore, whenever possible and based on AAL requirements, CSPs should support alternative authenticator types and allow users to choose the type that best meets their needs. Task immediacy, perceived cost-benefit trade-offs, and unfamiliarity with certain authenticators often impact choices. Users tend to choose options that incur the least burden or cost at that moment. For example, if a task requires immediate access to an information system, a user may prefer to create a new subscriber account and password rather than select an authenticator that requires more steps. Alternatively, users may choose a federated identity option that is approved at the appropriate IAL, AAL, and FAL if they already have a subscriber account with an identity provider. Users may understand some authenticators better than others and have different levels of trust based on their understanding and experience.\n\nPositive user authentication experiences are integral to achieving desired business outcomes. Therefore, organizations should strive to consider authenticators from the users’ perspective. The overarching authentication usability goal is to minimize user burden and authentication friction (e.g., the number of times a user has to authenticate, the steps involved, and the amount of information they have to track). 
Single sign-on exemplifies one such minimization strategy.\n\nUsability considerations applicable to most authenticators are described below. Subsequent sections describe usability considerations specific to a particular authenticator.\n\nUsability considerations that are applicable to most authenticators include:\n\n\n \n Provide information on the use and maintenance of the authenticator (e.g., what to do if the authenticator is lost or stolen), and instructions for use, especially if there are different requirements for first-time use or initialization.\n \n \n Authenticator availability, as users will need to remember to have their authenticator readily available. Consider the need for alternative authentication options to protect against loss, damage, or other negative impacts on the original authenticator and the potential loss of battery power, if applicable.\n \n \n Alternative authentication options whenever possible and based on AAL requirements. This allows users to choose an authenticator based on their context, goals, and tasks (e.g., the frequency and immediacy of the task). Alternative authentication options also help address availability issues that may occur with a particular authenticator.\n \n Characteristics of user-facing text:\n \n Write user-facing text (e.g., instructions, prompts, notifications, error messages) in plain language for the intended audience. Avoid technical jargon, and write for the audience’s expected literacy level.\n Consider the legibility of user-facing and user-entered text, including font style, size, color, and contrast with the surrounding background. Illegible text contributes to user entry errors. 
To enhance legibility, consider the use of:\n \n High contrast (i.e., black on white)\n Sans serif fonts for electronic displays and serif fonts for printed materials.\n Fonts that clearly distinguish between characters that are easily confused (e.g., the capital letter “O” and the number zero “0”)\n A minimum font size of 12 points as long as the text fits on the device’s display\n \n \n Avoid using icons (e.g., padlocks or shields) that might be confused with security indicators in browsers.\n \n \n User experience during authenticator entry:\n \n Offer the option to display text during entry, as masked text entry is error-prone. Once a given character is displayed long enough for the user to see, it can be hidden. Consider the device when determining masking delay time, as it takes longer to enter passwords on mobile devices (e.g., tablets and smartphones) than on traditional desktop computers. Ensure that masking delay durations are consistent with user needs.\n Ensure that the time allowed for text entry is adequate (i.e., the entry screen does not time out prematurely). Ensure that the allowed text entry times are consistent with user needs.\n Provide clear, meaningful, and actionable feedback on entry errors to reduce user confusion and frustration. Significant usability implications arise when users do not know that they have entered text incorrectly.\n Allow at least 10 entry attempts for authenticators that require the entry of the authenticator output by the user. The longer and more complex the entry text, the greater the likelihood of user entry errors.\n Provide clear, meaningful feedback on the number of remaining allowed attempts. 
For rate limiting (i.e., throttling), inform users how long they have to wait until the next attempt.\n \n \n Minimize the impact of form-factor constraints, such as limited touch and display areas on mobile devices:\n \n Larger touch areas improve usability for text entry since typing on small devices is significantly more error-prone and time-consuming than typing on a full-size keyboard due to the size of the input mechanism (e.g., a finger) relative to the size of the on-screen target.\n Follow good user interface and information design for small displays.\n \n \n\n\nUsability considerations for intermittent events (e.g., reauthentication, subscriber account lock-out, expiration, revocation, damage, loss, theft, and non-functional software) across authenticator types include:\n\n\n \n Prompt users to perform some activity just before (e.g., two minutes before) an inactivity timeout would otherwise occur.\n \n \n Prompt users to save their work before a fixed reauthentication timeout occurs regardless of user activity.\n \n \n Clearly communicate how and where to acquire technical assistance (e.g., provide users with a link to an online self-service feature, chat sessions, or a phone number for help desk support). Ideally, sufficient information can be provided to enable users to recover from intermittent events on their own without outside intervention.\n \n \n Provide an accessible means for the subscriber to end their session (i.e., logoff).\n \n\n\nUsability Considerations by Authenticator Type\nThe following sections describe other usability considerations that are specific to particular authenticator types.\n\nPasswords\nTypical Usage\n\nUsers often manually input the password (sometimes referred to as a passphrase or PIN). Alternatively, they may use a password manager to assist in the selection of a secure password and in maintaining distinct passwords for each authenticated service. 
The use of distinct passwords is important to avoid “credential stuffing” attacks in which an attacker uses a compromised password from one site on other sites where the user might also have an account. Agencies should carefully evaluate password managers before making recommendations or mandates to confirm that they meet expectations for secure implementation.\n\nUsability considerations for typical usage without a password manager include:\n\n\n Memorability of the password\n \n The likelihood of a recall failure increases as there are more items for users to remember. With fewer passwords, users can more easily recall the specific password needed for a particular RP.\n The memory burden is greater for a less frequently used password.\n \n \n User experience during entry of the password\n \n Support copy and paste functionality in fields for entering passwords, including passphrases.\n \n \n\n\nIntermittent Events\n\nUsability considerations for intermittent events include:\n\n\n When users create and change passwords\n \n Clearly communicate information on how to create and change passwords.\n Clearly communicate password requirements, as specified in Sec. 3.1.1.\n Allow at least 64 characters in length to support the use of passphrases. Encourage users to make passwords as lengthy as they want and use any characters that they like (including spaces) to aid memorization. Ensure that user interfaces support sufficient password lengths.\n Do not impose other composition rules (e.g., mixtures of different character types) on passwords.\n Do not require that passwords be changed arbitrarily (e.g., periodically) unless there is a user request or evidence of authenticator compromise (see Sec. 
3.1.1 for additional information).\n \n \n Provide clear, meaningful, and actionable feedback when chosen passwords are rejected (e.g., when it appears on a “blocklist” of unacceptable passwords or has been used previously).\n\n\nLook-Up Secrets\nTypical Usage\n\nSubscribers use a printed or electronic authenticator to look up the appropriate secrets needed to respond to a verifier’s prompt. For example, a user may be asked to provide a specific subset of the numeric or character strings printed on a card in table format.\n\nUsability considerations for typical usage include:\n\n\n User experience during entry of look-up secrets.\n \n Consider the complexity and size of the prompts. There are greater usability implications with larger subsets of secrets that a user is prompted to look up. Both the cognitive workload and physical difficulty for entry should be taken into account.\n \n \n\n\nOut-of-Band\nTypical Usage\n\nOut-of-band authentication requires that users have access to a primary and secondary communication channel.\n\nUsability considerations for typical usage include:\n\n\n \n Notify users of the receipt of a secret on a lockable device. If the out-of-band device is locked, authentication to the device should be required to access the secret.\n \n \n Depending on the implementation, consider form-factor constraints, which are particularly problematic when users must enter text on mobile devices. Providing larger touch areas will improve usability for entering secrets on mobile devices.\n \n \n Consider offering features that do not require text entry on mobile devices (e.g., a copy-paste feature), which are particularly helpful when the primary and secondary channels are on the same device. 
For example, it is difficult for users to transfer the authentication secret manually using a smartphone because they must switch back and forth — potentially multiple times — between the out-of-band application and the primary channel.\n \n \n Messages and notifications to out-of-band devices should contain contextual information for the user, such as the name of the service being accessed.\n \n \n Out-of-band messages should be delivered in a consistent manner and style to aid the subscriber in identifying potentially suspicious authentication requests.\n \n\n\nSingle-Factor OTP\nTypical Usage\n\nUsers access the OTP generated by the single-factor OTP authenticator. The authenticator output is typically displayed on the authenticator, and the user enters it during the session being authenticated.\n\nUsability considerations for typical usage include:\n\n\n \n Authenticator output allows at least one minute between changes but ideally allows users two full minutes, as specified in Sec. 3.1.4.1. Users need adequate time to enter the authenticator output, including looking back and forth between the single-factor OTP authenticator and the entry screen.\n \n \n Depending on the implementation, the following are additional usability considerations for implementers:\n \n It is preferable for the single-factor OTP authenticator to supply its output via an electronic interface (e.g., USB port) so that users do not have to manually enter the authenticator output. However, if a physical input (e.g., pressing a button) is required to operate, the location of the USB ports could pose usability difficulties. For example, the USB ports of some computers are located on the back of the computer and may be difficult for users to reach.\n Limited availability of a direct computer interface (e.g., USB port) could pose usability difficulties. For example, the number of USB ports on laptop computers is often very limited. 
This may force users to unplug other USB peripherals to use the single-factor OTP authenticator.\n \n \n\n\nMulti-Factor OTP\nTypical Usage\n\nUsers access the OTP generated by the multi-factor OTP authenticator through a second authentication factor. The OTP is typically displayed on the device, and the user manually enters it during the session being authenticated. The second authentication factor may be achieved through some kind of integral entry pad to enter a password, an integral biometric (e.g., fingerprint) reader, or a direct computer interface (e.g., USB port). Usability considerations for the additional factor also apply (see Sec. 8.2.1 for passwords and Sec. 8.4 for biometrics used in multi-factor authenticators).\n\nUsability considerations for typical usage include:\n\n\n User experience during manual entry of the authenticator output\n \n For time-based OTP, provide a grace period in addition to the time during which the OTP is displayed. Users need adequate time to enter the authenticator output, including looking back and forth between the multi-factor OTP authenticator and the entry screen.\n Consider form-factor constraints if users must unlock the multi-factor OTP authenticator via an integral entry pad or enter the authenticator output on mobile devices. Typing on small devices is significantly more error-prone and time-consuming than typing on a traditional keyboard. Providing larger touch areas improves usability for unlocking the multi-factor OTP authenticator or entering the authenticator output on mobile devices.\n Limited availability of a direct computer interface (e.g., USB port) could pose usability difficulties. 
For example, laptop computers often have a limited number of USB ports, which may force users to unplug other USB peripherals to use the multi-factor OTP authenticator.\n \n \n\n\nSingle-Factor Cryptographic Authenticator\nTypical Usage\n\nUsers authenticate by proving possession and control of the cryptographic key.\n\nUsability considerations for typical usage include:\n\n\n \n Give cryptographic keys appropriately descriptive names that are meaningful to users so that they can recognize and recall which cryptographic key to use for which authentication task. This prevents users from having to deal with multiple similarly and ambiguously named cryptographic keys. Selecting from multiple cryptographic keys on smaller mobile devices may be particularly problematic if the names of the cryptographic keys are shortened due to reduced screen sizes.\n \n \n Requiring a physical input (e.g., pressing a button) to operate a single-factor cryptographic authenticator could pose usability difficulties. For example, some USB ports are located on the back of computers, making it difficult for users to reach the port.\n \n \n For connected authenticators, the limited availability of a direct computer interface (e.g., USB port) could pose usability difficulties. For example, laptop computers often have a limited number of USB ports, which may force users to unplug other USB peripherals to use the authenticator.\n \n\n\nMulti-Factor Cryptographic Authenticator\nTypical Usage\n\nTo authenticate, users prove possession and control of the cryptographic key and control of the activation factor. Usability considerations for the additional factor also apply (see Sec. 8.2.1 for passwords and Sec. 
8.4 for biometrics used as activation factors).\n\nUsability considerations for typical usage include:\n\n\n \n Give cryptographic keys appropriately descriptive names that are meaningful to users so that they can recognize and recall which cryptographic key to use for which authentication task. This prevents users from having to deal with multiple similarly and ambiguously named cryptographic keys. Selecting from multiple cryptographic keys on smaller mobile devices may be particularly problematic if the names of the cryptographic keys are shortened due to reduced screen sizes.\n \n Do not require users to keep external multi-factor cryptographic authenticators connected following authentication. Users may forget to disconnect the authenticator when they are done with it (e.g., forgetting a smartcard in the smartcard reader and walking away from the computer).\n \n Users need to be informed about whether the authenticator is required to stay connected or not.\n \n \n For connected authenticators, the limited availability of a direct computer interface (e.g., USB port) could pose usability difficulties. For example, laptop computers often have a limited number of USB ports, which may force users to unplug other USB peripherals to use the authenticator.\n\n\nSummary of Usability Considerations\nFigure 4 summarizes the usability considerations for typical usage and intermittent events for each authenticator type. Many of the usability considerations for typical usage apply to most of the authenticator types, as demonstrated in the rows. The table highlights common and divergent usability characteristics across the authenticator types. Each column allows readers to easily identify the usability attributes to address for each authenticator. Depending on the users’ goals and context of use, certain attributes may be valued over others. 
Whenever possible, provide alternative authenticator types, and allow users to choose between them.\n\nMulti-factor authenticators (e.g., multi-factor OTPs and multi-factor cryptographic) also inherit their activation factor’s usability considerations. As biometrics are only allowed as an activation factor in multi-factor authentication solutions, usability considerations for biometrics are not included in Fig. 4 and are discussed in Sec. 8.4.\n\nFig. 4 Usability considerations by authenticator type\n\n\n\nUsability Considerations for Biometrics\nThis section provides a high-level overview of general usability considerations for biometrics. A more detailed discussion of biometric usability can be found in Usability & Biometrics, Ensuring Successful Biometric Systems [UsabilityBiometrics].\n\nUser familiarity and practice with the device improve performance for all modalities. Device affordances (i.e., properties of a device that allow a user to perform an action), feedback, and clear instructions are critical to a user’s success with the biometric device. For example, provide clear instructions on the required actions for liveness detection. Ideally, users can select the modality that they are most comfortable with for their second authentication factor. Various user populations may be more comfortable with, familiar with, and accepting of some biometric modalities than others. Additionally, when biometrics are used as an activation factor, provide clear, meaningful feedback on the number of remaining allowed attempts. 
For example, for rate limiting (i.e., throttling), inform users of the time period they have to wait until their next attempt.\n\nTypical Usage\n\nThe three biometric modalities that are most commonly used for authentication are fingerprint, face, and iris.\n\n\n Fingerprint usability considerations:\n \n Users have to remember which fingers they used for initial enrollment.\n The amount of moisture on the finger affects the sensor’s ability for successful capture.\n Additional factors that influence fingerprint capture quality include age, gender, and occupation (e.g., users who handle chemicals or work extensively with their hands may have degraded friction ridges).\n \n \n Face usability considerations:\n \n Users have to remember whether they wore any artifacts (e.g., glasses) during enrollment, which affects facial recognition accuracy.\n Differences in environmental lighting conditions may affect facial recognition accuracy.\n Facial expressions affect facial recognition accuracy (e.g., smiling versus a neutral expression).\n Facial poses affect facial recognition accuracy (e.g., looking down or away from the camera).\n \n \n Iris usability considerations:\n \n Wearing colored contacts may affect iris recognition accuracy.\n Users who have had eye surgery may need to re-enroll after surgery.\n Differences in environmental lighting conditions may affect iris recognition accuracy, especially for certain iris colors.\n \n \n\n\nIntermittent Events\n\nSince biometrics are only permitted as a second factor for multi-factor authentication, usability considerations for intermittent events with the primary factor still apply. 
Intermittent events that may affect recognition accuracy using biometrics include:\n\n\n Degraded fingerprints or finger injuries\n Dirty, dry, or wet hands; wearing gloves; or wearing a mask\n Natural facial or weight changes over time\n Eye surgery\n\n\nAcross all biometric modalities, usability considerations for intermittent events include:\n\n\n An alternative authentication method must be readily available and clearly communicated. Users should never be required to attempt biometric authentication and should be permitted to use a password as an alternative second factor.\n There should be provisions for technical assistance:\n \n Clearly communicate information on how and where to acquire technical assistance. For example, provide users with a link to an online self-service feature or a phone number for help desk support. Ideally, provide sufficient information to enable users to recover from intermittent events on their own without outside intervention.\n Inform users of factors that may affect the sensitivity of the biometric sensor (e.g., cleanliness of the sensor).\n \n \n\n"
} ,
{
"title" : "Equity Considerations",
"category" : "",
"tags" : "",
"url" : "/800-63-4/sp800-63b/equity/",
"date" : "2024-08-28 20:39:12 -0500",
"content" : "Equity Considerations\n\nThis section is informative.\n\nAccurate and equitable authentication service is an essential element of a digital identity system. While the accuracy aspects of authentication are primarily the subject of the security requirements found elsewhere in this document, the ability for all subscribers to reliably authenticate is required to provide equitable access to government services, as specified in Executive Order 13985, Advancing Racial Equity and Support for Underserved Communities Through the Federal Government [EO13985]. When assessing equity risks, a CSP should consider the overall user population for its authentication service. The CSP should further identify groups of users within the population whose shared characteristics may cause them to be subject to inequitable access, treatment, or outcomes when using that service. Section 8 describes considerations to help ensure the overall usability and equity for all persons who use authentication services.\n\nA primary aspect of equity is that the CSP needs to anticipate the needs of its subscriber population and offer authenticator options that are suitable for that population. 
Some examples of authenticator suitability problems are:\n\n\n SMS-based out-of-band authentication may not be usable for subscribers in rural areas without mobile phone service.\n OTP authenticators may be difficult for subscribers with vision issues to read.\n Out-of-band authentication secrets sent via a voice telephone call may be difficult for subscribers with hearing difficulties to understand.\n Facial matching algorithms may not match the facial characteristics of all ethnicities or those wearing glasses equally well.\n Some subscribers may be missing fingers, have degraded fingerprints (e.g., from working with chemicals or extensively using their hands), or have dexterity problems that interfere with fingerprint collection.\n The cost of hardware-based authenticators may be beyond the means of some subscribers.\n Accurate manual entry of passwords may be difficult for subscribers with mobility and dexterity-related physical disabilities.\n Certain authenticator types may be challenging for subscribers with intellectual, developmental, learning, or neurocognitive difficulties.\n Lower-income subscribers are less likely to have up-to-date devices that are required by some authentication modes.\n Lower-income subscribers may be limited to the use of a smartphone and, therefore, may be unable to use USB-connected authenticators.\n Subscribers with less technological skill may need help to enter OTP codes from one device to another.\n Older subscribers may need help with the small form factor of some authenticators.\n Subscribers experiencing addiction, sexual exploitation, or other trauma may struggle to remember passwords or activation secrets.\n\n\nWhile CSPs are required to mitigate the common and expected problems in this area, it is not feasible to anticipate all potential equity problems, which will vary for different applications. 
Accordingly, CSPs need to provide mechanisms for subscribers to report inequitable authentication requirements and advise them on potential alternative authentication strategies.\n\nThis guideline recommends the binding of additional authenticators to minimize the need for account recovery (see Sec. 4.2). However, a subscriber may need help to purchase a second hardware-based authenticator as a backup. This inequity can be addressed by making inexpensive authenticators such as look-up secrets (see Sec. 3.1.2) available for use in the event of an authenticator failure or loss.\n\nCSPs need to be responsive to subscribers who experience authentication challenges that cannot be solved using the authenticators that they currently support. This might involve supporting a new authenticator type or allowing federated authentication through a trusted service that meets the subscriber’s needs.\n"
} ,
{
"title" : "Strength of Passwords",
"category" : "",
"tags" : "",
"url" : "/800-63-4/sp800-63b/passwords/",
"date" : "2024-08-28 20:39:12 -0500",
"content" : "Strength of Passwords\n\nThis appendix is informative.\n\nThis appendix uses the word “password” for ease of discussion. Where used, it should be interpreted to include passphrases, PINs, and passwords.\n\nIntroduction\n\nPasswords are a widely used form of authentication despite concerns about their use from both a usability and security standpoint [Persistence]. Humans have a limited ability to memorize complex, arbitrary secrets, so they often choose passwords that can be easily guessed. To address the resultant security concerns, online services have introduced rules to increase the complexity of these passwords. The most notable form is composition rules, which require users to choose passwords that are constructed using a mix of character types (e.g., at least one digit, uppercase letter, and symbol). However, analyses of breached password databases reveal that the benefit of such rules is less significant than initially thought [Policies], and the impacts on usability and memorability are severe.\n\nThe complexity of user-chosen passwords has often been characterized using the information theory concept of entropy [Shannon]. While entropy can be readily calculated for data with deterministic distribution functions, estimating the entropy for user-chosen passwords is challenging, and past efforts to do so have not been particularly accurate. For this reason, a different and somewhat more straightforward approach based primarily on password length is presented herein.\n\nMany attacks associated with password use are not affected by password complexity and length. Keystroke logging, phishing, and social engineering attacks are equally effective on lengthy and complex passwords as they are on simple ones. These attacks are outside of the scope of this Appendix.\n\nLength\n\nPassword length is a primary factor in characterizing password strength [Strength] [Composition]. 
Passwords that are too short yield to brute-force attacks and dictionary attacks. The minimum password length required depends on the threat model being addressed. Online attacks in which the attacker attempts to log in by guessing the password can be mitigated by limiting the permitted login attempt rate. To prevent an attacker (or a persistent claimant with poor typing skills) from quickly inflicting a denial-of-service attack on the subscriber by making many incorrect guesses, passwords need to be complex enough that a reasonable number of attempts can be permitted with a low probability of a successful guess, and rate limiting can be applied before there is a significant chance of a successful guess.\n\nOffline attacks are sometimes possible when the attacker obtains one or more hashed passwords through a database breach. The ability of the attacker to determine one or more users’ passwords depends on how the password is stored. Commonly, passwords are salted with a random value and hashed, preferably using a computationally expensive algorithm. Even with such measures, the current ability of attackers to compute many billions of hashes per second in an offline environment that is not subject to rate limiting requires passwords to be orders of magnitude more complex than those expected to resist only online attacks.\n\nUsers should be encouraged to make their passwords as lengthy as they want, within reason. Since the size of a hashed password is independent of its length, there is no reason to prohibit the use of lengthy passwords (or passphrases) if the user wishes. Extremely long passwords (perhaps megabytes long) could require excessive processing time to hash, so it is reasonable to have some limit.\n\nComplexity\n\nComposition rules are commonly used in an attempt to increase the difficulty of guessing user-chosen passwords. However, research has shown that users respond in very predictable ways to the requirements imposed by composition rules [Policies]. 
For example, a user who might have chosen “password” as their password would be relatively likely to choose “Password1” if required to include an uppercase letter and a number or “Password1!” if a symbol is also required.\n\nUsers also express frustration when online services reject their attempts to create complex passwords. Many services reject passwords with spaces and various special characters. Characters that are not accepted are sometimes the result of an effort to avoid attacks that depend on those characters (e.g., SQL injection). However, an unhashed password would not be sent intact to a database, so such precautions are unnecessary. Users should also be able to include space characters to allow the use of phrases. Space characters add little to the complexity of passwords and may introduce usability issues (e.g., the undetected use of two spaces rather than one), so removing repeated spaces in typed passwords may be beneficial if initial verification fails.\n\nSince users’ password choices are often predictable, attackers are likely to guess passwords that have previously proven successful. These include dictionary words and passwords from previous breaches, such as the “Password1!” example above. For this reason, passwords chosen by users should be compared against a blocklist of unacceptable passwords. This list should include passwords from previous breach corpuses, dictionary words used as passwords, and specific words (e.g., the name of the service itself) that users are likely to choose. Since a minimum length requirement will also govern the user’s choice of passwords, this dictionary only needs to include entries that meet that requirement. As noted in Sec. 3.1.1.2, it is not beneficial for the blocklist to be excessively large or comprehensive, since its primary purpose is to prevent the use of very common passwords that might be guessed in an online attack before throttling restrictions take effect. 
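The blocklist screening described above can be sketched as follows. This is an illustrative sketch, not normative text from this guideline: the helper names, the minimum length, and the tiny stand-in blocklist are all assumptions.

```python
# Illustrative password screening: a minimum length check plus a blocklist of
# very common passwords and context-specific words. All data here is hypothetical.
MIN_LENGTH = 8  # assumed policy minimum; the verifier sets the real value

# Stand-in for a blocklist built from breach corpuses and dictionary words.
BLOCKLIST = {"password1!", "qwertyuiop", "letmein123"}

def screen_password(candidate: str, service_name: str = "example-service") -> bool:
    """Return True if the candidate password is acceptable."""
    if len(candidate) < MIN_LENGTH:
        return False
    lowered = candidate.lower()
    # Reject very common passwords that could be guessed online
    # before throttling restrictions take effect.
    if lowered in BLOCKLIST:
        return False
    # Reject context-specific words, such as the name of the service itself.
    if service_name.lower() in lowered:
        return False
    return True
```

Because the minimum length already excludes shorter strings, the real blocklist would only need entries that meet that length requirement.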
An excessively large blocklist will likely frustrate users who attempt to choose a memorable password.\n\nHighly complex passwords introduce a new potential vulnerability: they are less likely to be memorable and more likely to be written down or stored electronically in an unsafe manner. While these practices are not necessarily vulnerable, some methods of recording such secrets will be. This is an additional motivation for not requiring excessively long or complex passwords.\n\nCentral vs. Local Verification\n\nWhile passwords that are used as a separate authentication factor are often centrally verified by the CSP’s verifier, those that are used as an activation factor for a multi-factor authenticator are either verified locally by the authenticator or used to derive the authenticator output, which will be incorrect if the wrong activation factor is used. Both of these situations are referred to as “local verification.”\n\nThe attack surfaces and vulnerabilities for central and local verification are very different. Accordingly, the requirements for centrally verified passwords differ from those verified locally. Centrally verified passwords require the verifier (i.e., an online resource) to store salted and iteratively hashed verification secrets for all of the subscribers’ passwords. Although the salting and hashing process increases the computational effort to determine the passwords from the hashes, the verifier is an attractive target for attackers, particularly those interested in compromising an arbitrary subscriber rather than a specific one.\n\nLocal verifiers do not have the same concerns with large-scale attacks on a central online verifier but depend to a greater extent on the physical security of the authenticator and the integrity of its associated endpoint. 
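The salted, iterated hashing that central verifiers use to store verification secrets can be sketched with PBKDF2 from the Python standard library. This is a sketch only; the hash function, iteration count, and salt length shown are illustrative assumptions, not recommendations from this document.

```python
import hashlib
import hmac
import os

# Illustrative parameters; real deployments choose these per current guidance.
ITERATIONS = 600_000
SALT_BYTES = 16

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, hash) for storage; the password itself is never stored."""
    salt = os.urandom(SALT_BYTES)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    """Recompute the hash with the stored salt and compare in constant time."""
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(digest, stored)
```

The per-user random salt prevents precomputed-table attacks, and the high iteration count raises the cost of each offline guess against a breached database.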
To the extent that the authenticator stores the activation factor, that factor must be protected against physical and side-channel (e.g., power and timing analysis) attacks on the authenticator. When the activation factor is entered through the associated endpoint, the endpoint needs to be free of malware (e.g., key-logging software). Since such threats are less dependent on the length and complexity of the password, these requirements are relaxed for local verification.\n\nOnline password-guessing attacks are a similar threat to centrally and locally verified passwords. Throttling, which is the primary defense against online attacks, can be particularly challenging for local verifiers because of the limited ability of some authenticators to securely store information about unsuccessful attempts. Throttling can be performed by either keeping a count of invalid attempts in the authenticator or generating an authenticator output rejected by the CSP verifier, which does the throttling. In this case, the invalid outputs must not be evident to the attacker, who could otherwise make offline attempts until a valid-looking authenticator output appears.\n\nSummary\n\nLength and complexity requirements beyond those recommended here significantly increase the difficulty of using passwords and increase user frustration. As a result, users often work around these restrictions counterproductively. Other mitigations (e.g., blocklists, secure hashed storage, machine-generated random passwords, and rate limiting) are more effective at preventing modern brute-force attacks, so no additional complexity requirements are imposed.\n"
} ,
{
"title" : "Syncable Authenticators",
"category" : "",
"tags" : "",
"url" : "/800-63-4/sp800-63b/syncable/",
"date" : "2024-08-28 20:39:12 -0500",
"content" : "Syncable Authenticators\n\nThis appendix is normative.\n\nIntroduction\n\nThe ability to “sync” authenticators — specifically to copy (i.e., clone) their authentication secrets to the cloud and thence to additional authenticators — is a relatively new development in authentication. This appendix provides additional guidelines on the use of syncable authenticators.\n\nCloning of Authentication Keys\n\nIn some cases, the secret keys associated with multi-factor cryptographic authenticators (e.g., those based on the WebAuthn standard [WebAuthn]) may be stored in a sync fabric. This allows the keys to be backed up and transferred to other devices. The following requirements apply to keys managed in this manner:\n\n\n All keys SHALL be generated using approved cryptography.\n Private keys that are cloned or exported from a device SHALL only be stored in an encrypted form.\n All authentication transactions SHALL perform private-key operations on the local device using cryptographic keys that are generated on-device or recovered from the sync fabric (e.g., in cloud storage).\n Private keys stored in cloud-based accounts SHALL be protected by access control mechanisms such that only the authenticated user can access their private keys in the sync fabric.\n User access to private keys in the sync fabric SHALL be protected by AAL2-equivalent MFA to preserve the integrity of the authentication protocols using the synced keys.\n These general requirements and any other agency-specific requirements for using syncable authenticators SHALL be documented and communicated, including on public-facing websites and digital service policies, where applicable.\n\n\nAdditional requirements for federal enterprise1 use of syncable authenticators:\n\n\n Federal enterprise private keys (i.e., federal keys) SHALL be stored in sync fabrics that have achieved FISMA Moderate protections or equivalent.\n Devices (e.g., mobile phones, laptops, tablets) that generate, store, and sync 
authenticators containing federal enterprise private keys SHALL be protected by mobile device management software or other device configuration controls that prevent the syncing or sharing of keys to unauthorized devices or sync fabrics.\n Access to the sync fabric SHALL be controlled by agency-managed accounts (e.g., a central identity and access management solution or platform-based managed account) to maintain enterprise control over the private key’s life cycle.\n Authenticators that generate private keys SHOULD support attestation features that can be used to verify the capabilities and sources of the authenticator (e.g., enterprise attestation).\n\n\nThese controls specifically support syncing and should be considered additive to the existing multi-factor cryptographic authenticator requirements and AAL2 requirements, including [FIPS140] validation.\n\n\n Syncing authentication keys inherently means that the key can be exported. Authentication at AAL2 may be supported subject to the above requirements. However, syncing violates the non-exportability requirements of AAL3. Similar protocols using keys not stored in an exportable manner that meet the other requirements of AAL3 may be used.\n\n\nImplementation Requirements\n\nMany syncable authenticators are built upon W3C’s [WebAuthn] specification, which provides a common data structure, a challenge-response cryptographic protocol, and an API for leveraging public-key credentials. The specification is flexible and adaptive, meaning that not all deployments of WebAuthn credentials will meet the requirements of federal agencies for implementation.\n\nThe specification has a series of flags that the RP application can request from the authenticator to provide additional context for the authentication event and determine whether it meets the RP’s access policies. 
This section describes certain flags in the WebAuthn specification that federal agencies acting as RPs should understand and interrogate when building their syncable authenticator implementations to align with NIST AAL2 guidelines.\n\nThe following requirements apply to WebAuthn Level 3 flags:\n\n\n User Present (UP)\n The User Present flag indicates that a “presence” test was used to confirm that the user has interacted with the authenticator (e.g., tapping a hardware token inserted into a USB port). This supports authentication intent, as described in Sec. 3.2.8. Verifiers SHOULD confirm that the User Present flag has been set.\n User Verified (UV)\n The User Verified flag indicates that the authenticator has locally authenticated the user using one of the available “user verification” methods. Verifiers SHALL indicate that UV is preferred and SHALL inspect responses to confirm the value of the UV flag. This indicates whether the authenticator can be treated as a multi-factor cryptographic authenticator. If the user is not verified, agencies SHALL treat the authenticator as a single-factor cryptographic authenticator. A further extension to the WebAuthn Level 3 specification (see Sec. 10.3 of [WebAuthn]) provides additional data on verification methods if agencies seek to gain context on the local authentication event.\n Backup Eligible\n The Backup Eligible flag indicates whether the authenticator can be synced to a different device (i.e., whether the key can be stored elsewhere). It is important to note that just because an authenticator can be synced does not mean that it has been synced. Verifiers MAY use this flag to establish policies that restrict the use of syncable authenticators. This flag is necessary to distinguish authenticators that are device-bound from those that may be cloned to more than one device.\n Backup State\n The Backup State flag indicates whether an authenticator has been synced to a different device. 
Verifiers MAY use this flag to establish restrictions on authenticators that are synced to other devices. Agencies SHOULD NOT condition acceptance based on this flag for public-facing applications due to user experience concerns. This flag MAY be used for enterprise applications to support the restriction of syncable authenticators for specific applications.\n\n\nIn addition to the flags specified above, agencies may wish to gain additional information on the origins and capabilities of the syncable authenticators that they choose to implement and accept. Within the context of FIDO2 WebAuthn, some authenticators support attestation features that can be used to determine the capabilities and manufacturers of specific authenticators. For enterprise use cases, agencies SHOULD implement attestation capabilities based on the functionality offered by their platform providers. This would take the form of an enterprise attestation in which the RP requests identifying information about the authenticator.\n\nAttestations SHOULD NOT block the use of syncable authenticators for broad public-facing applications. Due to their limited availability in consumer products, requiring their use is likely to divert users to less secure authentication options that are vulnerable to phishing (e.g., PSTN-based out-of-band authentication). While authentication transaction metadata, such as the User Verified flag indicating the use of a local activation factor, is available in WebAuthn responses, attestation can provide stronger assurance of the characteristics of the authenticator used in a transaction. RPs MAY use attestation to determine the level of confidence they have in a syncable authenticator.\n\nEven if the RP requests flag and attestation data, the authenticator may not return all of the requested information, or it may return information that is inconsistent with the expected response mandated for access to a resource. 
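The flags discussed above are carried in a single byte of the WebAuthn authenticator data, located immediately after the 32-byte RP ID hash; per the WebAuthn specification, UP is bit 0, UV is bit 2, Backup Eligible is bit 3, and Backup State is bit 4. A minimal sketch of inspecting them (the function names and the policy check are illustrative assumptions, not part of this guideline):

```python
# Sketch of extracting the UP, UV, BE, and BS flags from WebAuthn authenticator
# data. Bit positions follow the WebAuthn spec: the flags byte sits at offset 32,
# after the 32-byte rpIdHash.
def parse_flags(authenticator_data: bytes) -> dict[str, bool]:
    flags = authenticator_data[32]
    return {
        "user_present": bool(flags & 0x01),    # UP: presence test performed
        "user_verified": bool(flags & 0x04),   # UV: local user verification done
        "backup_eligible": bool(flags & 0x08), # BE: key may be synced elsewhere
        "backup_state": bool(flags & 0x10),    # BS: key has been synced
    }

# Hypothetical policy check: treat the response as multi-factor only when both
# the presence test and local user verification were performed.
def aal2_multifactor(authenticator_data: bytes) -> bool:
    f = parse_flags(authenticator_data)
    return f["user_present"] and f["user_verified"]
```

An RP could combine `backup_eligible` and `backup_state` with the attestation data described above to distinguish device-bound from syncable authenticators when its access policy requires it.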
Agencies SHALL evaluate the use cases for syncable authenticators and determine the appropriate access policy decisions that they intend to make based on the returned information.\n\nSharing\n\nCybersecurity guidelines have historically cautioned against sharing authenticators between users, expecting different users to maintain their own unique authenticators. Despite this guidance, authenticator and password sharing occurs within some user groups and applications to allow individuals to share access to a digital account.\n\nAs indicated in Table 5, some syncable authenticator implementations have embraced this user behavior and established methods for sharing authentication keys between different users. Further, some implementations actively encourage sharing syncable authenticators as a convenient and more secure alternative to sharing passwords for common services.\n\nFor enterprise use cases, concerns over sharing keys can be effectively mitigated using device management techniques that limit the ability for keys to be moved off of approved devices or sync fabrics. However, similar mitigations are not currently available for public-facing use cases, leaving RPs dependent on the sharing models adopted by syncable authenticator providers. Owners of public-facing applications should be aware of the risks associated with shared authenticators. When interacting with the public, agencies have limited visibility into which specific authenticators are being employed by their users and should assume that all syncable authenticators may be subject to sharing. While many sharing models have substantial controls that minimize risks (e.g., requiring close proximity between devices to allow sharing), other implementations are less restrictive.\n\nThe risk of sharing posed by this new class of authenticators is not unique. It applies to all authenticator types, some of which are weaker than syncable authenticators. 
Any authenticator can be shared by a user who is determined to share it. Users can actively share passwords, OTPs, out-of-band authenticators, and even push authentication events that allow a designee (whether formal or not) to authenticate on behalf of an end user.\n\nAgencies determine which authenticators they will accept for their applications based on the specific risks, threats, and usability considerations they face. Syncable authenticators may be offered as a new option for applications that seek to implement up to AAL2. The trade-offs of this technology should be well-balanced based on their expected outcomes for security, privacy, equity, and usability.\n\nExample\n\nA common use of syncable authenticators is in an AAL2 authentication transaction. The following items summarize how WebAuthn syncable authenticators satisfy various aspects of AAL2 requirements:\n\n\n Phishing resistance (recommended; required for federal enterprise)\n Achieved: Properly configured syncable authenticators create a unique public-private key pair whose use is constrained to the domain in which it was created (i.e., the key can only be used with a specific website or RP). This prevents a falsified web page from being able to capture and reuse an authenticator output.\n Replay resistance (required)\n Achieved: Syncable authenticators provide replay resistance (i.e., prevention of reuse in future transactions) through a random nonce that is incorporated into each authentication transaction.\n Authentication intent (required)\n Achieved: Syncable authenticators require users to input an activation secret to initiate the cryptographic authentication protocol. This serves as authentication intent, as the event cannot proceed without the user’s active participation.\n Multi-factor (required)\n Achieved: The user verified (UV) flag value indicates whether a local authentication mechanism (i.e., an activation factor) was used to complete the transaction. 
Without user verification, the verifier prompts for an additional authentication factor as part of the transaction.\n\n\nSecurity Considerations\n\nSyncable authenticators present distinct threats and challenges that agencies should evaluate before implementation or deployment, as shown in Table 4.\n\n\\clearpage\n\n\nTable 4. Syncable authenticator threats, challenges, and mitigations\n\n\n \n \n Threat or Challenge\n Description\n Mitigations\n \n \n \n \n Unauthorized key use or loss of control\n Some syncable authenticator deployments support sharing private keys to devices that belong to other users who can then misuse the key\n Enforce enterprise device management features or managed profiles that prevent synced keys from being shared.\n \n \n \n \n Notify users of key-sharing events through all available notification channels.\n \n \n \n \n Provide mechanisms for users to view keys, key statuses, and whether/where keys have been shared.\n \n \n \n \n Educate users about the risks of unauthorized key use through existing awareness and training mechanisms.\n \n \n Sync fabric compromise\n To support key syncing, most implementations clone keys to a sync fabric (i.e., a cloud-based service connected to multiple devices associated with an account).\n Store only encrypted key material.\n \n \n \n \n Implement syncing fabric access controls that prevent anyone other than the authenticated user from accessing the private key.\n \n \n \n \n Evaluate cloud services for baseline security features (e.g., FISMA Moderate protections or comparable).\n \n \n \n \n Leverage hardware security modules to protect encrypted keys.\n \n \n Unauthorized access to sync fabric and recovery\n Synced keys are accessible via cloud-based account recovery processes, which represent a potential weakness to the authenticators.\n Implement authentication recovery processes that are consistent with SP 800-63B.\n \n \n \n \n Restrict recovery capabilities for federal enterprise keys through 
device management or managed account capabilities.\n \n \n \n \n Bind multiple authenticators at AAL2 and above to support recovery.\n \n \n \n \n Require AAL2 authentication to add any new authenticators for user access to the sync fabric.\n \n \n \n \n Use only as a derived authenticator in federal enterprise scenarios [SP800-157].\n \n \n \n \n Notify the user of any recovery activities.\n \n \n \n \n Leverage a user-controlled secret (i.e., something not known to the sync fabric provider) to encrypt and recover keys.\n \n \n Revocation\n Since syncable authenticators use RP-specific keys, the ability to centrally revoke access based on those keys is challenging. For example, with traditional PKI, CRLs can be used centrally to revoke access. A similar process is not available for syncable authenticators (or any FIDO WebAuthn-based credentials).\n Implement a central identity management (IDM) account for users to manage authenticators and remove them from the “home agency” account if they are compromised or expired.\n \n \n \n \n Leverage SSO and federation to limit the number of RP-specific keys that will need to be revoked in an incident.\n \n \n \n \n Establish policies and tools to request that users periodically review keys for validity and currency.\n \n \n\n\n \n \n With respect to these requirements, federal enterprise systems and keys include what would be considered in scope for PIV guidance, such as government contractors, government employees, and mission partners. It does not include government-to-consumer or public-facing use cases. ↩\n \n \n\n"
} ,
{
"title" : "List of Symbols, Abbreviations, and Acronyms",
"category" : "",
"tags" : "",
"url" : "/800-63-4/sp800-63b/abbreviations/",
"date" : "2024-08-28 20:39:12 -0500",
"content" : "List of Symbols, Abbreviations, and Acronyms\n\n\n AAL\n Authentication Assurance Level\n CSP\n Credential Service Provider\n CSRF\n Cross-Site Request Forgery\n XSS\n Cross-Site Scripting\n DNS\n Domain Name System\n FEDRAMP\n Federal Risk and Authorization Management Program\n FMR\n False Match Rate\n FNMR\n False Non-Match Rate\n IAL\n Identity Assurance Level\n IdP\n Identity Provider\n KBA\n Knowledge-Based Authentication\n MAC\n Message Authentication Code\n NARA\n National Archives and Records Administration\n OTP\n One-Time Password\n PAD\n Presentation Attack Detection\n PIA\n Privacy Impact Assessment\n PII\n Personally Identifiable Information\n PIN\n Personal Identification Number\n PKI\n Public Key Infrastructure\n PSTN\n Public Switched Telephone Network\n RP\n Relying Party\n SAOP\n Senior Agency Official for Privacy\n SSL\n Secure Sockets Layer\n SMS\n Short Message Service\n SORN\n System of Records Notice\n TEE\n Trusted Execution Environment\n TLS\n Transport Layer Security\n TPM\n Trusted Platform Module\n VOIP\n Voice-Over-IP\n XSS\n Cross-Site Scripting\n\n"
} ,
{
"title" : "SP 800-63B",
"category" : "",
"tags" : "",
"url" : "/800-63-4/sp800-63b/abstract/",
"date" : "2024-08-28 20:39:12 -0500",
"content" : "\nABSTRACT\n\nThis guideline focuses on the authentication of subjects who interact with government information systems over networks to establish that a given claimant is a subscriber who has been previously authenticated. The result of the authentication process may be used locally by the system performing the authentication or may be asserted elsewhere in a federated identity system. This document defines technical requirements for each of the three authenticator assurance levels. The guidelines are not intended to constrain the development or use of standards outside of this purpose. This publication supersedes NIST Special Publication (SP) 800-63B.\n\nKeywords\n\nauthentication; authentication assurance; credential service provider; digital authentication; digital credentials; electronic authentication; electronic credentials; passwords.\n"
} ,
{
"title" : "Change Log",
"category" : "",
"tags" : "",
"url" : "/800-63-4/sp800-63b/changelog/",
"date" : "2024-08-28 20:39:12 -0500",
"content" : "Change Log\n\nThis appendix is informative. It provides an overview of the changes to SP 800-63B since its initial release.\n\n\n \n Throughout: Removed Purpose and Definitions and Abbreviations numbered sections and renumbered sections accordingly. Section numbers referenced below are the new section numbers.\n \n \n Throughout: Changed the name of memorized secrets to passwords.\n \n \n Section 3.1.3: Disallowed the comparison of secrets from primary and secondary channel for out-of-band authentication.\n \n \n Section 3.1.3.1: Removed the prohibition on the use of VoIP phone numbers for out-of-band authentication.\n \n \n Section 3.1.3.4: Recognized multi-factor out-of-band authenticators that require an activation factor.\n \n \n Section 3.1.4 and Sec. 3.1.5: Removed “devices” from the authenticator name to recognize OTP applications.\n \n \n Section 3.1.6 and Sec. 3.1.7: Removed “software” and “device” distinction from the authenticator name; these are now authenticator characteristics.\n \n \n Section 3.1.7.4 and Appendix B : Added requirements for syncable authenticators.\n \n \n Section 3.2.3: Updated biometric performance requirements and metrics and included a discussion of equity impacts.\n \n \n Section 3.2.5: Added a definition and updated requirements for phishing-resistant authenticators.\n \n \n Section 3.2.10: Established separate requirements for locally verified memorized secrets known as activation secrets.\n \n \n Section 3.2.11: Added requirements for authenticators that are connected via wireless technologies such as NFC and Bluetooth.\n \n \n Section 3.2.12: Centralized the requirements for random values used throughout the document.\n \n \n Section 3.2.13: Added a new section on requirements for the non-exportability of authenticator secrets.\n \n \n Removed verifier compromise resistance as a distinct named requirement because it is generally a characteristic of the chosen authenticator type.\n \n \n Section 4: Section renamed 
“Authenticator Event Management.”\n \n \n Section 4.1.1: Moved binding at enrollment to SP 800-63A.\n \n \n Section 4.1.2.1: Generalized binding an additional authenticator to all AALs.\n \n \n Section 4.1.2.2: Added requirements for binding authenticators that are not connected to an endpoint.\n \n \n Section 4.2: Revised the requirements and methods for account recovery.\n \n \n Section 4.6: Revised the requirements for notifications sent to subscribers.\n \n \n Section 5.1.1: Added requirements for browser cookies used for session maintenance.\n \n \n Section 5.2: Revised reauthentication requirements to define the overall structure of reauthentication here and specify timeout values in the AAL requirements.\n \n \n Section 5.3: Added guidelines for the use of session monitoring (continuous authentication).\n \n \n Section 9: Added a section on equity considerations.\n \n\n\n"
} ,
{
"title" : "Glossary",
"category" : "",
"tags" : "",
"url" : "/800-63-4/sp800-63b/glossary/",
"date" : "2024-08-28 20:39:12 -0500",
"content" : "Glossary\n\nA wide variety of terms are used in the realm of digital identity. While many definitions are consistent with earlier versions of SP 800-63, some have changed in this revision. Many of these terms lack a single, consistent definition, warranting careful attention to how the terms are defined here.\n\n\n account recovery\n The ability to regain ownership of a subscriber account and its associated information and privileges.\n activation\n The process of inputting an activation factor into a multi-factor authenticator to enable its use for authentication.\n activation factor\n An additional authentication factor that is used to enable successful authentication with a multi-factor authenticator.\n activation secret\n A password that is used locally as an activation factor for a multi-factor authenticator.\n approved cryptography\n An encryption algorithm, hash function, random bit generator, or similar technique that is Federal Information Processing Standard (FIPS)-approved or NIST-recommended. Approved algorithms and techniques are either specified or adopted in a FIPS or NIST recommendation.\n assertion\n A statement from an IdP to an RP that contains information about an authentication event for a subscriber. Assertions can also contain identity attributes for the subscriber.\n asymmetric keys\n Two related keys, comprised of a public key and a private key, that are used to perform complementary operations such as encryption and decryption or signature verification and generation.\n attestation\n Information conveyed to the CSP, generally at the time that an authenticator is bound, describing the characteristics of a connected authenticator or the endpoint involved in an authentication operation.\n attribute\n A quality or characteristic ascribed to someone or something. 
An identity attribute is an attribute about the identity of a subscriber.\n authenticate\n See authentication.\n authenticated protected channel\n An encrypted communication channel that uses approved cryptography where the connection initiator (client) has authenticated the recipient (server). Authenticated protected channels are encrypted to provide confidentiality and protection against active intermediaries and are frequently used in the user authentication process. Transport Layer Security (TLS) and Datagram Transport Layer Security (DTLS) [RFC9325] are examples of authenticated protected channels in which the certificate presented by the recipient is verified by the initiator. Unless otherwise specified, authenticated protected channels do not require the server to authenticate the client. Authentication of the server is often accomplished through a certificate chain that leads to a trusted root rather than individually with each server.\n authenticated session\n See protected session.\n authentication\n The process by which a claimant proves possession and control of one or more authenticators bound to a subscriber account to demonstrate that they are the subscriber associated with that account.\n Authentication Assurance Level (AAL)\n A category that describes the strength of the authentication process.\n authentication factor\n The three types of authentication factors are something you know, something you have, and something you are. Every authenticator has one or more authentication factors.\n authentication intent\n The process of confirming the claimant’s intent to authenticate or reauthenticate by requiring user intervention in the authentication flow. Some authenticators (e.g., OTPs) establish authentication intent as part of their operation. Others require a specific step, such as pressing a button, to establish intent. 
Authentication intent is a countermeasure against use by malware at the endpoint as a proxy for authenticating an attacker without the subscriber’s knowledge.\n authentication protocol\n A defined sequence of messages between a claimant and a verifier that demonstrates that the claimant has possession and control of one or more valid authenticators to establish their identity, and, optionally, demonstrates that the claimant is communicating with the intended verifier.\n authentication secret\n A generic term for any secret value that an attacker could use to impersonate the subscriber in an authentication protocol.\n\n These are further divided into short-term authentication secrets, which are only useful to an attacker for a limited period of time, and long-term authentication secrets, which allow an attacker to impersonate the subscriber until they are manually reset. The authenticator secret is the canonical example of a long-term authentication secret, while the authenticator output — if it is different from the authenticator secret — is usually a short-term authentication secret.\n \n authenticator\n Something that the subscriber possesses and controls (e.g., a cryptographic module or password) and that is used to authenticate a claimant’s identity. See authenticator type and multi-factor authenticator.\n authenticator binding\n The establishment of an association between a specific authenticator and a subscriber account that allows the authenticator to be used to authenticate for that subscriber account, possibly in conjunction with other authenticators.\n authenticator output\n The output value generated by an authenticator. The ability to generate valid authenticator outputs on demand proves that the claimant possesses and controls the authenticator. 
Protocol messages sent to the verifier depend on the authenticator output, but they may or may not explicitly contain it.\n authenticator secret\n The secret value contained within an authenticator.\n authenticator type\n A category of authenticators with common characteristics, such as the types of authentication factors they provide and the mechanisms by which they operate.\n authenticity\n The property that data originated from its purported source.\n authorize\n A decision to grant access, typically automated by evaluating a subject’s attributes.\n biometric sample\n An analog or digital representation of biometric characteristics prior to biometric feature extraction, such as a record that contains a fingerprint image.\n biometrics\n Automated recognition of individuals based on their biological or behavioral characteristics. Biological characteristics include but are not limited to fingerprints, palm prints, facial features, iris and retina patterns, voiceprints, and vein patterns. Behavioral characteristics include but are not limited to keystrokes, angle of holding a smart phone, screen pressure, typing speed, mouse or mobile phone movements, and gyroscope position.\n blocklist\n A documented list of specific elements that are blocked, per policy decision. This concept has historically been known as a blacklist.\n claimant\n A subject whose identity is to be verified using one or more authentication protocols.\n credential\n An object or data structure that authoritatively binds an identity — via an identifier — and (optionally) additional attributes, to at least one authenticator possessed and controlled by a subscriber.\n\n A credential is issued, stored, and maintained by the CSP. 
Copies of information from the credential can be possessed by the subscriber, typically in the form of one or more digital certificates that are often contained in an authenticator along with their associated private keys.\n \n credential service provider (CSP)\n A trusted entity whose functions include identity proofing applicants to the identity service and registering authenticators to subscriber accounts. A CSP may be an independent third party.\n\n\n\\clearpage\n\n\n\n cross-site request forgery (CSRF)\n An attack in which a subscriber who is currently authenticated to an RP and connected through a secure session browses an attacker’s website, causing the subscriber to unknowingly invoke unwanted actions at the RP.\n\n For example, if a bank website is vulnerable to a CSRF attack, it may be possible for a subscriber to unintentionally authorize a large money transfer by clicking on a malicious link in an email while a connection to the bank is open in another browser window.\n \n cross-site scripting (XSS)\n A vulnerability that allows attackers to inject malicious code into an otherwise benign website. These scripts acquire the permissions of scripts generated by the target website to compromise the confidentiality and integrity of data transfers between the website and clients. Websites are vulnerable if they display user-supplied data from requests or forms without sanitizing the data so that it is not executable.\n cryptographic authenticator\n An authenticator that proves possession of an authentication secret through direct communication with a verifier through a cryptographic authentication protocol.\n cryptographic key\n A value used to control cryptographic operations, such as decryption, encryption, signature generation, or signature verification. For the purposes of these guidelines, key requirements shall meet the minimum requirements stated in Table 2 of [SP800-57Part1]. 
See asymmetric keys or symmetric keys.\n cryptographic module\n A set of hardware, software, or firmware that implements approved security functions including cryptographic algorithms and key generation.\n digital authentication\n The process of establishing confidence in user identities that are digitally presented to a system. In previous editions of SP 800-63, this was referred to as electronic authentication.\n digital identity\n An attribute or set of attributes that uniquely describes a subject within a given context.\n digital signature\n An asymmetric key operation in which the private key is used to digitally sign data and the public key is used to verify the signature. Digital signatures provide authenticity protection, integrity protection, and non-repudiation support but not confidentiality or replay attack protection.\n digital transaction\n A discrete digital event between a user and a system that supports a business or programmatic purpose.\n electronic authentication (e-authentication)\n See digital authentication.\n endpoint\n Any device that is used to access a digital identity on a network, such as laptops, desktops, mobile phones, tablets, servers, Internet of Things devices, and virtual environments.\n enrollment\n The process through which a CSP/IdP provides a successfully identity-proofed applicant with a subscriber account and binds authenticators to grant persistent access.\n entropy\n The amount of uncertainty that an attacker faces to determine the value of a secret. Entropy is usually stated in bits. 
A value with n bits of entropy has the same degree of uncertainty as a uniformly distributed n-bit random value.\n equity\n The consistent and systematic fair, just, and impartial treatment of all individuals, including individuals who belong to underserved communities that have been denied such treatment, such as Black, Latino, and Indigenous and Native American persons, Asian Americans and Pacific Islanders, and other persons of color; members of religious minorities; lesbian, gay, bisexual, transgender, and queer (LGBTQ+) persons; persons with disabilities; persons who live in rural areas; and persons otherwise adversely affected by persistent poverty or inequality. [EO13985]\n factor\n See authentication factor\n\n\n\\clearpage\n\n\n\n Federal Information Processing Standard (FIPS)\n Under the Information Technology Management Reform Act (Public Law 104-106), the Secretary of Commerce approves the standards and guidelines that the National Institute of Standards and Technology (NIST) develops for federal computer systems. NIST issues these standards and guidelines as Federal Information Processing Standards (FIPS) for government-wide use. NIST develops FIPS when there are compelling federal government requirements, such as for security and interoperability, and there are no acceptable industry standards or solutions. See background information for more details.\n\n FIPS documents are available online on the FIPS home page: https://www.nist.gov/itl/fips.cfm\n \n federation\n A process that allows for the conveyance of identity and authentication information across a set of networked systems.\n hash function\n A function that maps a bit string of arbitrary length to a fixed-length bit string. 
Approved hash functions satisfy the following properties:\n\n \n \n One-way — It is computationally infeasible to find any input that maps to any pre-specified output.\n \n \n Collision-resistant — It is computationally infeasible to find any two distinct inputs that map to the same output.\n \n \n \n identifier\n A data object that is associated with a single, unique entity (e.g., individual, device, or session) within a given context and is never assigned to any other entity within that context.\n identity\n See digital identity\n Identity Assurance Level (IAL)\n A category that conveys the degree of confidence that the subject’s claimed identity is their real identity.\n identity proofing\n The processes used to collect, validate, and verify information about a subject in order to establish assurance in the subject’s claimed identity.\n identity provider (IdP)\n The party in a federation transaction that creates an assertion for the subscriber and transmits the assertion to the RP.\n identity resolution\n The process of collecting information about an applicant to uniquely distinguish an individual within the context of the population that the CSP serves.\n injection attack\n An attack in which an attacker supplies untrusted input to a program. In the context of federation, the attacker presents an untrusted assertion or assertion reference to the RP in order to create an authenticated session with the RP.\n manageability\n Providing the capability for the granular administration of personally identifiable information, including alteration, deletion, and selective disclosure. [NISTIR8062]\n memorized secret\n See password.\n message authentication code (MAC)\n A cryptographic checksum on data that uses a symmetric key to detect both accidental and intentional modifications of the data. 
MACs provide authenticity and integrity protection, but not non-repudiation protection.\n mobile code\n Executable code that is normally transferred from its source to another computer system for execution. This transfer is often through the network (e.g., JavaScript embedded in a web page) but may transfer through physical media as well.\n multi-factor authentication (MFA)\n An authentication system that requires more than one distinct type of authentication factor for successful authentication. MFA can be performed using a multi-factor authenticator or by combining single-factor authenticators that provide different types of factors.\n multi-factor authenticator\n An authenticator that provides more than one distinct authentication factor, such as a cryptographic authentication device with an integrated biometric sensor that is required to activate the device.\n network\n An open communications medium, typically the Internet, used to transport messages between the claimant and other parties. Unless otherwise stated, no assumptions are made about the network’s security; it is assumed to be open and subject to active (e.g., impersonation, session hijacking) and passive (e.g., eavesdropping) attacks at any point between the parties (e.g., claimant, verifier, CSP, RP).\n nonce\n A value used in security protocols that is never repeated with the same key. For example, nonces used as challenges in challenge-response authentication protocols must not be repeated until authentication keys are changed. Otherwise, there is a possibility of a replay attack. 
Using a nonce as a challenge is a different requirement than a random challenge, because a nonce is not necessarily unpredictable.\n non-repudiation\n The capability to protect against an individual falsely denying having performed a particular transaction.\n offline attack\n An attack in which the attacker obtains some data (typically by eavesdropping on an authentication transaction or by penetrating a system and stealing security files) that the attacker is able to analyze in a system of their own choosing.\n online attack\n An attack against an authentication protocol in which the attacker either assumes the role of a claimant with a genuine verifier or actively alters the authentication channel.\n online guessing attack\n An attack in which an attacker performs repeated logon trials by guessing possible values of the authenticator output.\n passphrase\n A password that consists of a sequence of words or other text that a claimant uses to authenticate their identity. A passphrase is similar to a password in usage but is generally longer for added security.\n password\n A type of authenticator consisting of a character string that is intended to be memorized or memorable by the subscriber to permit the claimant to demonstrate something they know as part of an authentication process. Passwords are referred to as memorized secrets in the initial release of SP 800-63B.\n personal identification number (PIN)\n A password that typically consists of only decimal digits.\n personal information\n See personally identifiable information.\n personally identifiable information (PII)\n Information that can be used to distinguish or trace an individual’s identity, either alone or when combined with other information that is linked or linkable to a specific individual. 
[A-130]\n pharming\n An attack in which an attacker corrupts an infrastructure service, such as the Domain Name System (DNS), and causes the subscriber to be misdirected to a forged verifier/RP, which could cause the subscriber to reveal sensitive information, download harmful software, or contribute to a fraudulent act.\n phishing\n An attack in which the subscriber is lured (usually through an email) to interact with a counterfeit verifier/RP and tricked into revealing information that can be used to masquerade as that subscriber to the real verifier/RP.\n phishing resistance\n The ability of the authentication protocol to prevent the disclosure of authentication secrets and valid authenticator outputs to an impostor verifier without reliance on the vigilance of the claimant.\n physical authenticator\n An authenticator that the claimant proves possession of as part of an authentication process.\n possession and control of an authenticator\n The ability to activate and use the authenticator in an authentication protocol.\n predictability\n Enabling reliable assumptions by individuals, owners, and operators about PII and its processing by an information system. [NISTIR8062]\n private key\n In asymmetric key cryptography, the private key (i.e., a secret key) is a mathematical key used to create digital signatures and, depending on the algorithm, decrypt messages or files that are encrypted with the corresponding public key. In symmetric key cryptography, the same private key is used for both encryption and decryption.\n presentation attack\n Presentation to the biometric data capture subsystem with the goal of interfering with the operation of the biometric system.\n presentation attack detection (PAD)\n Automated determination of a presentation attack. 
A subset of presentation attack detection methods, referred to as liveness detection, involves the measurement and analysis of anatomical characteristics or voluntary or involuntary reactions, to determine if a biometric sample is being captured from a living subject that is present at the point of capture.\n Privacy Impact Assessment (PIA)\n A method of analyzing how personally identifiable information (PII) is collected, used, shared, and maintained. PIAs are used to identify and mitigate privacy risks throughout the development lifecycle of a program or system. They also help ensure that handling information conforms to legal, regulatory, and policy requirements regarding privacy.\n protected session\n A session in which messages between two participants are encrypted and integrity is protected using a set of shared secrets called “session keys.”\n\n A protected session is said to be authenticated if — during the session — one participant proves possession of one or more authenticators in addition to the session keys, and if the other party can verify the identity associated with the authenticators. If both participants are authenticated, the protected session is said to be mutually authenticated.\n \n pseudonym\n A name other than a legal name.\n pseudonymity\n The use of a pseudonym to identify a subject.\n pseudonymous identifier\n A meaningless but unique identifier that does not allow the RP to infer anything regarding the subscriber but that does permit the RP to associate multiple interactions with a single subscriber.\n public key\n The public part of an asymmetric key pair that is used to verify signatures or encrypt data.\n public key certificate\n A digital document issued and digitally signed by the private key of a certificate authority that binds an identifier to a subscriber’s public key. The certificate indicates that the subscriber identified in the certificate has sole control of and access to the private key. 
See also [RFC5280].\n public key infrastructure (PKI)\n A set of policies, processes, server platforms, software, and workstations used to administer certificates and public-private key pairs, including the ability to issue, maintain, and revoke public key certificates.\n reauthentication\n The process of confirming the subscriber’s continued presence and intent to be authenticated during an extended usage session.\n relying party (RP)\n An entity that relies upon a verifier’s assertion of a subscriber’s identity, typically to process a transaction or grant access to information or a system.\n remote\n A process or transaction that is conducted through connected devices over a network, rather than in person.\n replay attack\n An attack in which the attacker is able to replay previously captured messages (between a legitimate claimant and a verifier) to masquerade as that claimant to the verifier or vice versa.\n replay resistance\n The property of an authentication process to resist replay attacks, typically by the use of an authenticator output that is valid only for a specific authentication.\n restricted\n An authenticator type, class, or instantiation that has additional risk of false acceptance associated with its use and is therefore subject to additional requirements.\n risk assessment\n The process of identifying, estimating, and prioritizing risks to organizational operations (i.e., mission, functions, image, or reputation), organizational assets, individuals, and other organizations that result from the operation of a system. A risk assessment is part of risk management, incorporates threat and vulnerability analyses, and considers mitigations provided by security controls that are planned or in-place. 
It is synonymous with “risk analysis.”\n risk management\n The program and supporting processes that manage information security risk to organizational operations (including mission, functions, image, reputation), organizational assets, individuals, and other organizations and includes (i) establishing the context for risk-related activities, (ii) assessing risk, (iii) responding to risk once determined, and (iv) monitoring risk over time.\n salt\n A non-secret value used in a cryptographic process, usually to ensure that the results of computations for one instance cannot be reused by an attacker.\n Secure Sockets Layer (SSL)\n See Transport Layer Security (TLS).\n Senior Agency Official for Privacy (SAOP)\n Person responsible for ensuring that an agency complies with privacy requirements and manages privacy risks. The SAOP is also responsible for ensuring that the agency considers the privacy impacts of all agency actions and policies that involve PII.\n session\n A persistent interaction between a subscriber and an endpoint, either an RP or a CSP. A session begins with an authentication event and ends with a session termination event. A session is bound by the use of a session secret that the subscriber’s software (e.g., a browser, application, or OS) can present to the RP to prove association of the session with the authentication event.\n session hijack attack\n An attack in which the attacker is able to insert themselves between a claimant and a verifier subsequent to a successful authentication exchange between the latter two parties. The attacker is able to pose as a subscriber to the verifier or vice versa to control session data exchange. Sessions between the claimant and the RP can be similarly compromised.\n shared secret\n A secret used in authentication that is known to the subscriber and the verifier.\n side-channel attack\n An attack enabled by the leakage of information from a physical cryptosystem. 
Characteristics that could be exploited in a side-channel attack include timing, power consumption, and electromagnetic and acoustic emissions.\n single-factor\n A characteristic of an authentication system or an authenticator that requires only one authentication factor (i.e., something you know, something you have, or something you are) for successful authentication.\n single sign-on (SSO)\n An authentication process by which one account and its authenticators are used to access multiple applications in a seamless manner, generally implemented with a federation protocol.\n social engineering\n The act of deceiving an individual into revealing sensitive information, obtaining unauthorized access, or committing fraud by associating with the individual to gain confidence and trust.\n subject\n A person, organization, device, hardware, network, software, or service. In these guidelines, a subject is a natural person.\n subscriber\n An individual enrolled in the CSP identity service.\n subscriber account\n An account established by the CSP containing information and authenticators registered for each subscriber enrolled in the CSP identity service.\n symmetric key\n A cryptographic key used to perform both the cryptographic operation and its inverse (e.g., to encrypt and decrypt or to create a message authentication code and to verify the code).\n sync fabric\n Any on-premises, cloud-based, or hybrid service used to store, transmit, or manage authentication keys generated by syncable authenticators that are not local to the user’s device.\n syncable authenticators\n Software or hardware cryptographic authenticators that allow authentication keys to be cloned and exported to other storage to sync those keys to other authenticators (i.e., devices).\n system of record (SOR)\n An SOR is a collection of records that contain information about individuals and are under the control of an agency. 
The records can be retrieved by the individual’s name or by an identifying number, symbol, or other identifier.\n System of Record Notice (SORN)\n A notice that federal agencies publish in the Federal Register to describe their systems of records.\n token\n See authenticator.\n transaction\n See digital transaction.\n Transport Layer Security (TLS)\n An authentication and security protocol widely implemented in browsers and web servers. TLS is defined by [RFC5246]. TLS is similar to the older SSL protocol, and TLS 1.0 is effectively SSL version 3.1. SP 800-52, Guidelines for the Selection, Configuration, and Use of Transport Layer Security (TLS) Implementations [SP800-52], specifies how TLS is to be used in government applications.\n usability\n The extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use. [ISO/IEC9241-11]\n verifier\n An entity that verifies the claimant’s identity by verifying the claimant’s possession and control of one or more authenticators using an authentication protocol. To do this, the verifier needs to confirm the binding of the authenticators with the subscriber account and check that the subscriber account is active.\n verifier impersonation\n See phishing.\n zeroize\n Overwrite a memory location with data that consists entirely of bits with the value zero so that the data is destroyed and unrecoverable. This is often contrasted with deletion methods that merely destroy references to data within a file system rather than the data itself.\n\n"
} ,
{
"title" : "Note to Reviewers",
"category" : "",
"tags" : "",
"url" : "/800-63-4/sp800-63b/reviewers/",
"date" : "2024-08-28 20:39:12 -0500",
"content" : "Note to Reviewers\n\nIn December 2022, NIST released the Initial Public Draft (IPD) of SP 800-63, Revision 4. Over the course of a 119-day public comment period, the authors received exceptional feedback from a broad community of interested entities and individuals. The input from nearly 4,000 specific comments has helped advance the improvement of these Digital Identity Guidelines in a manner that supports NIST’s critical goals of providing foundational risk management processes and requirements that enable the implementation of secure, private, equitable, and accessible identity systems. Based on this initial wave of feedback, several substantive changes have been made across all of the volumes. These changes include but are not limited to the following:\n\n\n Updated text and context setting for risk management. Specifically, the authors have modified the process defined in the IPD to include a context-setting step of defining and understanding the online service that the organization is offering and intending to potentially protect with identity systems.\n Added recommended continuous evaluation metrics. The continuous improvement section introduced by the IPD has been expanded to include a set of recommended metrics for holistically evaluating identity solution performance. These are recommended due to the complexities of data streams and variances in solution deployments.\n Expanded fraud requirements and recommendations. Programmatic fraud management requirements for credential service providers and relying parties now address issues and challenges that may result from the implementation of fraud checks.\n Restructured the identity proofing controls. There is a new taxonomy and structure for the requirements at each assurance level based on the means of providing the proofing: Remote Unattended, Remote Attended (e.g., video session), Onsite Unattended (e.g., kiosk), and Onsite Attended (e.g., in-person).\n Integrated syncable authenticators. 
In April 2024, NIST published interim guidance for syncable authenticators. This guidance has been integrated into SP 800-63B as normative text and is provided for public feedback as part of the Revision 4 volume set.\n Added user-controlled wallets to the federation model. Digital wallets and credentials (called “attribute bundles” in SP 800-63C) are seeing increased attention and adoption. At their core, they function like a federated IdP, generating signed assertions about a subject. Specific requirements for this presentation and the emerging context are presented in SP 800-63C-4.\n\n\nThe rapid proliferation of online services over the past few years has heightened the need for reliable, equitable, secure, and privacy-protective digital identity solutions.\nRevision 4 of NIST Special Publication SP 800-63, Digital Identity Guidelines, intends to respond to the changing digital landscape that has emerged since the last major revision of this suite was published in 2017, including the real-world implications of online risks. 
The guidelines present the process and technical requirements for meeting digital identity management assurance levels for identity proofing, authentication, and federation, including requirements for security and privacy as well as considerations for fostering equity and the usability of digital identity solutions and technology.\n\nBased on the feedback provided in response to the June 2020 Pre-Draft Call for Comments, research into real-world implementations of the guidelines, market innovation, and the current threat environment, this draft seeks to:\n\n\n Address comments received in response to the IPD of Revision 4 of SP 800-63\n Clarify the text to address the questions and issues raised in the public comments\n Update all four volumes of SP 800-63 based on current technology and market developments, the changing digital identity threat landscape, and organizational needs for digital identity solutions to address online security, privacy, usability, and equity\n\n\nNIST is specifically interested in comments and recommendations on the following topics:\n\n\n \n Authentication and Authenticator Management\n\n \n Are the syncable authenticator requirements sufficiently defined to allow for reasonable risk-based acceptance of syncable authenticators for public and enterprise-facing uses?\n Are there additional recommended controls that should be applied? Are there specific implementation recommendations or considerations that should be captured?\n Are wallet-based authentication mechanisms and “attribute bundles” sufficiently described as authenticators? 
Are there additional requirements that need to be added or clarified?\n \n \n \n General\n\n \n What specific implementation guidance, reference architectures, metrics, or other supporting resources could enable more rapid adoption and implementation of this and future iterations of the Digital Identity Guidelines?\n What applied research and measurement efforts would provide the greatest impacts on the identity market and advancement of these guidelines?\n \n \n\n\nReviewers are encouraged to comment and suggest changes to the text of all four draft volumes of the SP 800-63-4 suite. NIST requests that all comments be submitted by 11:59pm Eastern Time on October 7th, 2024. Please submit your comments to [email protected]. NIST will review all comments and make them available on the NIST Identity and Access Management website. Commenters are encouraged to use the comment template provided on the NIST Computer Security Resource Center website for responses to these notes to reviewers and for specific comments on the text of the four-volume suite.\n"
} ,
{
"title" : "Preface",
"category" : "",
"tags" : "",
"url" : "/800-63-4/sp800-63b/preface/",
"date" : "2024-08-28 20:39:12 -0500",
"content" : "Preface\n\nThis publication and its companion volumes — [SP800-63], [SP800-63A], and [SP800-63C] — provide technical guidelines for organizations to implement digital identity services.\n\nThis document, SP 800-63B, provides requirements to credential service providers (CSPs) for remote user authentication at each of three Authenticator Assurance Levels (AALs).\n"
} ,
{
"title" : "References",
"category" : "",
"tags" : "",
"url" : "/800-63-4/sp800-63b/references/",
"date" : "2024-08-28 20:39:12 -0500",
"content" : "References\n\nThis section is informative.\n\n[A-130] Office of Management and Budget (2016) Managing Information as a Strategic Resource. (The White House, Washington, DC), OMB Circular A-130, July 28, 2016. Available at https://obamawhitehouse.archives.gov/sites/default/files/omb/assets/OMB/circulars/a130/a130revised.pdf\n\n[Blocklists] Habib H, Colnago J, Melicher W, Ur B, Segreti S, Bauer L, Christin N, Cranor L (2017) Password Creation in the Presence of Blacklists. Proceedings 2017 Workshop on Usable Security (Internet Society, San Diego, CA). https://doi.org/10.14722/usec.2017.23043\n\n[Composition] Komanduri S, Shay R, Kelley PG, Mazurek ML, Bauer L, Christin N, Cranor LF, Egelman S (2011) Of Passwords and People: Measuring the Effect of Password-Composition Policies. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (ACM, New York, NY), pp 2595–2604. Available at https://www.ece.cmu.edu/~lbauer/papers/2011/chi2011-passwords.pdf\n\n[CTAP2.2] Bradley J, Hodges J, Jones MB, Kumar A, Lindemann R, Verrept J (2023) Client to Authenticator Protocol (CTAP), version 2.2. (FIDO Alliance, Beaverton, OR) Available at https://fidoalliance.org/specs/fido-v2.2-rd-20230321/fido-client-to-authenticator-protocol-v2.2-rd-20230321.html\n\n[E-Gov] E-Government Act of 2002, P.L. 107-347, 116 Stat. 2899, 44 U.S.C. § 101 (2002). Available at https://www.gpo.gov/fdsys/pkg/PLAW-107publ347/pdf/PLAW-107publ347.pdf\n\n[EO13681] Obama B (2014) Improving the Security of Consumer Financial Transactions. (The White House, Washington, DC), Executive Order 13681, October 17, 2014. Available at https://www.federalregister.gov/d/2014-25439\n\n[EO13985] Biden J (2021) Advancing Racial Equity and Support for Underserved Communities Through the Federal Government. (The White House, Washington, DC), Executive Order 13985, January 25, 2021. 
Available at https://www.federalregister.gov/documents/2021/01/25/2021-01753/advancing-racial-equity-and-support-for-underserved-communities-through-the-federal-government\n\n[FEDRAMP] General Services Administration (2022), How to Become FedRAMP Authorized. Available at https://www.fedramp.gov/\n\n[FIDO2] Bradley J, Hodges J, Jones MB, Kumar A, Lindemann R, Verrept J (2022) Client to Authenticator Protocol (CTAP). (FIDO Alliance, Beaverton, OR). Available at https://fidoalliance.org/specs/fido-v2.1-ps-20210615/fido-client-to-authenticator-protocol-v2.1-ps-errata-20220621.html\n\n\\clearpage\n\n\n[FIPS140] National Institute of Standards and Technology (2019) Security Requirements for Cryptographic Modules. (U.S. Department of Commerce, Washington, DC), Federal Information Processing Standards Publication (FIPS) 140-3. https://doi.org/10.6028/NIST.FIPS.140-3\n\n[FIPS201] National Institute of Standards and Technology (2022) Personal Identity Verification (PIV) of Federal Employees and Contractors. (U.S. Department of Commerce, Washington, DC), Federal Information Processing Standards Publication (FIPS) 201-3. https://doi.org/10.6028/NIST.FIPS.201-3\n\n[ISO/IEC9241-11] International Standards Organization (2018) ISO/IEC 9241-11 Ergonomics of human-system interaction – Part 11: Usability: Definitions and concepts (ISO, Geneva, Switzerland). Available at https://www.iso.org/standard/63500.html\n\n[ISO/IEC2382-37] International Standards Organization (2022) Information technology — Vocabulary — Part 37: Biometrics (ISO, Geneva, Switzerland). Available at https://www.iso.org/standard/73514.html\n\n[ISO/IEC10646] International Standards Organization (2020) Information technology — Universal coded character set (UCS) (ISO, Geneva, Switzerland). 
Available at https://www.iso.org/standard/76835.html\n\n[ISO/IEC19795-1] International Standards Organization (2021) Information technology - Biometric performance testing and reporting Part 1: Principles and framework (ISO, Geneva, Switzerland). Available at https://www.iso.org/standard/73515.html\n\n[ISO/IEC30107-1] International Standards Organization (2023) Information technology — Biometric presentation attack detection — Part 1: Framework (ISO, Geneva, Switzerland). Available at https://www.iso.org/standard/83828.html\n\n[ISO/IEC30107-3] International Standards Organization (2023) Information technology — Biometric presentation attack detection — Part 3: Testing and reporting (ISO, Geneva, Switzerland). Available at https://www.iso.org/standard/79520.html\n\n[Managers] Lyastani SG, Schilling M, Fahl S, Backes M, Bugiel S (2018) Better managed than memorized? Studying the Impact of Managers on Password Strength and Reuse. 27th USENIX Security Symposium (USENIX Security 18) (USENIX Association, Baltimore, MD), pp 203–220. Available at https://www.usenix.org/conference/usenixsecurity18/presentation/lyastani\n\n[NISTIR8062] Brooks S, Garcia M, Lefkovitz N, Lightman S, Nadeau E (2017) An Introduction to Privacy Engineering and Risk Management in Federal Systems. (National Institute of Standards and Technology, Gaithersburg, MD), NIST Interagency or Internal Report (IR) 8062, January 2017. https://doi.org/10.6028/NIST.IR.8062\n\n[OWASP-session] Open Web Application Security Project (2021) Session Management Cheat Sheet. Available at https://cheatsheetseries.owasp.org/cheatsheets/Session_Management_Cheat_Sheet.html\n\n[OWASP-XSS-prevention] Open Web Application Security Project (2021) XSS (Cross Site Scripting) Prevention Cheat Sheet. Available at https://cheatsheetseries.owasp.org/cheatsheets/Cross_Site_Scripting_Prevention_Cheat_Sheet.html\n\n[Persistence] Herley C, Van Oorschot P (2012) A Research Agenda Acknowledging the Persistence of Passwords. 
IEEE Security & Privacy Magazine, (IEEE, Garden Grove, CA) 10(1):28–36. https://doi.org/10.1109/MSP.2011.150\n\n[Policies] Weir M, Aggarwal S, Collins M, Stern H (2010) Testing Metrics for Password Creation Policies by Attacking Large Sets of Revealed Passwords. Proceedings of the 17th ACM Conference on Computer and Communications Security, CCS ‘10, (ACM, New York, NY, USA), pp 162–175. https://doi.org/10.1145/1866307.1866327\n\n[PrivacyAct] Privacy Act of 1974, Pub. L. 93-579, 5 U.S.C. § 552a, 88 Stat. 1896 (1974). Available at https://www.govinfo.gov/content/pkg/USCODE-2020-title5/pdf/USCODE-2020-title5-partI-chap5-subchapII-sec552a.pdf\n\n[PSL] Mozilla Foundation (2022) Public Suffix List. Available at https://publicsuffix.org/list/\n\n[RBG] National Institute of Standards and Technology (2023) Random Bit Generation. Available at https://csrc.nist.gov/projects/random-bit-generation\n\n[RFC20] Cerf V (1969) ASCII format for network interchange. (Internet Engineering Task Force (IETF)), IETF Request for Comments (RFC) 20. https://doi.org/10.17487/RFC0020\n\n[RFC5246] Rescorla E, Dierks T (2008) The Transport Layer Security (TLS) Protocol Version 1.2. (Internet Engineering Task Force (IETF)), IETF Request for Comments (RFC) 5246. https://doi.org/10.17487/RFC5246\n\n[RFC5280] Cooper D, Santesson S, Farrell S, Boeyen S, Housley R, Polk W (2008) Internet X.509 Public Key Infrastructure Certification and Certificate Revocation List (CRL) Profile. (Internet Engineering Task Force (IETF)), IETF Request for Comments (RFC) 5280. https://doi.org/10.17487/RFC5280\n\n[RFC6749] Hardt D (2012) The OAuth 2.0 Authorization Framework. (Internet Engineering Task Force (IETF)), IETF Request for Comments (RFC) 6749. https://doi.org/10.17487/RFC6749\n\n[RFC9325] Sheffer Y, Saint-Andre P, Fossati T (2022) Recommendations for Secure Use of Transport Layer Security (TLS) and Datagram Transport Layer Security (DTLS). 
(Internet Engineering Task Force (IETF)), IETF Request for Comments (RFC) 9325. https://doi.org/10.17487/RFC9325\n\n[Section508] General Services Administration (2022) IT Accessibility Laws and Policies. Available at https://www.section508.gov/manage/laws-and-policies/\n\n[Shannon] Shannon CE (1948) A Mathematical Theory of Communication. Bell System Technical Journal 27(3):379–423. https://doi.org/10.1002/j.1538-7305.1948.tb01338.x\n\n[SP800-39] Joint Task Force (2011) Managing Information Security Risk. (National Institute of Standards and Technology, Gaithersburg, MD), NIST Special Publication (SP) 800-39. https://doi.org/10.6028/NIST.SP.800-39\n\n[SP800-52] McKay K, Cooper D (2019) Guidelines for the Selection, Configuration, and Use of Transport Layer Security (TLS) Implementations. (National Institute of Standards and Technology), NIST Special Publication (SP) 800-52 Rev. 2. https://doi.org/10.6028/NIST.SP.800-52r2\n\n[SP800-53] Joint Task Force (2020) Security and Privacy Controls for Information Systems and Organizations. (National Institute of Standards and Technology, Gaithersburg, MD), NIST Special Publication (SP) 800-53 Rev. 5, Includes updates as of December 10, 2020. https://doi.org/10.6028/NIST.SP.800-53r5\n\n[SP800-57Part1] Barker EB (2020) Recommendation for Key Management: Part 1 – General. (National Institute of Standards and Technology, Gaithersburg, MD), NIST Special Publication (SP) 800-57 Part 1, Rev. 5. https://doi.org/10.6028/NIST.SP.800-57pt1r5\n\n[SP800-63] Temoshok D, Proud-Madruga D, Choong YY, Galluzzo R, Gupta S, LaSalle C, Lefkovitz N, Regenscheid A (2024) Digital Identity Guidelines. (National Institute of Standards and Technology, Gaithersburg, MD), NIST Special Publication (SP) 800-63-4 2pd. https://doi.org/10.6028/NIST.SP.800-63-4.2pd\n\n[SP800-63A] Temoshok D, Abruzzi C, Choong YY, Fenton JL, Galluzzo R, LaSalle C, Lefkovitz N, Regenscheid A (2024) Digital Identity Guidelines: Identity Proofing and Enrollment. 
(National Institute of Standards and Technology, Gaithersburg, MD), NIST Special Publication (SP) 800-63A-4 2pd. https://doi.org/10.6028/NIST.SP.800-63a-4.2pd\n\n[SP800-63C] Temoshok D, Richer JP, Choong YY, Fenton JL, Lefkovitz N, Regenscheid A, Galluzzo R (2024) Digital Identity Guidelines: Federation and Assertions. (National Institute of Standards and Technology, Gaithersburg, MD), NIST Special Publication (SP) 800-63C-4 2pd. https://doi.org/10.6028/NIST.SP.800-63c-4.2pd\n\n[SP800-73] Cooper C, Ferraiolo H, Mehta K, Francomacaro S, Chandramouli R, Mohler J (2015) Interfaces for Personal Identity Verification. (National Institute of Standards and Technology, Gaithersburg, MD), NIST Special Publication (SP) 800-73-4, Includes updates as of February 8, 2016. https://doi.org/10.6028/NIST.SP.800-73-4\n\n[SP800-90A] Barker E, Kelsey J (2015) Recommendation for Random Number Generation Using Deterministic Random Bit Generators. (National Institute of Standards and Technology, Gaithersburg, MD), NIST Special Publication (SP) 800-90A, Rev. 1. https://doi.org/10.6028/NIST.SP.800-90Ar1\n\n[SP800-90B] Turan MS, Barker E, Kelsey J, McKay K, Baish M, Boyle M (2018) Recommendation for the Entropy Sources Used for Random Bit Generation. (National Institute of Standards and Technology, Gaithersburg, MD), NIST Special Publication (SP) 800-90B. https://doi.org/10.6028/NIST.SP.800-90B\n\n[SP800-90C] Barker E, Kelsey J, McKay K, Roginsky A, Turan MS (2022) Recommendation for Random Bit Generator (RBG) Constructions. (National Institute of Standards and Technology, Gaithersburg, MD), Draft NIST Special Publication (SP) 800-90C. https://doi.org/10.6028/NIST.SP.800-90C.3pd\n\n[SP800-116] Ferraiolo H, Mehta KL, Ghadiali N, Mohler J, Johnson V, Brady S (2018) A Recommendation for the Use of PIV Credentials in Physical Access Control Systems (PACS). (National Institute of Standards and Technology, Gaithersburg, MD), NIST Special Publication (SP) 800-116, Rev. 1 [or as amended]. 
https://doi.org/10.6028/NIST.SP.800-116r1\n\n[SP800-131A] Barker E, Roginsky A (2019) Transitioning the Use of Cryptographic Algorithms and Key Lengths. (National Institute of Standards and Technology, Gaithersburg, MD), NIST Special Publication (SP) 800-131A Rev. 2. https://doi.org/10.6028/NIST.SP.800-131Ar2\n\n[SP800-132] Turan M, Barker E, Burr W, Chen L (2010) Recommendation for Password-Based Key Derivation. (National Institute of Standards and Technology, Gaithersburg, MD), NIST Special Publication (SP) 800-132. https://doi.org/10.6028/NIST.SP.800-132\n\n[SP800-157] Ferraiolo H, Regenscheid AR, Fenton J (2023) Guidelines for Derived Personal Identity Verification (PIV) Credentials. (National Institute of Standards and Technology, Gaithersburg, MD), NIST Special Publication (SP) 800-157r1 ipd (initial public draft). https://doi.org/10.6028/NIST.SP.800-157r1.ipd\n\n[Strength] Kelley PG, Komanduri S, Mazurek ML, Shay R, Vidas T, Bauer L, Christin N, Cranor LF, Lopez J (2012) Guess again (and again and again): Measuring password strength by simulating password-cracking algorithms. 2012 IEEE Symposium On Security and Privacy (SP), pp 523–537. Available at http://ieeexplore.ieee.org/iel5/6233637/6234400/06234434.pdf\n\n[TLS] Rescorla E (2018) The Transport Layer Security (TLS) Protocol Version 1.3. (Internet Engineering Task Force, Reston, VA), RFC 8446. https://doi.org/10.17487/RFC8446\n\n[TOTP] M’Raihi D, Machani S, Pei M, Rydell J (2011) TOTP: Time-Based One-Time Password Algorithm. (Internet Engineering Task Force, Reston, VA), RFC 6238. https://doi.org/10.17487/RFC6238\n\n[UsabilityBiometrics] National Institute of Standards and Technology (2008) Usability & Biometrics: Ensuring Successful Biometric Systems. (National Institute of Standards and Technology, Gaithersburg, MD). Available at https://www.nist.gov/system/files/usability_and_biometrics_final2.pdf\n\n[UAX15] Whistler K (2022) Unicode Normalization Forms. 
(The Unicode Consortium, South San Francisco, CA), Unicode Standard Annex 15, Version 15.0.0, Rev. 53. Available at https://www.unicode.org/reports/tr15/\n\n[WebAuthn] Hodges J, Jones JC, Jones MB, Kumar A, Lundberg E (2021) Web Authentication: An API for accessing Public Key Credentials - Level 2. (World Wide Web Consortium, Cambridge, MA). Available at https://www.w3.org/TR/2021/REC-webauthn-2-20210408/\n"
} ,
{
"title" : "Examples",
"category" : "",
"tags" : "",
"url" : "/800-63-4/sp800-63c/examples/",
"date" : "2024-08-28 20:39:12 -0500",
"content" : "Examples\n\nThis section is informative.\n\nThis appendix contains several example scenarios of federation used in conjunction with the requirements in these guidelines.\n\nThe scenarios in this section are for illustrative purposes and do not convey additional requirements beyond those imposed by these guidelines.\n\nMapping FALs to Common Federation Protocols\n\nOf protocols commonly in use today, OpenID Connect [OIDC] and SAML [SAML] both provide a variety of capabilities that can be leveraged to reach the requirements at different FALs. Table 5 provides examples of specific options in these protocols that could be deployed to reach a given FAL. It’s important to note that these guidelines do not represent a normative mapping to the given FALs and the entirety of the federation process has to be considered when establishing an FAL. Additionally, each FAL could be reached by processes, deployments, and procedures that are not listed in this table.\n\nTable 5. FAL Protocol Examples\n\n\n \n \n \n OIDC\n SAML\n \n \n \n \n FAL1\n All core flows in [OIDC] (Authorization Code, Implicit, and Hybrid) can be configured to require signing of the assertion (the ID Token) using JSON Web Signatures. Assertions are presented in a variety of front and back channel methods. Each of these flows can be built using both static and dynamic client registration. Profiles such as [OIDC-Basic] and [OIDC-Implicit] can provide additional guidance for interoperable deployments.\n The [SAML-WebSSO] profile allows for the signing of assertions using XML D-Sig and presentation of the assertion using the front channel. 
SAML deployments are generally set up with a static registration, sometimes managed through a federation authority, which can meet the requirements at this FAL and above.\n \n \n FAL2\n Flows that present the ID Token in the back channel (such as Authorization Code and Hybrid) can provide a level of injection protection.\n The Artifact Binding of SAML defined in [SAML-Bindings] allows for a back-channel presentation of SAML assertions that can provide a level of injection protection.\n \n \n FAL3\n The ID Token can include the claims necessary for Holder-of-Key and Bound Authenticator assertion presentations, though to date there are not industry standard profiles for doing so.\n The SAML Holder-of-Key profile can fit the assertion requirements at this level, if combined with other deployment choices.\n \n \n\n\nFor OpenID Connect in particular, it is common practice to give access to both an identity API (the UserInfo Endpoint) as well as additional APIs. While the security of API access is outside the scope of these guidelines (which are concerned with the identity assertion primarily), it is sensible for an OpenID Connect implementation to want to increase the security of all API calls in tandem with the FAL. For example, in addition to requiring a Holder-of-Key assertion at FAL3, which requires verification of a subscriber-held key, an OpenID Connect system might also require sender-constrained access tokens for API access, which require the verification of a key held by the RP for each API call.\n\nDirect Connection to an Agency’s IdP\n\nAgency A, which issues and manages subscriber accounts, sets up and operates an OpenID Connect IdP in order to make these subscriber accounts available online through a federation process.\n\nThe RP enters into a pairwise trust agreement with the IdP to accept assertions for subscribers from Agency A. The RP declares the set of attributes that it needs from the IdP as part of this agreement. 
The trust agreement stipulates that the subscriber is the authorized party for determining the release of attributes in the federation transaction.\n\nThe IdP generates a federated identifier for the subscriber account by taking the unique internal identifier for the subscriber account (such as an employee record number) and passing it through a one-way cryptographic function to create a unique identifier for the subscriber account. Such an identifier does not allow an RP to calculate the internal identifier but will be stable across attribute changes.\n\nPer the terms of the trust agreement, the subscriber is prompted by the IdP the first time they log on to the RP. The IdP asks for the subscriber’s consent at runtime to share their attributes with the RP, displaying to the subscriber the RP’s requested uses for these attributes on the consent screen. The IdP also prompts the subscriber to allow the IdP to remember this consent decision. This stored decision causes the IdP to act on the stored consent in a future request and not prompt the subscriber if the same RP requests the same attributes.\n\nThe assertion, formatted as an OpenID Connect ID Token, contains the minimum set of attributes to facilitate the federated log in. Apart from the federated identifier, the assertion contains no identifying information about the subscriber. In addition to the assertion, the RP is given an OAuth 2.0 access token that allows the RP to access the identity API hosted by the IdP, the OpenID Connect UserInfo Endpoint. The RP can choose to call this API to get additional attributes as needed, such as the first time the subscriber uses the RP. Since this RP follows a just-in-time provisioning model, when the RP sees the subscriber’s federated identifier for the first time, the RP creates an RP subscriber account for that federated identifier and calls the identity API to populate the RP subscriber account with the subscriber’s attributes. 
For future authentications with this subscriber, the RP can decide if its cache of attributes is reasonably recent enough or if it should be refreshed by calling the identity API.\n\nMultilateral Federation Network\n\nAgencies A, B, and C each have an IdP running OpenID Connect for their subscriber accounts. All three agencies join a multilateral federation run by an independent agency set up to provide inter-agency connections. The federation authority independently verifies that each IdP represents the agency in question. The federation authority publishes the discovery records of the IdPs for all agencies that are part of the multilateral federation. This publication allows RPs within the federation to discover which IdP is to be used to access accounts for a given agency under the rules of the federation agreement.\n\nRPs X and Y wish to allow logins from agencies A, B, and C, and the RPs declare their intent and a list of required attributes to the federation authority. The federation authority assesses both RP requests and adds them to the multilateral federation’s trust agreement. This allows both RPs to register at each of the three separate IdPs as needed for each agency.\n\nBoth RPs interface directly with each of the three IdPs and not through a federation proxy. When a new IdP or RP is added to the multilateral federation agreement, the existing IdPs and RPs are notified of the new component and its parameters.\n\nThe IdPs and RPs establish a shared signaling channel under the auspices of the federation authority. This allows any IdP and any RP to report suspicious or malicious behavior that involves a specific account to the rest of the members under the federation authority.\n\nIssuance of a Credential to a Digital Wallet\n\nAgency B makes its subscriber accounts available for federation through the use of digital wallet technology. 
The agency’s agreement for issuing credentials into wallets is facilitated by a federation authority that is set up to manage digital wallets across the federal government. The federation authority establishes the identity of the CSP for each agency under the multilateral agreement, and it ensures that only the CSP for Agency B can onboard subscriber-controlled wallets for Agency B within the multilateral trust agreement.\n\nA subscriber has a digital wallet running on their device that they want to use with their subscriber account from Agency B. Within these guidelines, the digital subscriber-controlled wallet needs to be onboarded by the CSP before it can act as an IdP. To begin this process, the subscriber directs their digital wallet software to Agency B’s CSP. The subscriber uses a biometric factor to activate their digital wallet, and the digital wallet makes an onboarding request to the CSP for the subscriber account. This onboarding request includes proof of a key held by the digital wallet. The CSP verifies the wallet’s proof and processes any additional attestations from the wallet device.\n\nThe subscriber authenticates to the CSP during the onboarding process. The CSP prompts the subscriber with the terms of the trust agreement from the federation authority, and asks the subscriber to confirm that they wish to issue an identity to the digital wallet in question. The subscriber is informed of the sets of attributes that are made available to the wallet.\n\nThe CSP creates an attribute bundle that includes the subscriber’s attributes as well as a reference to the digital wallet’s key. The CSP signs this attribute bundle with its own key and returns the bundle to the digital wallet.\n\nWhen the subscriber needs to authenticate to an RP, the RP sends a query to the subscriber’s wallet for a credential that fits the RP’s needs. 
The RP has a trust agreement with the same federation authority, agreeing to trust identities issued under the multilateral trust agreement’s rules. The digital wallet, acting as an IdP, identifies that the RP’s request can be fulfilled by the attribute bundle issued from Agency B’s CSP. The digital wallet prompts the subscriber to activate the IdP function of the digital wallet software using a local biometric factor. The digital wallet prompts the subscriber to confirm that they want to present the requested attributes to the RP in question. When the subscriber accepts, the IdP function of the digital wallet creates an assertion for the RP that is signed with the digital wallet’s keys. The assertion includes the attribute bundle from the CSP, which itself is covered by the signature from the IdP function. The IdP delivers the assertion to the RP.\n\nThe RP receives the signed assertion and validates the signature of the attribute bundle from the CSP, using the CSP’s keys identified by the federation authority. The RP then validates the signature of the assertion using the key identified in the assertion. When these checks pass successfully, the RP creates an RP subscriber account to represent the subscriber at the RP, based on the information in the assertion.\n\nEnterprise Application Single-Sign-On\n\nFor enterprise applications, it is a common pattern for the organization to make the application available to all potential subscribers within the agency, through the use of an allowlist and pre-provisioned accounts.\n\nIn this scenario, Agency E establishes a pairwise agreement with an RP to provide an enterprise-class service to all employees of Agency E through the agency’s OpenID Connect IdP. As part of this trust agreement, the IdP allows access to a SCIM-based provisioning API for the RP. 
The IdP creates a federated identifier for each subscriber account and uses the provisioning API to push the federated identifiers and their associated attributes to the RP. In this way, the RP can pre-provision an RP subscriber account for every subscriber in the IdP’s system, allowing the RP to offer functions like access rights, data sharing, and messaging to all accounts on the system, whether or not a specific account has logged in to the RP yet.\n\nUnder the terms of the trust agreement, the RP is placed on an allowlist with the IdP. The allowlist entry states that:\n\n\n The subscriber has an active subscriber account at Agency E\n The subscriber has authenticated with the IdP at AAL2 or greater\n The RP is allowed to request only the federated identifier and basic authentication event information, since all other necessary attributes will be available through the provisioning API\n The federation transaction is at FAL2\n\n\nConsequently, subscribers are not prompted for consent at runtime because the agency consented to use the service on behalf of all accounts at the time the RP was onboarded. This gives subscribers a seamless single sign-on experience, even though a federation protocol is being used across security domain boundaries. Since the IdP does not use any runtime decisions, any deviation from the allowlist parameters causes the federation transaction to fail.\n\nThe RP subscriber accounts are synchronized using the provisioning API. When a new subscriber account is created, modified, or deleted at the IdP, the IdP updates the status of the RP subscriber account using the provisioning API. This allows the RP to always have an up-to-date status for each subscriber account. For example, when the subscriber account is terminated at the IdP, the provisioning API signals to the RP that the corresponding RP subscriber account is to be terminated immediately. 
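As a non-normative sketch of the account synchronization described above (the function names and data structures here are illustrative, not part of the SCIM specification), the RP-side handling of provisioning and termination signals might look like:

```python
# RP-side account store: federated identifier -> locally cached attributes.
rp_accounts = {}
# Audit records reference only identifiers, never cached attributes.
audit_log = []

def provision(federated_id, attributes):
    # Called when the IdP pushes a new or updated subscriber account
    # through the provisioning API.
    rp_accounts[federated_id] = dict(attributes)

def deprovision(federated_id):
    # Called when the IdP signals that the subscriber account was
    # terminated: drop all cached attributes immediately, but retain
    # the identifier for the audit trail.
    rp_accounts.pop(federated_id, None)
    audit_log.append({"event": "account_terminated", "subject": federated_id})
```

Keeping the audit trail keyed only by identifier lets the RP satisfy its record-keeping obligations without retaining subscriber attributes past account termination.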
The RP removes all locally cached attributes for the account in question, except for the identifiers and references in audit and access logs.\n\nFAL3 With a Smart Card\n\nA subscriber has a cryptographic authenticator on a smart card. The certificate on this smart card can be verified independently by both the IdP and RP thanks to the use of a shared PKI system stipulated by the trust agreement. This type of authenticator can be used in a holder-of-key assertion at FAL3.\n\nThe subscriber starts the federation process and authenticates to the IdP using their authenticator. The IdP creates an assertion that includes a flag indicating that the assertion is intended for use at FAL3. The assertion also contains the certificate common name (CN) and thumbprint of the certificate to be used as a bound authenticator.\n\nWhen the RP receives the assertion, the RP processes the assertion as usual and sees the FAL3 flag and the certificate attributes. The subscriber authenticates to the RP using their authenticator, and the RP verifies that the certificate presented by the subscriber matches the certificate in the assertion from the IdP. When these match, the RP creates a secure session with the subscriber at FAL3.\n\nFAL3 With a non-PKI Authenticator\n\nA subscriber has a hardware cryptographic authenticator that speaks the WebAuthn protocol. This authenticator is not tied to any PKI system, and in fact the authenticator device presents completely different and unlinked keys to both the IdP and RP during its normal authentication process. This kind of authenticator can still be used at FAL3 if the RP manages the bound authenticator.\n\nIn this example, when the subscriber uses this authentication device at the IdP, it presents proof of Key1. When the subscriber uses the same device at the RP, it presents proof of Key2. 
These are logically two separate authenticators, but from the perspective of the subscriber, they are using the same device in multiple places.\n\nTo start a federation transaction, the subscriber authenticates to the IdP using Key1. The IdP then creates an assertion that is flagged as FAL3. Since the IdP has no visibility into the existence and use of Key2, the assertion says that the subscriber is using a bound authenticator to reach FAL3. When the RP processes this assertion, the RP checks the RP subscriber account associated with the federated identifier in the assertion to find an RP bound authenticator for that account using Key2. The RP prompts the subscriber to authenticate using Key2. When that key is verified, the RP creates a secure session with the subscriber at FAL3.\n\nFAL3 With Referred Token Binding\n\nA subscriber authenticates to their IdP using a certificate that is trusted by the IdP but not known to the RP, since the IdP and RP are not in a shared PKI environment. However, the IdP and RP support the referred token binding extension of TLS. When the subscriber presents their certificate to the IdP, the IdP creates an assertion with the CN and thumbprint of the subscriber’s certificate. Along with the assertion or assertion reference, the IdP returns token binding headers. When these headers are presented to the RP, the RP can use them to associate the contents of the assertion with the subscriber’s bound authenticator. The RP still has to verify the certificate, but the token binding allows the RP to do so without having to separately trust the certificate chain of the authenticator’s certificate.\n\nEphemeral Federated Attribute Exchange\n\nAn RP needs to access a specific attribute for a subscriber, such as proof of age or affiliation with a known entity like a specific agency, without needing to know the identity of the subscriber. 
The RP requests only the derived attribute values that it needs in order to process its transaction, in this case a simple boolean of whether the subscriber is of age or is affiliated with the entity. The federation process creates an authenticated session between the RP and the subscriber. However, the RP uses an ephemeral provisioning mechanism, retaining only a record of the transaction and no further identifying attributes of the subscriber. The IdP provides a pairwise pseudonymous identifier to the RP. Since the IdP knows of the ephemeral nature of the RP subscriber account, the IdP can provide a distinct PPI to the RP on each request without affecting the subscriber’s usage of the RP. The IdP prompts the subscriber at runtime to release the derived attributes, preventing the RP from silently polling subscriber accounts for changes in information over time.\n\nMultiple Different Authorized Parties and Trust Agreements\n\nAs a subscriber uses services at multiple RPs, different trust agreements can come into play, and those agreements can have different requirements and experiences. In this scenario, the subscriber has an account through a single IdP which they use at three different RPs, each with a different kind of trust agreement and different requirements for consent and notification.\n\n\n Organizational Authorized Party:\n An a priori trust agreement is established for an agency connecting to an enterprise service (the RP) to be made available to all subscribers at the agency. The authorized party for this trust agreement is the agency, and the IdP is configured with an allowlist entry for the RP with the set of common attributes requested by the RP for its use. When a subscriber logs in to the enterprise service, they are not prompted with any runtime decisions regarding the service, since the trust agreement establishes this connection as trusted. 
The details of this trust agreement are available to the subscriber from the IdP, including the list of attributes that are released to the RP and for what purpose.\n Individual Authorized Party:\n A separate a priori trust agreement is established by the agency for another service (a different RP), and this service is made available to all subscribers at the same agency. This trust agreement stipulates that the subscriber is the authorized party for release of attribute information to the RP. When logging in to the service, each subscriber is prompted for their consent to release their attributes to the RP. The prompt includes the context for the subscriber to make an appropriate security decision, including a link to the details of the trust agreement and a list of attributes being released and their purpose of use. The IdP allows the subscriber to save this consent decision so that when this subscriber logs in to this same RP in the future, the subscriber is not prompted again for their consent so long as the trust agreement and the request from the RP have not changed.\n Subscriber-driven Service Access:\n A subscriber-driven trust agreement is established when the subscriber goes to access an RP that is otherwise unknown by their IdP. The RP informs the subscriber about the uses of all attributes being requested from the IdP, and the IdP prompts the subscriber for consent to release their attributes to the RP. The IdP also warns the subscriber that the RP is unknown to the agency, and provides the subscriber with information received from the RP to help the subscriber make a secure decision.\n\n\nAll of these scenarios involve the same subscriber account.\n\nShared Pairwise Pseudonymous Identifiers for Multiple RPs\n\nA group of three applications is deployed in support of a specific mission, giving collaboration, document storage, and calendar capabilities. 
Due to the nature of the separate applications, they are deployed as separate RPs, but all are bound to the same IdP using a common trust agreement. The trust agreement stipulates that the three RPs are to be issued a shared PPI, so that the applications can coordinate individual subscriber accounts with each other but not with any other applications in the deployed environment. The IdP uses an algorithm to generate a shared PPI that incorporates a randomized identifier for the set of applications as well as a unique identifier for each subscriber account. As a result, all three RPs get the same PPI for each subscriber, but no other RP is issued that same identifier.\n\nRP Authentication to an IdP\n\nA federation transaction typically takes place over multiple network calls. Throughout this process, it is important for the IdP and RP to know that they are talking to the same party that they were in a previous step, and ultimately to the party that they expect to be in the transaction with in the first place.\n\nDifferent techniques exist that provide different degrees of assurance, depending on the federation protocol in use and the needs of the system. For example, the Authorization Code Flow of [OIDC] allows the RP to register a shared secret or private key with the IdP prior to the transaction, allowing the IdP to strongly authenticate the RP’s request in the back channel to retrieve the assertion. In addition, the Proof Key for Code Exchange protocol in [RFC7636] allows the RP to dynamically create an unguessable secret that is transmitted in hashed form in the front channel and then transmitted in full in the back channel along with the assertion reference. These techniques can of course be combined for even greater assurance.\n\nFederation authorities can also facilitate the authentication process. 
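As a non-normative illustration of the [RFC7636] technique described above, the RP can derive the hashed front-channel value from its secret, and the IdP can later check the full secret presented in the back channel against it (a minimal sketch; the function names are illustrative, not part of any protocol specification):

```python
import base64
import hashlib
import secrets

def make_pkce_pair():
    # RP side: create an unguessable secret (the code verifier) ...
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode("ascii")
    # ... and its hashed form (the code challenge) for the front channel,
    # using the S256 method: BASE64URL(SHA256(ASCII(verifier))).
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return verifier, challenge

def idp_verifies(verifier, challenge):
    # IdP side: recompute the hash of the secret received in the back
    # channel and compare it to the challenge seen in the front channel.
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii") == challenge
```

Because only the hash travels in the front channel, an attacker who observes that channel cannot complete the back-channel exchange without the original secret.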
If the RP registers its public key and identifier with the federation authority, the IdP needs only to retrieve the appropriate keys from the federation authority instead of requiring the RP to register itself ahead of time.\n\nTechnical profiles of specific federation protocols are out of scope of these guidelines, but high security profiles such as [FAPI] provide extensive guidelines for implementers to deploy secure federation protocols.\n"
} ,
{
"title" : "Introduction",
"category" : "",
"tags" : "",
"url" : "/800-63-4/sp800-63c/introduction/",
"date" : "2024-08-28 20:39:12 -0500",
"content" : "Introduction\n\nThis section is informative.\n\nFederation is a process that enables the subscriber account defined in [SP800-63A] to be used with an RP that does not verify one of the authenticators bound to the subscriber account. Instead, a service known as an identity provider, or IdP, makes the subscriber account available through a federation protocol to the relying party, or RP. The IdP sends a verifiable statement, called an assertion, about the subscriber account to the RP, triggered by an authentication event of the subscriber. The RP verifies the assertion provided by the IdP and creates an authenticated session with the subscriber, granting the subscriber access to the RP’s functions.\n\nThe IdP works in one of two modes:\n\n\n As a verifier for authenticators bound to the subscriber account as described in [SP800-63B] (see details in Sec. 4), or\n As a subscriber-controlled device onboarded by the CSP, often known as a digital wallet (see details in Sec. 5).\n\n\nThe federation process allows the subscriber to obtain services from multiple RPs without the need to hold or maintain separate authenticators at each RP, a process sometimes known as single sign-on. The federation process also is generally the preferred approach to authentication when the RP and the subscriber account are not administered together under a common security domain, since the RP does not need to verify an authenticator in the subscriber account. 
Even so, federation can still be applied within a single security domain for a variety of benefits including centralized account management and technical integration.\n\nThe federation process can be facilitated by additional parties acting in other roles, such as a federation authority to facilitate the trust agreements in place and federation proxies to facilitate the protocol connections.\n\nNotations\n\nThis guideline uses the following typographical conventions in text:\n\n\n Specific terms in CAPITALS represent normative requirements. When these same terms are not in CAPITALS, the term does not represent a normative requirement.\n \n The terms “SHALL” and “SHALL NOT” indicate requirements to be followed strictly in order to conform to the publication and from which no deviation is permitted.\n The terms “SHOULD” and “SHOULD NOT” indicate that among several possibilities, one is recommended as particularly suitable without mentioning or excluding others, that a certain course of action is preferred but not necessarily required, or that (in the negative form) a certain possibility or course of action is discouraged but not prohibited.\n The terms “MAY” and “NEED NOT” indicate a course of action permissible within the limits of the publication.\n The terms “CAN” and “CANNOT” indicate a possibility and capability—whether material, physical, or causal—or, in the negative, the absence of that possibility or capability.\n \n \n\n\nDocument Structure\n\nThis document is organized as follows. Each section is labeled as either normative (i.e., mandatory for compliance) or informative (i.e., not mandatory).\n\n\n Section 1 provides an introduction to the document. This section is informative.\n Section 2 describes requirements for Federation Assurance Levels. This section is normative.\n Section 3 describes general requirements for federation systems. This section is normative.\n Section 4 describes requirements for general-purpose IdPs. 
This section is normative.\n Section 5 describes requirements for subscriber-controlled wallets. This section is normative.\n Section 6 provides security considerations. This section is informative.\n Section 7 provides privacy considerations. This section is informative.\n Section 8 provides usability considerations. This section is informative.\n Section 9 provides equity considerations. This section is informative.\n Section 10 provides additional example scenarios. This section is informative.\n References contains a list of publications referred to from this document. This section is informative.\n Appendix A contains a selected list of abbreviations used in this document. This appendix is informative.\n Appendix B contains a glossary of selected terms used in this document. This appendix is informative.\n Appendix C contains a summarized list of changes in this document’s history. This appendix is informative.\n\n"
} ,
{
"title" : "Federation Assurance Level",
"category" : "",
"tags" : "",
"url" : "/800-63-4/sp800-63c/fal/",
"date" : "2024-08-28 20:39:12 -0500",
"content" : "Federation Assurance Level (FAL)\n\nThis section is normative.\n\nThis section defines federation assurance levels (FALs) and the requirements for securing federation transactions at each FAL. In order to fulfill the requirements for a given FAL, the federation transaction SHALL meet or exceed all requirements listed for that FAL.\n\nEach FAL is characterized by a set of requirements that increase the security and complexity as the FAL increases. These requirements are listed here and expanded in other sections of this document:\n\n\n Audience Restriction\n The assertion presented in the federation protocol is targeted to a specific RP and the RP can confirm that it is the intended audience of the assertion.\n Injection Protection\n The RP is strongly protected from an attacker presenting an assertion in circumstances outside a current federation transaction request. (See Sec. 3.10.1 for details on injection protection.)\n Trust Agreement Establishment\n The agreement to participate in a federation transaction for the purposes of creating an authenticated session for the subscriber at the RP. (See Sec. 3.4 for details of the trust agreement.)\n Identifier and Key Establishment\n The IdP and RP have exchanged identifiers and key material to allow for the verification of assertions and other artifacts during future federation transactions. (See Sec. 3.5 for details of key establishment.)\n Presentation\n The assertion can be presented to the RP either on its own (as a bearer assertion) or in concert with an authenticator presented by the subscriber.\n\n\nTable 1 provides a non-normative summary of aspects for each FAL. Each successive level subsumes and fulfills all requirements of lower levels (e.g., a federation process at FAL3 can be accepted at FAL2 or FAL1 since FAL3 satisfies all the requirements of these lower levels). 
Combinations not found in Table 1 are possible, and agencies can choose to implement stronger protections in one or more areas of requirements at a given FAL.\n\nTable 1. Federation Assurance Levels\n\n\n \n \n Requirement\n FAL1\n FAL2\n FAL3\n \n \n \n \n Audience Restriction\n Multiple RPs allowed per assertion, Single RP per assertion recommended\n Single RP per assertion\n Single RP per assertion\n \n \n Injection Protection\n Recommended for all transactions\n Required; transaction begins at the RP\n Required; transaction begins at the RP\n \n \n Trust Agreement Establishment\n Subscriber-driven or A priori\n A priori\n A priori\n \n \n Identifier and Key Establishment\n Dynamic or Static\n Dynamic or Static\n Static\n \n \n Presentation\n Bearer Assertion\n Bearer Assertion\n Holder-of-Key Assertion or Bound Authenticator\n \n \n\n\nWhile many different federation implementation options are possible, the FAL is intended to provide clear guidance representing increasingly secure deployment options. See [SP800-63] for details on how to choose the most appropriate FAL.\n\n\n Note: In these guidelines, assertions, attribute bundles, and other elements of the federation protocol are protected by asymmetric digital signatures or symmetric MACs. When either asymmetric or symmetric cryptography is specifically required, the terms “sign” and “signature” will be qualified as appropriate to indicate the requirement. When either option is possible, the terms “sign” and “signature” are used without a qualifier.\n\n\nCommon FAL Requirements\n\nAt all FALs, all federation transactions SHALL comply with the requirements in Sec. 3 to deliver an assertion to the RP and create an authenticated session at the RP. 
Examples of assertions used in federation protocols include the ID Token in OpenID Connect [OIDC] and the Security Assertion Markup Language [SAML] Assertion format.\n\nAt all FALs, the RP needs to trust the IdP to provide valid assertions representing the subscriber’s authentication event and SHALL validate the assertion.\n\nIdPs and RPs SHALL employ appropriately tailored security controls from the moderate baseline security controls defined in [SP800-53] or an equivalent federal (e.g., [FEDRAMP]) or industry standard that the organization has determined for the information systems, applications, and online services that these guidelines are used to protect. IdPs and RPs SHALL ensure that the minimum assurance-related controls for the appropriate systems, or equivalent, are satisfied. Additional security controls are discussed in Sec. 3.10.\n\nIf no FAL is specified by the trust agreement or federation transaction, the requirements of this section still apply.\n\nAn IdP or RP can be capable of operating at multiple FALs simultaneously, depending on use case and needs. For example, an IdP could provide FAL3 federation transactions to a high-risk RP while providing FAL2 to an RP with a lower risk profile. Similarly, an RP could require FAL2 for normal actions but require the subscriber to re-authenticate with FAL3 for higher impact or more sensitive actions. This capability extends to other dimensions, as an IdP could simultaneously have access to subscriber accounts that have been proofed at any IAL and allow authentication at any AAL. However, an RP talking to that IdP could have restrictions on the lowest IAL and AAL it is willing to accept for access. 
As a consequence, it is imperative that the trust agreement establish the xALs allowed and required for different use cases.\n\nFederation Assurance Level 1 (FAL1)\n\nFAL1 provides a basic level of protection for federation transactions, allowing for a wide range of use cases and deployment decisions.\n\nAt FAL1, the IdP SHALL sign the assertion using approved cryptography. The RP SHALL validate the signature using the key associated with the expected IdP. The signature protects the integrity of the assertion contents and allows for the IdP to be verified as the source of the assertion.\n\nAll assertions at FAL1 SHALL be audience-restricted to a specific RP or set of RPs, and the RP SHALL validate that it is one of the targeted RPs for the given assertion.\n\nAt FAL1, the trust agreement MAY be established by the subscriber during the federation transaction. Note that at FAL1, it is still possible for the trust agreement to be established a priori by the RP and IdP.\n\nAt FAL1, the federation protocol SHOULD apply injection protection as discussed in Sec. 3.10.1. The federation transaction SHOULD be initiated by the RP.\n\nFederation Assurance Level 2 (FAL2)\n\nFAL2 provides a high level of protection for federation transactions, providing protections against a variety of attacks against federated systems. All the requirements for FAL1 apply at FAL2 except where overridden by more specific or stringent requirements here.\n\nAt FAL2, the assertion SHALL be strongly protected from injection attacks, as discussed in Sec. 3.10.1. 
The federation transaction SHALL be initiated by the RP.\n\nAt FAL2, the assertion SHALL be audience-restricted to a single RP.\n\nAt FAL2, an a priori trust agreement SHALL be established prior to the federation transaction taking place.\n\nIdPs operated by or on behalf of federal agencies that present assertions at FAL2 or higher SHALL protect keys used for signing or encrypting those assertions with mechanisms validated at [FIPS140] Level 1 or higher.\n\nFederation Assurance Level 3 (FAL3)\n\nFAL3 provides a very high level of protection for federation transactions, establishing very high confidence that the subscriber asserted by the IdP is the subscriber present in the authenticated session. All the requirements at FAL1 and FAL2 apply at FAL3 except where overridden by more specific or stringent requirements here.\n\nAt FAL3, the RP SHALL verify that the subscriber is in control of an authenticator in addition to the assertion. This authenticator is either identified in a holder-of-key assertion as described in Sec. 3.14 or is a bound authenticator as described in Sec. 3.15.\n\nAt FAL3, the trust agreement SHALL be established such that the IdP can identify and trust the RP to abide by all aspects of the trust agreement prior to any federation transaction taking place. To facilitate this, the key material used to authenticate the RP and IdP to each other is associated with the identifiers for the RP and IdP in a static fashion using a trusted mechanism. For example, a public key file representing the RP is uploaded to the IdP during a static registration process, and the RP downloads the IdP’s public key from a URL indicated in the trust agreement. 
Alternatively, the trust agreement can dictate that the RP and IdP can upload their respective public keys to a federation authority and then download each other’s keys from that same trusted authority.\n\nIdPs operated by or on behalf of federal agencies that present assertions at FAL3 SHALL protect keys used for signing or encrypting those assertions with mechanisms validated at [FIPS140] Level 1 or higher.\n\nRequesting and Processing xALs\n\nSince an IdP is capable of asserting the identities of many different subscribers with a variety of authenticators using a variety of federation parameters, the IAL, AAL, and FAL could vary across different federation transactions, even to the same RP.\n\nIdPs SHALL support a mechanism for RPs to specify a set of minimum acceptable xALs as part of the trust agreement and SHOULD support the RP specifying a more strict minimum set at runtime as part of the federation transaction. When an RP requests a particular xAL, the IdP SHOULD fulfill that request, if possible, and SHALL indicate the resulting xAL in the assertion. For example, if the subscriber has an active session that was authenticated at AAL1, but the RP has requested AAL2, the IdP needs to prompt the subscriber for AAL2 authentication to step up the security of the session at the IdP during the subscriber’s interaction at the IdP, if possible. 
The IdP sends the resulting AAL as part of the returned assertion, whether it is AAL1 (the step-up authentication was not met) or AAL2 (the step-up authentication was met successfully).\n\nThe IdP SHALL inform the RP of the following information for each federation transaction:\n\n\n The IAL of the subscriber account being presented to the RP, or an indication that no IAL claim is being made\n The AAL of the currently active session of the subscriber at the IdP, or an indication that no AAL claim is being made\n The FAL of the federation transaction\n\n\nThe RP gets this xAL information from a combination of the terms of the trust agreement as described in Sec. 3.4 and information included in the assertion as described in Sec. 4.9 and Sec. 5.8. If the xAL is unchanging for all messages between the IdP and RP, the xAL information SHALL be included in the terms of the trust agreement between the IdP and RP. If the xAL could be within a range of possible values specified by the trust agreement, the xAL information SHALL be included as part of the assertion contents.\n\nThe IdP MAY indicate that no claim is made to the IAL or AAL for a given federation transaction. In such cases, no default value is assigned to the resulting xAL by the RP. That is to say, a federation transaction without an IAL declaration in either the trust agreement or the assertion is functionally considered to have “no IAL” and the RP cannot assume the account meets “IAL1”, the lowest numbered IAL described in this suite.\n\nThe RP SHALL determine the minimum IAL, AAL, and FAL it is willing to accept for access to any offered functionality. An RP MAY vary its functionality based on the IAL, AAL, and FAL of a specific federated authentication. For example, an RP can allow federation transactions at AAL2 for common functionality (e.g., viewing the status of a dam system) but require AAL3 be used for higher risk functionality (e.g., changing the flow rates of a dam system). 
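A minimal sketch of the RP-side policy check described above (assuming xALs are conveyed as simple numeric levels; the names are illustrative) could look like the following, where a missing declaration means “no claim” and never satisfies a required minimum:

```python
def meets_minimums(asserted, minimums):
    # asserted: xALs declared via the trust agreement and assertion,
    #           e.g., {"IAL": 2, "AAL": 2, "FAL": 2}
    # minimums: the RP's required floor for the requested functionality.
    for axis, required in minimums.items():
        declared = asserted.get(axis)
        # An absent axis is "no claim" and is never given a default level.
        if declared is None or declared < required:
            return False
    return True
```

An RP could keep a separate `minimums` table per function, for example requiring a higher AAL only for its higher risk operations.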
Similarly, an RP could restrict management functionality to only certain subscriber accounts which have been identity proofed at IAL2, while allowing federation transactions from all subscriber accounts regardless of IAL.\n\nIn a federation process, only the IdP has direct access to the details of the subscriber account, which determines the applicable IAL, and the authentication event at the IdP, which determines the applicable AAL. Consequently, the IdP declares the IAL, AAL, and intended FAL for each federation transaction.\n\nThe RP SHALL ensure that it meets its obligations in the federation transaction for the FAL declared in the assertion. For example, the RP needs to ensure the presentation method meets the injection protection requirements at FAL2 and above, and that the appropriate bound authenticator is presented at FAL3.\n"
} ,
{
"title" : "Common Federation Requirements",
"category" : "",
"tags" : "",
"url" : "/800-63-4/sp800-63c/Federation/",
"date" : "2024-08-28 20:39:12 -0500",
"content" : "Common Federation Requirements\n\nThis section is normative.\n\nA federation transaction serves to allow the subscriber to establish an authenticated session with the RP based on a subscriber account known to the IdP. The federation transaction can also provide the RP with a set of identity attributes within the authenticated session. The authenticated session can then be used by the RP for:\n\n\n logging in the subscriber to access functionality at the RP,\n identifying the subscriber based on presented attributes, and\n processing the subscriber attributes presented in the federation transaction.\n\n\nA federation transaction requires relatively complex multiparty protocols that have subtle security and privacy requirements. When evaluating a particular federation protocol, profile, or deployment structure, it is often instructive to break it down into its component relationships and evaluate the needs for each of these:\n\n\n the subscriber to the CSP,\n the CSP to the IdP,\n the subscriber to the IdP,\n the IdP to the RP, and\n the subscriber to the RP.\n\n\nIn addition, the subscriber often interacts with the CSP, IdP, and RP through a user agent like a web browser. The user agent is therefore often involved in the federation process, but it is not necessary for all types of applications and interactions. As such, the actions of the subscriber described throughout these guidelines can optionally be performed through a user agent. 
Where necessary, requirements on the user agent are called out directly.\n\nEach party in a federation protocol bears specific responsibilities and expectations that must be fulfilled in order for the federated system to function as intended.\n\nThe subscriber account is augmented by the IdP with federation-specific items, including but not limited to the following:\n\n\n One or more external subject identifiers, for use with a federation protocol\n A set of access rights, detailing which RPs can access which attributes of the subscriber account (such as allowlists and saved runtime decisions by the subscriber)\n Federated account usage information\n Additional attributes collected by or assigned by the IdP to the account\n\n\nA subset of these attributes is made available to the RP through the federation process, either in the assertion or through an identity API (see Sec. 3.11.3). These attributes are often used in determining access privileges for attribute-based access control (ABAC) or facilitating a transaction (e.g., providing a shipping address). The details of authorization and access control are outside the scope of these guidelines.\n\nTo keep and manage these attributes, the RP often maintains an RP subscriber account for the subscriber. The RP subscriber account also contains information local to the RP itself, as described in Sec. 3.7.\n\nFederation transactions take place across three dimensions:\n\n\n Trust Agreements:\n The establishment of a policy decision that allows the CSP, IdP, and RP to connect for the purposes of federation. This policy is governed by a trust agreement, which establishes the permission to connect.\n Associating Keys and Identifiers:\n The association of keys and identifiers for the CSP, IdP, and RP that take part in the federation transaction. 
This process enables the parties to identify each other securely for future exchanges.\n Federation Protocol:\n The verification of the subscriber’s identity by the IdP and subsequent issuance of an assertion to the RP. This results in the passing of subscriber attributes to the RP and establishing an authenticated session for the subscriber at the RP.\n\n\nThese dimensions all need to be fulfilled for a federation process to be complete. The exact order in which that happens, and which parties are involved in which steps, can vary depending on deployment models and other factors.\n\nThe requirements for IdPs in this section apply to both general-purpose IdPs as discussed in Sec. 4 and subscriber-controlled wallets as discussed in Sec. 5.\n\nRoles\n\nCredential Service Provider (CSP)\n\nThe CSP collects and verifies attributes from the subscriber and stores them in a subscriber account. The CSP also binds one or more authenticators to the subscriber account, allowing the subscriber to authenticate directly to systems capable of verifying an authenticator.\n\nIdentity Provider (IdP)\n\nThe IdP provides a bridge between the subscriber account (as established by the CSP) and the RP that the subscriber is accessing. An IdP can be deployed as a service for multiple subscriber accounts or as a component controlled by a single subscriber.\n\nThe IdP establishes an authentication event with the subscriber, either through the verification of an authenticator (for general-purpose IdPs) or presentation of an activation factor (for subscriber-controlled wallets). The IdP creates assertions to represent the authentication event.\n\nThe IdP makes identity attributes of the subscriber available within the assertion or through an identity API (see Sec. 3.11.3).\n\nIn some systems, this is also known as the OpenID Provider (OP).\n\nRelying Party (RP)\n\nThe RP processes assertions from the IdP and provides the service that the subscriber is trying to access. 
Unlike in a direct authentication model, the RP does not provide the verifier function to authenticators tied to the subscriber account.\n\nIn some systems, this is also known as the service provider (SP).\n\nFunctions\n\nTrust Agreement Management\n\nThe trust agreement (see Sec. 3.4) can be managed through a dedicated party, known as a federation authority. The federation authority facilitates the onboarding and management of parties fulfilling different roles and functions within a trust agreement. This management provides a transitive trust to other parties in the agreement.\n\nFor example, an RP can enter a trust agreement with a federation authority and decide that any IdP approved by that federation authority is suitable for its purposes. This trust can hold true whether or not the IdP was covered by the trust agreement at the time the RP joined. Federation authorities are used in multilateral trust agreements as discussed in Sec. 3.4.2.\n\nAuthorized Party\n\nThe authorized party in a trust agreement is the organization, person, or entity that is responsible for the specific release decisions covered by the trust agreement, including the release of subscriber attributes. The trust agreement stipulates who the expected authorized party is, as well as the parameters under which a request could be automatically granted, automatically denied, or require a runtime decision from an individual. For public-facing scenarios, the authorized party is expected to be the subscriber. For enterprise scenarios, the authorized party is expected to be the agency.\n\nIf the authorized party is the operator of the IdP, consent to release attributes is decided for all subscribers and established by an allowlist as described in Sec. 4.6.1.1, allowing for the disclosure of identity attributes without direct decisions and involvement by the subscriber. 
A trust agreement can alternatively stipulate that an individual, such as the subscriber, is to be prompted at runtime for consent to disclose certain attributes to the RP as discussed in Sec. 4.6.1.3. If specified by the trust agreement, it is also possible for an individual other than a subscriber to act as the authorized party. For example, an administrator of a system could be prompted to release attribute information on behalf of a subscriber as part of a provisioning API.\n\nExamples of different authorized parties are found in Sec. 10.10.\n\nProxied Federation\n\nA federation proxy acts as an intermediary between the IdP and RP for all communication in the federation protocol. The proxy functions as an RP on the upstream side and an IdP on the downstream side, as shown in Fig. 1. When communicating through a proxy, the upstream IdP and downstream RP communicate with the proxy using a standard federation protocol, and the subscriber takes part in two separate federation transactions. As a consequence, all normative requirements that apply to IdPs and RPs SHALL apply to proxies in their respective roles on each side. Additionally, it is possible for a proxy to act as an upstream IdP to another proxy downstream, and so on in a chain.\n\nFig. 1. Federation Proxy\n\n\n\nThe role of the proxy is limited to the federation protocol; it is not involved in establishment or facilitation of a trust agreement between the upstream IdP and downstream RP. The same party can operate a federation authority as well as a proxy to facilitate federation transactions, but this function is separate from their role in managing the trust agreement. Just like other members of a federation system, the proxy can be involved in separate trust agreements with each of the upstream and downstream components, or a single trust agreement can apply to all parties such as in a multilateral agreement.\n\nThe federated identifier (see Sec. 
3.3) of an assertion from a proxy SHALL indicate the proxy as the issuer of the assertion. The downstream RP receives and validates the assertion generated by the proxy, as it would an assertion from any other IdP. This assertion is based on the assertion the proxy receives from the upstream IdP. The contents of the assertion from the upstream IdP can be handled in several ways, depending on the method of proxying in use:\n\n\n The proxy can create an all-new assertion with no information from the assertion from the upstream IdP carried in it. This pattern is useful for blinding the downstream RP, so that the RP does not know which upstream IdP the subscriber originally came from.\n The proxy can copy attributes from the assertion from the upstream IdP into the assertion from the proxy. This pattern is useful for carrying identity attributes in the assertion to the downstream RP.\n The proxy can include the entire assertion from the upstream IdP in the assertion from the proxy. This pattern allows the RP to independently validate the assertion from the upstream IdP as well as the assertion from the proxy.\n\n\nA proxied federation model can provide several benefits. Federation proxies can simplify technical integration between the RP and IdP by providing a common interface for integration. Additionally, to the extent a proxy effectively blinds the RP and IdP from each other, it can provide some business confidentiality for organizations that want to guard their subscriber lists from each other. Proxies can also mitigate some of the privacy risks described in Sec. 3.9, though other risks arise from their use since an additional party is now involved in handling subscriber information. For example, if an attacker is able to compromise the proxy, the attacker need not target the IdP or RP directly in order to gain access to subscriber attributes or activity since all of that information flows through the proxy. 
Additionally, the proxy can perform additional profiling of the subscriber beyond what the IdP and RP can do, since the proxy brokers the federation transactions between the parties and binds the subscriber account to either side of the connection.\n\nSee Sec. 7.5 for further information on blinding techniques, their uses, and limitations.\n\nThe FAL of the connection between the proxy and the downstream RP is considered as the lowest FAL along the entire path, and the proxy SHALL accurately represent this to the downstream RP. For example, if the connection between the upstream IdP and the proxy is FAL1 and the connection between the proxy and the downstream RP otherwise meets the requirements of FAL2, the connection between the proxy and the downstream RP is still considered FAL1. Likewise, if the connection between the upstream IdP and the proxy is FAL2 and the connection between the proxy and the downstream RP is only FAL1, the overall connection through the proxy is considered FAL1.\n\nFulfilling Roles and Functions of a Federation Model\n\nThe roles in a federation transaction can be connected in a variety of ways, but several common patterns are anticipated by these guidelines. The expected trust agreement structure and connection between components will vary based on which pattern is in use.\n\nDifferent roles and functions can be fulfilled by separate parties who integrate with each other. For example, a CSP can provide attributes of the subscriber account to an IdP that is not operated by the same party or agency as the CSP.\n\nIt is also possible for a single party to fulfill multiple roles within a given federation agreement. For example, if the CSP provides the IdP as part of its identity services, the CSP can provision the subscriber accounts at the IdP as part of the subscriber account establishment process. 
Similarly, the RP can also be in the same security and administrative domain as the IdP, but still use federation technology to connect for technical, deployment, and account management benefits.\n\nThe same is true for other functions in the overall federation system, such as a federation authority and proxy. While the roles may seem similar, they are fundamentally distinct and do not need to be connected: a federation authority facilitates establishment of a trust agreement between parties, and a proxy facilitates connection of the federation protocol by acting as an RP to the upstream IdP and as an IdP to the downstream RP. The same entity can fulfill both the federation authority and proxy functions in the system, providing both a means of establishing trust agreements and a means of establishing technical connections between IdPs and RPs.\n\nFederated Identifiers\n\nThe subscriber SHALL be identified in the federation transaction using a federated identifier unique to that subscriber. A federated identifier is the logical combination of a subject identifier, representing a subscriber account, and an issuer identifier, representing the IdP. The subject identifier is assigned by the IdP, and the issuer identifier is assigned to the IdP usually through configuration.\n\nThe multi-part federated identifier pattern is required because different IdPs manage their subject identifiers independently, and could therefore potentially collide in their choices of subject identifiers for different subjects. Therefore, it is imperative that an RP never process the subject identifier without taking into account which IdP issued that subject identifier. For most use cases, the federated identifier is stable for the subscriber across multiple sessions and is independent of the authenticator used, allowing the RP to reliably identify the subscriber across multiple authenticated sessions and account changes. 
However, it is also possible for the federated identifier and its associated use at the RP to be ephemeral, providing some privacy enhancement. Federated identifiers, and their constituent parts, are intended to be machine-readable and not managed by or exposed to the subscriber, unlike a username or other human-facing identifier.\n\nFederated identifiers SHALL contain no plaintext personally identifiable information (PII), such as usernames, email addresses, or employee numbers.\n\nPairwise Pseudonymous Identifiers (PPI)\n\nIn some circumstances, it is desirable to prevent the subscriber account from being easily linked at multiple RPs through use of a common subject identifier. The use of a pairwise pseudonymous identifier (PPI) allows an IdP to provide multiple distinct federated identifiers to different RPs for a single subscriber account. Use of a PPI prevents different RPs from colluding together to track the subscriber using the federated identifier.\n\nGeneral Requirements\n\nWhen using pairwise pseudonymous identifiers within the assertions generated by the IdP for the RP, the IdP SHALL generate a different federated identifier for each RP as described in Sec. 3.3.1.2 or set of RPs as described in Sec. 3.3.1.3.\n\nSome identity attributes such as names, physical address, phone numbers, email addresses, and others can be used to identify a subscriber outside of a federation transaction. When PPIs are used alongside these kinds of identifying attributes, it may still be possible for multiple colluding RPs to re-identify a subscriber by correlation across systems. For example, if two independent RPs each see the same subscriber identified with a different PPI, the RPs could still determine that the subscriber is the same person by comparing the name, email address, physical address, or other identifying attributes carried alongside the PPI in the respective assertions. 
Where PPIs are used alongside identifying attributes, privacy policies SHALL be established to prevent correlation of subscriber data consistent with applicable legal and regulatory requirements.\n\nNote that in a proxied federation model (see Sec. 3.2.3), the upstream IdP may be unable to generate a PPI for the downstream RP, since the proxy could blind the IdP from knowing which RP is being accessed by the subscriber. In such situations, the PPI is generally established between the IdP and the federation proxy. The proxy, acting as an IdP, can provide a PPI to the downstream RP. Depending on the protocol, the federation proxy may need to map the PPI back to the associated identifiers from upstream IdPs in order to allow the identity protocol to function. In such cases, the proxy will be able to track and determine which PPIs represent the same subscriber at different RPs. The proxy SHALL NOT disclose the mapping between the PPI and any other identifiers to a third party or use the information for any purpose other than those allowed for transmission of subscriber information defined in Sec. 3.9.1.\n\nPairwise Pseudonymous Identifier Generation\n\nThe PPI SHALL contain no identifying information about the subscriber (e.g., username, email address, employee number, etc.). The PPI SHALL be difficult to guess by a party having access to information about the subscriber, having at least 112 bits of entropy as stated in [SP800-131A]. 
PPIs can be generated randomly and assigned to subscribers by the IdP or could be derived from other subscriber information if the derivation is done in an irreversible, unguessable manner (e.g., using a keyed hash function with a secret key as discussed in [SP800-131A]).\n\nUnless the PPI is designated as shared by the trust agreement, the PPI SHALL be disclosed to only a single RP.\n\nShared Pairwise Pseudonymous Identifiers\n\nThe same shared PPI SHALL be used for a specific set of RPs if all the following criteria are met:\n\n\n The trust agreement stipulates a shared PPI for a specific set of RPs;\n The authorized party consents to and is notified of the use of a shared PPI;\n Those RPs have a demonstrable relationship that justifies an operational need for the correlation, such as a shared security domain or shared legal ownership; and\n All RPs in the set of a shared PPI consent to being correlated in such a manner (i.e., one RP cannot request to have another RP’s PPI without that other RP’s knowledge and consent).\n\n\nThe RPs SHALL conduct a privacy risk assessment to consider the privacy risks associated with requesting a shared PPI. See Sec. 7.2 for further privacy considerations.\n\nThe IdP SHALL ensure that only intended RPs are included in the set; otherwise, a rogue RP could learn of the shared PPI for a set of RPs by fraudulently posing as part of that set.\n\nThe sector identifier feature of [OIDC] provides a mechanism to calculate a shared PPI for a group of RPs. In this protocol, the identifiers of the RPs are all listed at a URL that can be fetched by the IdP over an authorized protected channel. 
The shared PPI is calculated by taking into account the sector identifier URL along with other inputs to the algorithm, such that all RPs listed in the sector identifier URL’s contents receive the same shared PPI.\n\nTrust Agreements\n\nAll federation transactions SHALL be defined by one or more trust agreements between the applicable parties.\n\nThe trust agreement SHALL establish a trust relationship between the RP and:\n\n\n The CSP responsible for provisioning and managing the subscriber account,\n The IdP responsible for providing assertions and attributes, or\n Both the CSP and IdP.\n\n\nTrust agreements establish the terms for federation transactions between the parties they affect, including things like the allowed xALs and the intended purposes of identity attributes exchanged in the federation transaction. The trust agreement SHALL establish usability and equity requirements for the federation transaction. The trust agreement SHALL disclose details of the proofing process used at the CSP, including any compensating controls and exception handling processes.\n\nAll trust agreements SHALL define a specific population of subscriber accounts that the agreement is applicable to. The exact means of defining this population are out of scope of this document. In many cases, the population is defined as the full set of subscriber accounts that the CSP manages and makes available through an IdP. In other cases, the population is a demarcated subset of accounts available through an IdP. It is also possible for an RP to have a distinct trust agreement established with an IdP for a single subscriber account, such as in a subscriber-driven trust agreement.\n\nDuring the course of a single federation transaction, it is important that the policies and expectations be unambiguous for all parties involved. Therefore, there SHOULD be only one set of trust agreements in effect for a given transaction. 
This will usually be determined by the unique combination of CSP, IdP, and RP participating in the transaction. However, these agreements could vary in other ways, such as different populations of subscribers being governed by different trust agreements.\n\nThe existence of a trust agreement between parties does not preclude each party from having other agreements with other parties. For example, an IdP can have independent agreements with multiple RPs simultaneously, and an RP can likewise have independent agreements with multiple IdPs simultaneously. The IdP and RP need not disclose the existence or terms of trust agreements to parties outside of or not covered by the agreement in question.\n\nTrust agreements SHALL establish terms regarding expected and acceptable IALs and AALs in connection with the federated relationship.\n\nTrust agreements SHALL define necessary mechanisms and materials to coordinate redress and issues between the different participants in the federation, as discussed in Sec. 3.4.3.\n\nEstablishment of a trust agreement is required for all federation transactions, even those in which the roles and applications exist within a single security domain or shared legal ownership. In such cases, the establishment of the trust agreement can be an internal process and does not need to involve a formal agreement. Even in such cases, it is still required for the IdP to document and disclose the trust agreement to the subscriber upon request.\n\nEven though subscribers are not generally a party directly involved in the trust agreement’s terms, subscribers are affected by the terms of the trust agreement and the resulting federation transactions. As such, the terms of the trust agreement need to be made available to subscribers in clear and understandable language. 
The means by which the subscriber can access these terms, and the party responsible for informing the subscriber, varies based on the means of establishment of the trust agreement and the terms of the trust agreement itself. Additionally, the subscriber’s user agent is not usually party to the trust agreement, unless it is acting in one of the roles of the federation transaction.\n\nBilateral Trust Agreements\n\nIn a bilateral trust agreement, the establishment of the trust agreement occurs directly between the federated parties, and the trust agreement is not managed or facilitated by a separate party. Bilateral trust agreements allow for a point-to-point connection to be established between organizations wishing to provide federated identity access to services. Bilateral connections can take many forms, including large enterprise applications with static contracts and subscriber-driven dynamic connections to previously unknown RPs. In all cases, the CSP, IdP, and RP manage their policies regarding the federated connection directly.\n\nBilateral trust agreements impose no additional requirements beyond those needed to establish the trust agreement itself.\n\nMultilateral Trust Agreements\n\nIn a multilateral trust agreement, the federated parties look to a federation authority to assist in establishing the trust agreement between parties. In this model, the federation authority facilitates the inclusion of CSPs, IdPs, and RPs under the trust agreement.\n\nWhen onboarding a party in any role, the federation authority conducts vetting on that party to verify its compliance with the tenets of the trust agreement. The level of vetting is unique to the use cases and models employed within the federation, and details are outside the scope of this document. This vetting is depicted in Fig. 2.\n\nFig. 2. 
Federation Authority\n\n\n\nThe trust agreement SHALL enumerate the required practices for vetting all parties, and SHALL indicate the party or parties responsible for performing the vetting process.\n\nVetting of CSPs, IdPs, and RPs SHALL establish, as a minimum, that:\n\n\n CSPs are performing identity proofing of subscriber accounts in accordance with [SP800-63A]\n CSPs onboard subscriber accounts to IdPs in a secure fashion in adherence to the requirements in Sec. 4.1 or Sec. 5.4 as applicable\n Authenticators used for authenticating the subscriber at the IdP or onboarding a subscriber-controlled wallet are used in accordance with [SP800-63B]\n Assertions generated by IdPs adhere to the requirements in Sec. 4.9 or Sec. 5.8.\n RPs adhere to requirements for handling subscriber attribute data, such as retention, aggregation, and disclosure to third parties.\n RP and IdP systems use approved profiles of federation protocols.\n\n\nThe federation authority MAY provide a programmatic means for parties under the trust agreement to verify membership of other parties under the trust agreement. For example, a federation authority could provide a discovery API that provides the vetted capabilities of an IdP for providing identities to RPs within the system, or it could provide a signed attestation for RPs to present to IdPs during a registration step.\n\nFederation authorities SHALL periodically re-evaluate members for compliance, in terms disclosed in the trust agreement.\n\nWhen information needs to be shared between CSPs, such as during suspicion of fraud on a subscriber account, the federation authority can define the policies that apply for the transfer of this information. While sharing information in this way can be used to mitigate fraud, there are also substantial privacy concerns. 
The federation authority SHALL include all information sharing between parties other than for identity purposes in its privacy risk assessment.\n\nA federation authority MAY incorporate other multilateral trust agreements managed by other federation authorities in its trust agreement, creating an interfederation agreement. For example, IdP1 has been vetted under a multilateral agreement with FA1, and RP2 has been vetted under a multilateral agreement with FA2. In order to facilitate connection between IdP1 and RP2, a new federation authority FA3 can provide a multilateral agreement that accepts IdPs from FA1 and RPs from FA2. If IdP1 and RP2 accept the authority of FA3, the federation connection can continue under the auspices of this interfederation agreement.\n\nRedress Requirements\n\nFederation transactions occur between multiple parties that are often controlled by multiple entities, and different stages of the federation transaction can lead to different situations in which a subscriber would need to seek redress from the different parties.\n\nAs the recipient of a subscriber’s identity attributes, the RP is the subscriber’s primary view into the federated system, and in some instances the subscriber may be unaware that an IdP is involved with their use of the RP. Therefore, it falls to the RP to provide the subscriber with a clear and accessible method of contacting the RP to request redress. For matters that involve the RP subscriber account (including any attributes stored in the account), RP functionality, bound authenticators, RP allowlists, and other items under the RP’s control, the RP SHALL provide clear and accessible means of redress to the subscriber. 
For matters that involve the IdP or CSP, the RP SHALL provide the subscriber with a means of initiating the redress process with the IdP or CSP, as appropriate.\n\nFor matters involving the use of the subscriber account in federation transactions, including attribute values and derived attribute values made available over federation transactions, IdP functionality, holder-of-key authenticators, IdP allowlists, and other items in the IdP’s control, the IdP SHALL provide clear and accessible means of redress to the subscriber. For matters that also involve a particular RP, the IdP SHALL provide the subscriber with a means of initiating the redress process with the RP. For matters involving the subscriber account that has been made available to the IdP, the IdP SHALL provide the subscriber with a means of initiating the redress process with the CSP.\n\nFor matters involving the subscriber account, including identity attributes and authenticators in the subscriber account, the CSP SHALL provide the subscriber with a clear and accessible means of redress.\n\nSee Sec. 3.6 of [SP800-63] for more requirements on providing redress.\n\nIdentifiers and Cryptographic Key Management for CSPs, IdPs, and RPs\n\nWhile a trust agreement establishes permission to federate, it does not facilitate the secure connection of parties in the federation.\nIn order to communicate over a federation protocol, the CSP, IdP, and RP need to be able to identify each other in a secure fashion, with the ability to associate identifiers with cryptographic keys and related security artifacts.\nIn this way, an RP can ensure that an assertion is coming from the intended IdP, or that an attribute bundle is coming from the intended CSP. Likewise, an IdP can ensure that it is sending an assertion to the intended RP.\n\nThe process of an RP establishing cryptographic keys and identifiers for an IdP or CSP is known as discovery. 
The process of the IdP establishing cryptographic keys and identifiers for the RP is known as registration. Both the discovery and registration processes can happen prior to any federation transaction happening, or inline as part of the transaction itself. Both the discovery and registration processes can happen directly between parties or be facilitated through use of a third party service. Different federation protocols and processes have different processes for establishing these cryptographic keys and identifiers, but the end result is that each party can properly identify others as necessary within the protocol.\n\nThe discovery and registration processes SHALL be established in a secure fashion as defined by the trust agreement governing the transaction. Protocols requiring the transfer of cryptographic key information SHALL use an authenticated protected channel to exchange cryptographic key information needed to operate the federated relationship, including any shared secrets or public keys. Any symmetric keys used in this relationship SHALL be unique to a pair of federation participants.\n\nCSPs, IdPs (including subscriber-controlled wallets), and RPs MAY have multiple cryptographic keys and identifiers to serve different purposes within a trust agreement, or to serve different trust agreements. For example, an IdP could use one set of assertion signing keys for all FAL1 and FAL2 transactions, but use a separately managed set of cryptographic keys for FAL3 transactions, stored in a higher security container.\n\nWhen domain names, URIs, or other structured identifiers are used to identify parties, wildcards SHALL NOT be used. For example, if an RP is deployed at “www.example.com”, “service.example.com”, and “gateway.example.com”, then each of these identifiers would have to be registered for the RP. 
A wildcard of “*.example.com” cannot be used, as it would unintentionally allow access to “user.example.com” and “unknown.example.com” under the same RP identifier.\n\nCryptographic Key Rotation\n\nOver time, it can be desirable or necessary to update the cryptographic key associated with a CSP, IdP, or RP. The allowable update process for any cryptographic keys and identifiers SHALL be defined by the trust agreement and SHALL be executed using an authenticated protected channel, as in the initial cryptographic key establishment.\n\nFor example, if the IdP is identified by a URL, the IdP could publish its current public key set at a location underneath that URL. RPs can then fetch the public key from the known location as needed, getting updated public keys as they are made available.\n\nCryptographic Key Storage\n\nCSPs, IdPs (including subscriber-controlled wallets), and RPs SHALL store all private and shared keys used for signing, encryption, and any other cryptographic operations in a secure fashion. Key storage is subject to applicable [FIPS140] requirements of the FAL at which the key is being used, including applicable tamper resistance requirements.\n\nSome circumstances, such as reaching FAL3 with a subscriber-controlled wallet, require the cryptographic keys to be stored in a non-exportable manner. To be considered non-exportable, key storage SHALL either be a separate piece of hardware or an embedded processor or execution environment, e.g., secure element, trusted execution environment (TEE), or trusted platform module (TPM). These hardware modules or embedded processors are separate from a host processor such as the CPU on a laptop or mobile device. 
Non-exportable key storage SHALL be designed to prohibit the export of the secret keys to the host processor and SHALL NOT be capable of being reprogrammed by the host processor to allow the secret keys to be extracted.\n\nSoftware Attestations\n\nSoftware and device attestation can be used to augment the establishment of cryptographic keys and identifiers, especially in dynamic and distributed systems. Attestations in this usage are cryptographically-bound statements that a particular piece of software, device, or runtime system meets a set of agreed-upon parameters. The attestation is presented by the software in the context of establishing its identity, and allows the receiver to verify the request with a higher degree of certainty than they would otherwise.\n\nFor example, a specific distribution of subscriber-controlled wallet software can be signed by its distributor, allowing RPs to recognize individual instances of that software. Alternatively, an RP could be issued an attestation from a federation authority, allowing IdPs to recognize the RP as part of the federation.\n\nWhen attestations are required by the trust agreement or requested as part of the federation protocol, received attestations SHALL be validated by the receiver.\n\nSee [RFC7591] Sec. 2.3 for more information about software statements, which are a means for OAuth and OpenID Connect RPs to communicate a signed set of software attributes during dynamic client registration.\n\nAuthentication and Attribute Disclosure\n\nOnce the IdP and RP have entered into a trust agreement and have completed registration, the federation protocol can be used to pass subscriber attributes from the IdP to the RP.\n\nA subscriber’s attributes SHALL be transmitted between IdP and RP only for federation transactions or support functions such as identification of compromised subscriber accounts as discussed in Sec. 3.9. 
A subscriber’s attributes SHALL NOT be transmitted for any other purposes, even when parties are allowlisted.\n\nA subscriber’s attributes SHALL NOT be used by the RP for purposes other than those stipulated in the trust agreement. A subscriber’s attributes SHALL be stored and managed in accordance with Sec. 3.10.3.\n\nThe subscriber SHALL be informed of the transmission of attributes to an RP. In the case where the authorized party is the organization, the organization SHALL make available to the subscriber the list of approved RPs and the associated sets of attributes sent to those RPs. In the case where the authorized party is the subscriber, the subscriber SHALL be prompted prior to release of attributes using a runtime decision at the IdP as described in Sec. 4.6.1.3.\n\nRP Subscriber Accounts\n\nIt is common for an RP to keep a record representing a subscriber local to the RP itself, known as the RP subscriber account. The RP subscriber account can contain things like access rights at the RP as well as a cache of identity attributes for the subscriber. An active RP subscriber account is bound to one or more federated identifiers from the RP’s trusted IdPs. Successful authentication of one of these federated identifiers through a federation protocol allows the subscriber to access the information and functionality protected by the RP subscriber account.\n\nAn RP subscriber account is provisioned when the RP has associated a set of attributes about the subscriber with a data record representing the subscriber account at the RP. The RP subscriber account SHALL be bound to at least one federated identifier, and a given federated identifier is bound to only one RP subscriber account at a given RP. The provisioning can happen prior to authentication or as a result of the federated authentication process, depending on the deployment patterns as discussed in Sec. 4.6.3. 
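As an illustrative sketch (not part of these guidelines), the one-to-one binding between a federated identifier and an RP subscriber account can be modeled as follows. The class and method names are hypothetical, and a federated identifier is represented here as an (IdP issuer, subject identifier) pair:

```python
from dataclasses import dataclass, field

@dataclass
class RPSubscriberAccount:
    """Local record at the RP: access rights plus cached identity attributes."""
    account_id: str
    federated_identifiers: set = field(default_factory=set)
    cached_attributes: dict = field(default_factory=dict)

class AccountStore:
    """Hypothetical RP-side store enforcing that a given federated
    identifier is bound to only one RP subscriber account at this RP."""

    def __init__(self):
        self._by_fed_id = {}  # (issuer, subject) -> RPSubscriberAccount

    def provision(self, account: RPSubscriberAccount, fed_id: tuple):
        # Each federated identifier may be bound to exactly one account.
        if fed_id in self._by_fed_id:
            raise ValueError("federated identifier already bound to an account")
        account.federated_identifiers.add(fed_id)
        self._by_fed_id[fed_id] = account

    def lookup(self, fed_id: tuple):
        # Successful federated authentication of a bound identifier
        # grants access to the associated account, if provisioned.
        return self._by_fed_id.get(fed_id)
```

An account may be bound to several federated identifiers (account linking), but the reverse mapping stays unique, which is the invariant the `provision` check enforces.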
Prior to being provisioned, the RP subscriber account does not exist and has no associated data record at the RP.\n\nAn RP subscriber account is terminated when the RP removes all access to the account at the RP. Termination SHALL include removal of all federated identifiers and bound authenticators from the RP subscriber account (to prevent future federation transactions) as well as removal of attributes and information associated with the account in accordance with Sec. 3.10.3. An RP MAY terminate an RP subscriber account independently from the IdP for a variety of reasons, regardless of the current validity of the subscriber account from which it is derived.\n\nThe RP subscriber account can be provisioned at the RP without an authenticated session, but an authenticated session can only be created on a provisioned account. See Sec. 3.8 for more information.\n\nAccount Linking\n\nA single RP subscriber account MAY be associated with more than one federated identifier. This practice is sometimes known as account linking. If the RP allows a subscriber to manage multiple accounts in this way, the RP SHALL require an authenticated session with the subscriber account for all management and linking functions. This authenticated session SHOULD require one existing federated identifier before linking the new federated identifier to the RP subscriber account. An RP MAY offer a means of recovery of an RP subscriber account with no current means of access.\n\nWhen a federated identifier is removed from an RP subscriber account, the RP SHALL disallow access to the RP subscriber account from the removed federated identifier.\n\nThe RP SHALL document the practices and policies it enacts when an RP subscriber account reaches a state of having zero associated federated identifiers, no means of access, and no means of recovery. In such cases, the RP subscriber account SHOULD be terminated and information associated with the account removed in accordance with Sec. 
3.10.3.\n\nThe RP SHALL provide notice to the subscriber when:\n\n\n A new federated identifier is added to an existing RP subscriber account\n A federated identifier is removed from an RP subscriber account, but the account is not terminated\n\n\nFor additional considerations on providing notice to a subscriber about account management events, see Sec. 4.6 of [SP800-63B].\n\nThe RP MAY associate different access rights to the same account depending on which federated account is used to access the RP. The means by which an RP determines authorization and access is out of scope of these guidelines.\n\nAccount Resolution\n\nIf the RP has access to existing information about a set of subscribers, and this information is not associated with a federated identifier, the RP performs a process known as account resolution to determine which set of subscriber information to associate with a new RP subscriber account.\n\nAn RP performing account resolution SHALL ensure that the attributes requested from the IdP are sufficient to uniquely resolve the subscriber within the RP’s system before linking the federated identifier with the RP subscriber account and granting access. The intended use of each attribute by the RP is detailed in the trust agreement, including whether the attribute is used for account resolution in this manner.\n\nAn RP performing account resolution SHALL perform a risk assessment to ensure that the resolution process does not associate an RP subscriber account’s information with a federated identifier not belonging to the subscriber.\n\nA similar account resolution process is also used when the RP verifies an authenticator used in a holder-of-key assertion for the first time. 
In this case, the RP SHALL ensure that the attributes carried with the authenticator uniquely resolve to the subscriber.\n\nAlternative Authentication Processes\n\nThe RP MAY allow a subscriber to access their RP subscriber account using direct authentication processes by allowing the subscriber to add and remove authenticators in the RP subscriber account. The RP SHALL follow the requirements in [SP800-63B] in managing all alternative authenticators.\n\nSince the RP is using the direct authentication model discussed in [SP800-63], there is no federation transaction and therefore no FAL assigned.\n\n\\clearpage\n\n\nIf the RP allows this kind of access, the RP SHALL disclose in the trust agreement:\n\n\n The process for adding and removing alternative authenticators in the RP subscriber account\n Any restrictions on authenticators the subscriber can use to access the RP\n The AAL required for access to the subscriber account without a federation transaction\n The circumstances under which the RP will require the subscriber to authenticate with their IdP, such as a period of time since last federation transaction\n\n\nFor additional considerations on providing notice to a subscriber about authenticator management events, see Sec. 4.6 of [SP800-63B].\n\nWhile it is possible for a bound authenticator to be used as an alternative authenticator for direct access to the RP, these uses are distinct from each other, and an RP needs to determine whether a given authenticator can be used in one or both scenarios.\n\nAuthenticated Sessions at the RP\n\nThe end goal of a federation transaction is creating an authenticated session between the subscriber and the RP, backed by a verified assertion from the IdP. 
This authenticated session can be used to allow the subscriber access to functions at the RP (i.e., logging in), to identify the subscriber to the RP, or to process attributes about the subscriber carried in the federation transaction.\nAn authenticated session SHALL be created by the RP only when the following conditions are true:\n\n\n The RP has processed and verified a valid assertion\n The assertion is from the expected IdP for a transaction\n The IdP that issued the assertion is the IdP identified in the federated identifier of the assertion\n The assertion is associated with an RP subscriber account (which may be ephemeral)\n The RP subscriber account has been provisioned at the RP through the method specified in the trust agreement\n\n\nIf the assertion is a holder-of-key assertion at FAL3, the authenticator indicated in the assertion SHALL be verified before the RP subscriber account is associated with an authenticated session, as discussed in Sec. 3.14.\nIf the assertion also requires authentication with a bound authenticator at FAL3, a bound authenticator SHALL be verified before the RP subscriber account is associated with an authenticated session, as discussed in Sec. 3.15.\n\nThe authenticated session MAY be ended by the RP at any time.\n\nSee [SP800-63B] Sec. 5 for more information about session management requirements for both IdPs and RPs. For additional session requirements with general purpose IdPs, see Sec. 4.7.\n\nPrivacy Requirements\n\nThe ultimate goal of a subscriber is to interact with and use the RP. Federation involves the transfer of personal attributes from a third party that is not otherwise involved in a transaction — the IdP. Federation also potentially gives the IdP broad visibility into subscriber activities and status. 
Accordingly, there are specific privacy requirements associated with federation which do not exist in direct authentication.\n\nWhen the RP requests a federation transaction from the IdP, this request and the subsequent processing of the federation transaction reveals to the IdP where the subscriber is logging in. Over time, the IdP could build a profile of subscriber transactions based on this knowledge of which RPs a given subscriber is using. This aggregation could enable new opportunities for subscriber tracking and use of profile information that do not align with subscribers’ privacy interests.\n\nIf the same subscriber account is asserted to multiple RPs, and those RPs communicate with each other, the colluding RPs could track a subscriber’s activity across multiple applications and security domains. The IdP SHOULD employ technical measures, such as the use of pairwise pseudonymous identifiers described in Sec. 3.3.1 or privacy-enhancing cryptographic protocols, to provide disassociability and discourage subscriber activity tracking and profiling between RPs.\n\nThe following requirements apply specifically to federal agencies acting as an IdP, an RP, or both:\n\n\n \n The agency SHALL consult with their Senior Agency Official for Privacy (SAOP) to conduct an analysis determining whether the requirements of the Privacy Act are triggered by the agency that is acting as an IdP, by the agency that is acting as an RP, or both (see Sec. 
7.4).\n \n \n The agency SHALL publish or identify coverage by a System of Records Notice (SORN) as applicable.\n \n \n The agency SHALL consult with their SAOP to conduct an analysis determining whether the requirements of the E-Government Act are triggered by the agency that is acting as an IdP, the agency that is acting as an RP, or both.\n \n \n The agency SHALL publish or identify coverage by a Privacy Impact Assessment (PIA) as applicable.\n \n \n The agency SHALL conduct a privacy risk assessment regarding the sharing of subscriber identity information between the IdP and RP.\n \n\n\nIf the RP subscriber account lifecycle process gives the RP access to attributes through a provisioning API as discussed in Sec. 4.6.3, additional privacy measures SHALL be implemented to account for the difference in RP subscriber account lifecycle. The IdP SHALL minimize the attributes made available to the RP through the provisioning API. The IdP SHALL limit the population of subscriber accounts available via the provisioning API to the population of subscribers authorized to use the RP by the trust agreement. To prevent RP retention of identity attributes for accounts that have been terminated at the IdP, the IdP SHALL use the provisioning API to de-provision RP subscriber accounts for terminated subscriber accounts.\n\nTrust agreements SHOULD require identity attributes be shared only when the subscriber opts in, using a runtime decision as discussed in Sec. 
4.6.1.3.\n\nTransmitting Subscriber Information\n\nThe IdP SHALL limit transmission of subscriber information to only that which is necessary for functioning of the system.\nThese functions include the following:\n\n\n identity proofing, authentication, or attribute assertions (collectively “identity service”); or\n in the case of a specific subscriber request to transmit the information\n\n\nThe IdP MAY additionally transmit the subscriber’s information in the following cases, if stipulated and disclosed by the trust agreement:\n\n\n fraud mitigation related to the identity service,\n to respond to a security incident related to the identity service, or\n to comply with law or legal process.\n\n\nIf an IdP discloses information on subscriber activities at an RP to any party, or processes the subscriber’s attributes for any purpose other than these cases, the IdP SHALL implement measures to maintain predictability and manageability commensurate with the privacy risk arising from the additional processing. Measures MAY include providing clear notice, obtaining subscriber consent, or enabling selective use or disclosure of attributes. 
When an IdP uses consent measures for this purpose, the IdP SHALL NOT make consent for the additional processing a condition of the identity service.\n\nAn RP MAY disclose information on subscriber activities to the associated IdP in the following cases, if stipulated and disclosed by the trust agreement:\n\n\n fraud mitigation related to the identity service,\n to respond to a security incident related to the identity service, or\n to comply with law or legal process.\n\n\nSee [NISTIR8062] for additional information on privacy engineering and risk management.\n\nSecurity Controls\n\nThe IdP and RP SHALL employ appropriately tailored security controls from the moderate baseline security controls defined in [SP800-53] or an equivalent federal (e.g., [FEDRAMP]) or industry standard that the organization has determined for the information systems, applications, and online services that these guidelines are used to protect. The IdP and RP SHALL ensure that the minimum assurance-related controls for the appropriate systems, or equivalent, are satisfied.\n\nProtection from Injection Attacks\n\nAn injection attack in the context of a federated protocol consists of an attacker attempting to force an RP to accept or process an assertion or assertion reference in order to gain access to the RP or deny a legitimate subscriber access to the RP. The attacker does this by taking an assertion or assertion reference and injecting it into a vulnerable RP. If the attacker is able to do this successfully, the attacker can trick an RP into binding the attacker’s session to the federated identifier in the assertion. The attacker’s assertion could be either stolen from a legitimate subscriber or manufactured to perpetrate the attack.\n\nProtection from injection attacks is recommended at all FALs, and this protection is required at FAL2 and above. 
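As a minimal sketch of one such protection, an RP can bind each federation request it initiates to the subscriber's unauthenticated session using an unguessable value (analogous to the OAuth state parameter) and refuse any response that does not carry the value issued for that session. The class and method names here are illustrative, not drawn from any particular protocol library:

```python
import secrets

class FederationRequestTracker:
    """Hypothetical RP-side tracker: an assertion (or assertion reference)
    is accepted only for the session that actually started the request,
    which rejects unsolicited and cross-session injections."""

    def __init__(self):
        self._pending = {}  # unguessable value -> RP session id

    def start_transaction(self, session_id: str) -> str:
        # Unguessable value sent with the federation request and bound
        # to the subscriber's unauthenticated session at the RP.
        state = secrets.token_urlsafe(32)
        self._pending[state] = session_id
        return state

    def accept_response(self, session_id: str, state: str) -> bool:
        # One-shot check: unknown, replayed, or cross-session values fail.
        expected = self._pending.pop(state, None)
        return expected is not None and expected == session_id
```

Because the value is consumed on first use, a captured response cannot be replayed into a second session, and an IdP-initiated (unsolicited) response has no matching entry at all.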
In all cases, the RP needs to take reasonable steps to prevent an attacker from presenting an injected assertion or assertion reference based on the nature of the RP software, the capabilities of the federation protocol in use, and the needs of the overall system. Both [OIDC] and [SAML] provide mechanisms for injection protection including nonces sent from the RP during the request, RP authentication for back-channel communications, and methods for the RP to start the federation transaction and track its state throughout the process. Different mechanisms provide different degrees of protection and are applicable in different circumstances.\nWhile the details of specific protections will vary based on the federation protocol and technology in use, common best practices such as the following can be used to limit the attack surface:\n\n\n The use of back channel assertion presentation as discussed in Sec. 4.11.1, which prevents an attacker from presenting the assertion directly to the RP.\n The use of an unguessable value to tie the unauthenticated session at the RP with the request to the back channel, which prevents an attacker from injecting an assertion reference from one session to another.\n Requiring the RP to authenticate to the IdP during an assertion request, preventing the attacker from faking a request from the RP to begin a federation process.\n Prohibition of IdP-initiated federation processes, which prevents the RP from accepting unsolicited assertions and assertion references from the IdP. 
This prohibition does not include processes in which an external party (such as the IdP or a federation authority) signals the RP to start a federation process with the IdP, allowing the RP to begin the federation transaction and securely await a response within that transaction.\n The use of a signed front channel response from the IdP with an RP-provided nonce covered by the signature, which prevents the attacker from injecting an assertion reference from one session to another.\n The use of platform APIs for front-channel communication, as opposed to HTTP redirects.\n\n\nInjection attacks are particularly dangerous when combined with phishing attacks. When combined, the attacker can either trick the subscriber into generating a valid assertion for the attacker to inject into the attacker’s session, or the attacker can trick the subscriber into injecting the attacker’s assertion into the subscriber’s session at the RP.\n\nProtecting Subscriber Information\n\nCommunications between the IdP and the RP SHALL be protected in transit using an authenticated protected channel. Communications between the subscriber and either the IdP or the RP (usually through a user agent) SHALL be made using an authenticated protected channel.\n\nNote that the IdP may have access to information that may be useful to the RP in enforcing security policies, such as device identity, location, system health checks, and configuration management. If so, it may be a good idea to pass this information along to the RP within the bounds of the subscriber’s privacy preferences described in Sec. 7.2.\n\nAdditional attributes about the user MAY be included outside of the assertion itself by use of authorized access to an identity API as discussed in Sec. 3.11.3. 
Splitting user information in this manner can aid in protecting user privacy and can allow for limited disclosure of identifying attributes on top of the essential information in the authentication assertion itself.\n\nWhen derived attribute values are available and fulfill the RP’s needs, the RP SHOULD request derived attribute values rather than full attribute values as described in Sec. 7.3. The IdP SHOULD support derived attribute values to the extent the underlying federation protocol allows.\n\nStoring Subscriber Information\n\nThe IdP and RP SHALL delete personal identity information in the subscriber account and RP subscriber account (respectively) upon account termination, unless required otherwise by legal action or policy.\nWhenever personal identity information is stored in a subscriber account or RP subscriber account, whether the account is active or not, the IdP and RP SHALL determine and use appropriate controls to ensure secure storage of the personal identity information.\n\nFor example, the RP could record the federated identifier in access and audit logs, which logs are retained even after the account has been terminated. However, all identity attributes and personal information are removed from the RP’s own storage.\n\nWhen the RP uses an ephemeral provisioning mechanism as described in Sec. 4.6.3, the RP SHALL remove all subscriber attributes at the termination of the session, unless required by legal action or policy.\n\nIdentity Attributes\n\nIdentity attributes representing the subscriber are sent to the RP during a federation transaction. These attributes take on multiple aspects, which can be combined in different ways.\n\n\n Bundling:\n Attributes SHALL be either unbundled (presented directly by the IdP) or bundled into a package that is cryptographically signed by the CSP, as described in Sec. 
3.11.1.\n Derivation:\n Attributes SHALL be either attribute values (e.g., a date of birth) or derived attribute values (e.g., an indication of age of majority).\n Presentation:\n Attributes SHALL be either presented in the assertion (and therefore covered by the assertion’s signature) or made available as part of a protected identity API.\n\n\nTrust agreements SHALL record the validation practices for all attributes made available under the trust agreement (e.g., whether the attribute is from an authoritative or credible source, self-asserted by the subscriber, assigned by the IdP, etc.).\n\nAttribute Bundles\n\n\n Note: Attribute bundles are often referred to as credentials by other protocols and specifications, but usage of this term would conflict with its use within these guidelines for a different concept. Consequently, the term attribute bundle is used within these guidelines instead.\n\n\nAs an alternative to sending attributes directly from the IdP, attributes can be collected into bundles that are signed by the CSP. These attribute bundles can be independently verified by the RP. This pattern is commonly used by a subscriber-controlled wallet. Some examples of technologies used to bundle attributes are Selective Disclosure JSON Web Tokens [SD-JWT] and the mDoc security object defined in [ISOIEC18013-5].\n\nThe presentation of an attribute bundle SHALL be protected by the IdP in the same manner as non-bundled attributes. That is to say, attribute bundles presented in an assertion are covered by the signature of the assertion, and attribute bundles made available by an identity API are protected by the limited access controls to that API.\n\nAttribute bundles include one or more attribute values and derived attribute values. When attribute bundles are carried in the assertion from the IdP, the subscriber attributes within the bundle need not be fully disclosed to all RPs on every transaction and instead MAY be selectively disclosed to the RP. 
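Selective disclosure can be illustrated with salted hash commitments, loosely in the style of [SD-JWT]: the CSP signs only digests of the attributes, and the holder reveals an attribute (with its salt) only to RPs that need it. This is a simplified sketch; a real implementation would use an asymmetric signature over a standardized token format rather than the HMAC stand-in and ad hoc structure shown here:

```python
import hashlib
import hmac
import json
import secrets

CSP_KEY = b"demo-signing-key"  # stand-in for the CSP's real signing key

def digest(salt: str, name: str, value: str) -> str:
    # Salted commitment to one attribute; the salt prevents guessing
    # attribute values by brute-forcing the digest.
    return hashlib.sha256(json.dumps([salt, name, value]).encode()).hexdigest()

def issue_bundle(attributes: dict):
    """CSP side: sign only salted digests, so the bundle commits to every
    attribute without revealing any of them in the signed object."""
    disclosures = {n: (secrets.token_hex(8), v) for n, v in attributes.items()}
    digests = sorted(digest(s, n, v) for n, (s, v) in disclosures.items())
    signature = hmac.new(CSP_KEY, json.dumps(digests).encode(), "sha256").hexdigest()
    return {"digests": digests, "signature": signature}, disclosures

def verify_disclosure(bundle: dict, name: str, salt: str, value: str) -> bool:
    """RP side: check the CSP signature over the digest list, then check
    that the disclosed attribute matches one of the signed digests."""
    expected = hmac.new(CSP_KEY, json.dumps(bundle["digests"]).encode(), "sha256").hexdigest()
    if not hmac.compare_digest(expected, bundle["signature"]):
        return False
    return digest(salt, name, value) in bundle["digests"]
```

The RP verifies the signature over the whole digest list, confirming the CSP as the source, while only the disclosed attributes are ever readable.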
An attribute bundle using selective disclosure technology can effectively limit which attributes an RP can read from the attribute bundle. The RP can still verify the signature of the attribute bundle as a whole, confirming its source as the CSP, without the IdP having to disclose all of the contents of the attribute bundle to the RP.\n\nThe RP SHALL validate the signature covering the attribute bundle itself as well as the signature of the assertion as a whole. The RP SHALL ensure that the attribute bundle is able to be presented by the IdP that created the assertion containing the attribute bundle, such as by verifying that the public key used to sign the assertion is included in the signature of the attribute bundle.\n\nDerived Attribute Values\n\nFor some use cases, knowing the actual value of an identity attribute is not strictly necessary for the RP to function; a value derived from the identity attribute is sufficient. For example, if the RP needs to know if the subscriber is above the age of majority, the RP could request the subscriber’s birth date and calculate the answer to the majority age question from this value. However, doing so reveals more specific information to the RP than it truly needs. Instead, the IdP can calculate whether the subscriber’s age meets the definition of majority at the time of the RP’s request and return a simple boolean for this derivation instead of the birth date value itself. The RP can then continue its processing without needing to see the underlying value.\n\nDerived attribute values increase the privacy of a system since they allow a more focused release of information to the RP. 
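The age-of-majority example above can be sketched as a derived attribute computation performed at the IdP; the function name and the default majority age are illustrative:

```python
from datetime import date

def is_of_majority(birth_date: date, on: date, majority_age: int = 18) -> bool:
    """Derived attribute value computed at the IdP: a boolean answer to
    the majority-age question, so the RP never sees the birth date."""
    # Has the subscriber's birthday occurred yet in the year of `on`?
    had_birthday = (on.month, on.day) >= (birth_date.month, birth_date.day)
    age = on.year - birth_date.year - (0 if had_birthday else 1)
    return age >= majority_age
```

The IdP evaluates this at the time of the RP's request and releases only the boolean, which is the more focused disclosure the text describes.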
While some federation systems allow the RP to dynamically query for an arbitrary derived attribute value at request time, many common use cases can be accommodated by the IdP pre-calculating common derived attribute values and offering them as alternatives to the full attribute value.\n\nIdentity APIs\n\nAttributes about the subscriber, including profile information, MAY be provided to the RP through a protected API known as the identity API. The RP is granted limited access to the identity API during the federation transaction, in concert with the assertion. For example, in OpenID Connect, the UserInfo Endpoint provides a standardized identity API for fetching attributes about the subscriber. This API is protected by an OAuth 2.0 Access Token, which is issued to the RP along with OpenID Connect’s assertion, the ID Token.\n\nBy making attributes available at an identity API, the IdP no longer has to use the assertion to convey as much information to the RP. This not only means that sensitive attributes do not have to be carried in the assertion itself, it also makes the assertion smaller and easier to process by the RP. The contents of the assertion can then be limited to essential fields (e.g., unique subject identifiers) and information about the immediate authentication event being asserted.\n\nIdentity APIs also make it possible for the RP to help manage when subscriber attributes are transmitted from the IdP. The RP often caches attributes provided by the IdP in an RP subscriber account, discussed in Sec. 3.10.1, and the RP can record when these attributes were last received from the IdP. The RP can request subscriber attributes only when needed to update the RP subscriber account, instead of receiving them on every federation transaction in the assertion. The IdP can aid this decision by indicating in the assertion the time at which any of the subscriber attributes available to the RP were updated at the IdP. 
This approach is particularly helpful when a subscriber’s attributes are stable over time, allowing the RP to function without fetching them on every request.\n\nAll possible use of identity APIs, including which provisioning models are available through the API, SHALL be recorded and disclosed as part of the trust agreement. Access to the identity API SHALL be time limited by the trust agreement. Access to the identity API SHOULD be limited to the duration of the federation transaction plus time necessary for synchronization of attributes, as discussed in Sec. 4.6.4. Since the time limitation is separate from the validity time window of the assertion and the lifetime of the authenticated session at the RP, access to an identity API by the RP without an associated valid assertion SHALL NOT be sufficient for the establishment of an authenticated session at the RP.\n\nA given identity API deployment is expected to be capable of providing attributes for all subscribers for whom the IdP can create assertions. However, when access to the identity API is granted within the context of a federation transaction, the attributes provided by an identity API SHALL be associated with only the single subscriber identified in the associated assertion. If the identity API is hosted by the IdP, the returned attributes SHALL include the subject identifier for the subscriber. This allows the RP to positively correlate the assertion’s subject to the returned attributes. Note that when access to an identity API is provided as part of pre-provisioning of RP subscriber accounts as discussed in Sec. 4.6.3, the RP is usually granted blanket access to the identity API outside the context of the federation transaction and these requirements do not apply. For pre-provisioning use cases, the privacy considerations SHALL be evaluated and recorded as part of the trust agreement. If the identity API is hosted externally, the requirements in Sec. 
3.11.3.1 apply.\n\nExternal Identity APIs\n\nWhile most identity APIs used in federation protocols are hosted as part of the IdP, it is also possible for the IdP to grant access to identity APIs hosted directly by attribute providers. These services provide attributes about the subscriber in addition to those made available directly from the IdP.\n\nWhen the IdP grants access to an external attribute provider, the IdP is making an explicit statement that the information returned from the attribute provider is associated with the subscriber identified in the associated assertion. For the purposes of the trust agreement, the IdP is the responsible party for the accuracy and content of the identity API and its association with the represented subscriber account.\n\nThe attributes returned by the attribute provider are assumed to be independent of those returned directly from the IdP, and as such MAY use different identifiers, formats, or schemas.\n\nFor example, an IdP could provide access to a subscriber’s medical license information as part of the federation process. Instead of the IdP asserting the license status directly, the IdP provides the RP access to a record for the subscriber at a medical licensure agency by providing a link to an API containing the record representing the subscriber as well as a credential allowing limited access to this API. The RP can then make a strong association between the current subscriber and the license record, even though the license record will likely use a different subject identifier and would otherwise be not correlatable by the RP. The trust agreement would list the medical licensure agency as an additional attribute provider to the IdP. 
The IdP remains responsible for providing this linked data.\n\nBefore accepting attributes from an external attribute provider and associating them with the RP subscriber account, the RP SHALL verify that the attributes in question are allowed to be provided by the external attribute provider under the auspices of the trust agreement.\n\nAssertion Protection\n\nAssertions SHALL include a set of protections to prevent attackers from manufacturing valid assertions or reusing captured assertions at disparate RPs. The protections required are dependent on the details of the use case being considered, and specific protections are listed here.\n\nAssertion Identifier\n\nAssertions SHALL be sufficiently unique to permit unique identification by the target RP. Assertions MAY accomplish this by use of an embedded nonce, issuance timestamp, assertion identifier, or a combination of these or other techniques.\n\nSigned Assertion\n\nAssertions SHALL be cryptographically signed by the issuer (IdP). The RP SHALL validate the digital signature or MAC of each such assertion based on the issuer’s key. This signature SHALL cover the entire assertion, including its identifier, issuer, audience, subject, and expiration.\n\nThe assertion signature SHALL either be a digital signature using asymmetric keys or a MAC using a symmetric key shared between the RP and issuer. Shared symmetric keys used for this purpose by the IdP SHALL be independent for each RP to which they send assertions, and are normally established during registration of the RP. Public keys for verifying digital signatures SHALL be transferred to the RP in a secure manner, and MAY be fetched by the RP in a secure fashion at runtime, such as through an HTTPS URL hosted by the IdP. Approved cryptography SHALL be used.\n\nEncrypted Assertion\n\nThe contents of the assertion can be encrypted to protect their exposure to untrusted third parties, such as a user agent. 
This protection is especially relevant when the assertion contains PII of the subscriber—excluding opaque identifiers such as the subject identifier. Subject identifiers are meaningless outside of their target systems, unlike other possible identifiers such as SSN, email address, or driver’s license number. Therefore, subject identifiers are excluded as a qualifier for encrypting the assertion.\nA trust agreement MAY require encryption of assertion contents in other situations.\n\nWhen the entire assertion is encrypted, the encryption protects the contents of the assertion from being read by unintended parties, ensuring that only the targeted RP is able to process the assertion.\nWhile most assertion formats support encryption of the entire assertion, some assertion formats allow for only the PII portions of the assertion to be encrypted, providing selective disclosure of sensitive information to the RP without encrypting the entire assertion.\n\nWhen encrypting assertions, the IdP SHALL encrypt the contents of the assertion using either the RP’s public key or a shared symmetric key. Shared symmetric keys used for this purpose by the IdP SHALL be independent for each RP to which it sends assertions, and are normally established during registration of the RP. 
Public keys for encryption SHALL be transferred over an authenticated protected channel and MAY be fetched by the IdP at runtime, such as through an HTTPS URL hosted by the RP.\n\nAll encryption of assertions SHALL use approved cryptography applied to the federation technology in use.\nFor example, a SAML assertion can be encrypted using XML-Encryption, or an OpenID Connect ID Token can be encrypted using JSON Web Encryption (JWE).\nWhen used with back-channel presentation, an assertion can also be encrypted with a mutually-authenticated TLS connection, so long as there are no intermediaries between the IdP and RP that interrupt the TLS channel.\n\nAudience Restriction\n\nAssertions SHALL use audience restriction techniques to allow an RP to recognize whether or not it is the intended target of an issued assertion. All RPs SHALL check that the audience of an assertion contains an identifier for their RP to prevent the injection and replay of an assertion generated for one RP at another RP.\n\nIn order to limit the places that an assertion could successfully be replayed by an attacker, IdPs SHOULD issue assertions designated for only a single audience. Restriction to a single audience is required at FAL2 and above.\n\nBearer Assertions\n\nA bearer assertion can be presented on its own as proof of the identity of the party presenting it. No other proof beyond validation of the assertion is required. Similarly, a bearer assertion reference can be presented on its own to the RP and used by the RP to fetch an assertion. If an attacker can capture or manufacture a valid assertion or assertion reference representing a subscriber and can successfully present that assertion or reference to the RP, then the attacker would be able to impersonate the subscriber at that RP.\n\nNote that mere possession of a bearer assertion or reference is not always enough to impersonate a subscriber. 
For example, if an assertion is presented in the back-channel federation model (described in Sec. 4.11.1), additional controls can be placed on the transaction (such as identification of the RP and assertion injection protections) that help further protect the RP from fraudulent activity.\n\n\\clearpage\n\n\nHolder-of-Key Assertions\n\nA holder-of-key assertion as in Fig. 3 SHALL include a unique identifier for an authenticator that can be verified independently by the RP, such as the public key of a certificate controlled by the subscriber. The RP SHALL verify that the subscriber possesses the authenticator identified by the assertion.\n\nFig. 3. Holder-of-Key Assertions\n\n\n\nThe authenticator identified in a holder-of-key assertion MAY be distinct from the primary authenticator the subscriber uses to authenticate to the IdP. The authenticator identified in a holder-of-key assertion SHALL be phishing resistant. When the RP encounters an authenticator in a holder-of-key assertion for the first time, the RP SHALL ensure that the authenticator can be uniquely resolved to the RP subscriber account, as discussed in Sec. 3.7.2.\n\nA holder-of-key assertion SHALL NOT include an unencrypted private or symmetric key to be used as an authenticator.\n\nWhen the RP uses an ephemeral provisioning mechanism as described in Sec. 4.6.3, the IdP SHOULD use a unique pairwise identifier for each authorization request to the RP to prevent the RP from storing or correlating information.\n\nA more complete example is found in Sec. 10.6, which shows the use of a mutual TLS connection to provide the proof of possession of a certificate on a smart card that is listed by the assertion.\n\nSince the authenticators used in holder-of-key assertions are presented to multiple parties, and these authenticators often contain identity attributes, there are additional privacy considerations to address as discussed in Sec. 7.\n\nBound Authenticators\n\nA bound authenticator as shown in Fig. 
4 is an authenticator bound to the RP subscriber account and managed by the RP. The IdP SHALL include an indicator in the assertion when the assertion is to be used with a bound authenticator at FAL3. The unique identifier for the authenticator (such as its public key) SHALL be stored in the RP subscriber account. The RP needs to have a reliable basis for evaluating the characteristics of the bound authenticator; one such basis is the inclusion of a signed attestation, as discussed in Sec. 3.2.4 of [SP800-63B].\n\nFig. 4. Bound Authenticators\n\n\n\nA bound authenticator SHALL be unique per subscriber at the RP such that two subscribers cannot present the same authenticator for their separate RP subscriber accounts. All bound authenticators SHALL be phishing resistant. Consequently, subscriber-chosen values such as a password cannot be used as bound authenticators.\nThe RP SHALL accept authentication from a bound authenticator only in the context of processing an FAL3 assertion for a federation transaction. While it’s possible for the same authenticator to also be used for direct authentication to the RP, such use is not considered a bound authenticator and the RP SHALL document these as distinct use cases.\n\nBefore an RP can successfully accept an FAL3 assertion, the RP subscriber account SHALL include a reference to a bound authenticator that is to be verified during the FAL3 transaction. These authenticators can be provided by either the RP or the subscriber, with slightly different requirements applying to the initial binding of the authenticator to the RP subscriber account in each case.\n\nThe RP SHALL send a notification to the subscriber via a mechanism that is independent of the transaction binding the new authenticator (e.g., an email to an address previously associated with the subscriber), and SHOULD notify the IdP using a shared signaling system (see Sec. 
4.8), if any of the following events occur:\n\n\n A new bound authenticator is added to the RP subscriber account.\n An existing bound authenticator is removed from the RP subscriber account.\n\n\nFor additional considerations on providing notice to a subscriber about authenticator management events, see Sec. 4.6 of [SP800-63B].\n\nRP-Provided Bound Authenticator Issuance\n\nFor RP-provided authenticators, the administrator of the RP SHALL issue the authenticator to the subscriber directly for use with an FAL3 federation transaction. The administrator of the RP SHALL store a unique identifier for the bound authenticator in the RP subscriber account, such as the public key of the authenticator.\n\nThe administrator of the RP SHALL determine through independent means that the identified subject of the RP subscriber account is the party to which the authenticator is issued.\n\nFor example, consider an RP that has a collection of cryptographic authenticators that it has purchased for use with FAL3 authentication. These authenticators are each provisioned to a specific RP subscriber account, but are held in a controlled environment by the administrator of the RP. To issue the authenticator, the RP could use an in-person process in which the administrator of the RP has the subscriber authenticate to an RP-controlled kiosk using an FAL3 federation transaction from the IdP. The administrator then hands the subscriber the bound authenticator indicated by the RP subscriber account and has them authenticate to the kiosk using that. The subscriber is now in possession of a bound authenticator supplied by the RP, which can be used to reach FAL3 for future transactions. Alternatively, the administrator of the RP could send the authenticator to a verified address for the subscriber and have the subscriber verify receipt through an activation process. 
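The record-keeping and uniqueness requirements for bound authenticators can be sketched as a simple RP-side registry. The class and field names below are illustrative assumptions, not part of these guidelines:

```python
# Hypothetical RP-side store: each RP subscriber account (keyed by
# federated identifier) may reference one or more bound authenticators,
# identified by a unique value such as the authenticator's public key.
class RpAccountStore:
    def __init__(self) -> None:
        self.accounts = {}   # federated_id -> set of authenticator ids
        self.owner_of = {}   # authenticator id -> federated_id

    def bind(self, federated_id: str, authenticator_id: str) -> None:
        # A bound authenticator must be unique per subscriber: two
        # subscribers cannot present the same authenticator for their
        # separate RP subscriber accounts.
        owner = self.owner_of.get(authenticator_id)
        if owner is not None and owner != federated_id:
            raise ValueError("authenticator already bound to another account")
        self.accounts.setdefault(federated_id, set()).add(authenticator_id)
        self.owner_of[authenticator_id] = federated_id
        # Out of band: notify the subscriber via an independent channel,
        # and signal the IdP if shared signaling is in place.

    def can_accept_fal3(self, federated_id: str) -> bool:
        # Before accepting an FAL3 assertion, the RP subscriber account
        # must reference a bound authenticator to verify.
        return bool(self.accounts.get(federated_id))
```

The notification and shared-signaling steps are shown only as comments here; in practice they would be side effects of the binding event, delivered outside the binding transaction itself.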
Since the use of the bound authenticator still requires a valid assertion from the IdP, interception of the authenticator alone is not sufficient for accessing the RP subscriber account.\n\nSubscriber-Provided Bound Authenticator Binding Ceremony\n\nThe RP MAY provide a process for associating subscriber-provided authenticators to the RP subscriber account on a trust-on-first-use basis. This process is known as a binding ceremony and has additional requirements beyond a typical FAL3 federation process. This is similar to the subscriber-provided authenticator binding process discussed in Sec. 4.1.3 of [SP800-63B].\n\nIf no bound authenticators are associated with the RP subscriber account, the RP SHALL perform a binding ceremony to establish the connection between the authenticator, the subscriber, and the RP subscriber account as shown in Fig. 5. The RP SHALL first establish an authenticated session using federation with an assertion that meets all the other requirements of FAL3, including an indication that the assertion is intended for use at FAL3 with a bound authenticator. The subscriber SHALL immediately be prompted to present and authenticate with the proposed authenticator. Upon successful presentation of the authenticator, the RP SHALL store a unique identifier for the authenticator (such as its public key) and associate this with the RP subscriber account associated with the federated identifier. If the subscriber fails to successfully authenticate to the RP using an appropriate authenticator, the binding ceremony fails. The binding ceremony session SHALL have a timeout of five minutes or less and SHALL NOT be used as an authenticated session for any other purpose as described in Sec. 3.8. Upon successful completion of the binding ceremony, the RP SHALL immediately request a new assertion from the IdP at FAL3, including prompting the subscriber for the newly-bound authenticator.\n\nFig. 5. 
Subscriber-Provided Bound Authenticator Binding Ceremony\n\n\n\nAn RP MAY allow a subscriber to bind multiple subscriber-provided authenticators at FAL3. If this is the case, and the RP subscriber account has one or more existing bound authenticators, the binding ceremony makes use of the existing ability to reach FAL3. The subscriber SHALL first be prompted to authenticate to the RP with an existing bound authenticator to reach FAL3. Upon successful authentication, the RP SHALL immediately prompt the subscriber to authenticate to the RP using the newly-bound authenticator.\n\nIn addition to an RP determining a bound authenticator is no longer viable, a subscriber could choose to stop using a bound authenticator for a variety of reasons, such as the authenticator being lost, compromised, or no longer usable due to technology and platform changes.\nIn such cases, an RP MAY allow a subscriber to remove a subscriber-provided bound authenticator from their RP subscriber account, thereby removing the ability to use that authenticator for FAL3 sessions. When a bound authenticator is removed, the RP SHALL terminate all current FAL3 sessions for the subscriber and SHALL require reauthentication at FAL3 of the subscriber from the IdP. The RP SHALL NOT prompt the subscriber to authenticate with the authenticator being removed, since the subscriber will often not have access to the authenticator in question during the unbinding process, particularly in cases where the authenticator is lost or compromised.\n\nThis option is particularly helpful in situations where the subscriber already has access to an appropriate authenticator that the RP wants to allow them to use for FAL3 transactions.\nFor example, a subscriber could have a single-factor cryptographic authenticator which uses name-based phishing resistance as described in Sec. 3.2.5.2 of [SP800-63B]. 
With such a device, the IdP and RP would see different keys when the authenticator is used in each location, meaning the bound authenticator cannot be easily verified by the IdP. Furthermore, since the RP did not issue the authenticator, the RP does not know the authenticator’s key ahead of time, nor does it know which subscriber account to associate to the key. Instead, the RP can use a binding ceremony as described here to allow the subscriber to use this device as a bound authenticator at FAL3.\nA more complete example is found in Sec. 10.7.\n\nRP Requirements for Processing Holder-of-Key Assertions and Bound Authenticators\n\nWhen the RP receives an assertion associated with a bound authenticator, the subscriber proves possession of the bound authenticator directly to the RP. The primary authentication at the IdP and the federated authentication at the RP are processed separately. While the subscriber could use the same authenticator during the primary authentication at the IdP and as the bound authenticator at the RP, there is no assumption that these will be the same.\n\nThe following requirements apply to all assertions associated with a bound authenticator:\n\n\n The subscriber SHALL prove possession of the bound authenticator to the RP, in addition to presentation of the assertion itself.\n For a holder-of-key assertion, a reference to a given authenticator found within an assertion SHALL be trusted at the same level as all other information within the assertion, as stipulated in the trust agreement.\n The RP SHALL process and validate the assertion in addition to the bound authenticator.\n Failure to authenticate with the bound authenticator SHALL result in an error at the RP.\n\n\n"
} ,
{
"title" : "General-Purpose IdP",
"category" : "",
"tags" : "",
"url" : "/800-63-4/sp800-63c/GenIdP/",
"date" : "2024-08-28 20:39:12 -0500",
"content" : "General-Purpose IdPs\n\nThis section is normative.\n\nWhen the IdP is hosted on a service and not on the subscriber’s device, or when the IdP represents multiple subscribers, the IdP is known as a general-purpose IdP and the following requirements apply.\n\nDigital wallets that are deployed to networked systems and not to subscriber devices are considered general-purpose IdPs for the purposes of these guidelines.\n\nIdP Account Provisioning\n\nIn order to make subscriber accounts available through an IdP, the subscriber accounts need to be provisioned at the IdP. The means by which the subscriber account is provisioned to the IdP SHALL be disclosed in the trust agreement.\n\nDue to the requirement for the IdP to be able to authenticate the subscriber, the IdP is often a service of the CSP, where the IdP has some level of access to the attributes and authenticators in the subscriber account. Such IdPs are generally in the same security domain as the IdAM that houses the subscriber account. In other cases, one or more authenticators in the subscriber account can be verified outside of the security domain, such as authenticators tied to a common PKI.\n\nThe IdP augments the subscriber account with federation-specific attributes, such as a subject identifier. The IdP can collect additional attributes, subject to the privacy and storage requirements enumerated by the trust agreement.\n\nOnce the subscriber account is provisioned to the IdP, the CSP is no longer an active participant in the federation process.\nConsequently, even if the RP fetches attributes through an identity API hosted by the CSP, the identity API is considered a function of the IdP and not the CSP for the purposes of these guidelines.\n\n\\clearpage\n\n\nFederation Transaction\n\nA federation transaction involving a general-purpose IdP establishes the subscriber account at the IdP and culminates in an authenticated session for the subscriber at the RP. This process is shown in Fig. 
6.\n\nFig. 6. Federation Overview\n\n\n\nA federation transaction is a multi-stage process:\n\n\n \n Before federation can occur, the subscriber account is established by the CSP. This account binds the identity attributes collected by the CSP to a set of authenticators used by the subscriber.\n \n \n The subscriber account is provisioned at the IdP. The IdP augments the subscriber account with federation-specific attributes, such as a subject identifier.\n \n \n The IdP and RP perform discovery and registration to establish the cryptographic keys and identifiers needed for information to be securely exchanged between the parties in the federation protocol. While there may have been an existing policy decision representing a permission to connect (through an apriori trust agreement), this step entails a connection and integration at the technical level. This stage can occur before any subscriber tries to access the RP or as a response to a subscriber’s attempt to use an IdP at an RP.\n \n \n The IdP and RP begin a federated authentication transaction to authenticate a subscriber to the RP. As part of this, the set of attributes that is to be passed to the RP is selected from a subset of what the RP has requested, what is allowed by the trust agreement, and what is permitted by the authorized party. If necessary, the authorized party is prompted at runtime to approve the release of attributes.\n \n \n The subscriber authenticates to the IdP using an authenticator bound to the subscriber account.\n \n \n The IdP creates an assertion to represent the results of the authentication event. 
The assertion is based on terms established by the trust agreement, the request from the RP, the capabilities of the IdP, the subscriber account known to the IdP, and the attributes permitted by the authorized party.\n \n \n The assertion is passed to the RP across the network.\n \n \n The RP processes this assertion from the IdP and establishes an authenticated session with the subscriber. Optionally, the RP receives identity attributes from the IdP representing the subscriber account, either in the assertion or through an identity API.\n \n\n\nIn all transactions, the parties involved enter into a trust agreement, described in Sec. 3.4. This agreement establishes which parties are fulfilling which roles, and its execution represents initial permission for the systems in question to connect. The list of available subscriber identity attributes is established in this step, though the decision of which attributes are released to a given RP for a given transaction is finalized during the federation transaction itself.\n\nIn a federated identity transaction, the IdP is the source of identity and authentication attributes for the RP. The normal flow of information for a federation transaction is from the IdP to the RP. Due to the directional nature of this information flow, the IdP is considered to be upstream of the RP and the RP is considered to be downstream of the IdP. It is also possible for additional information to flow back up from the RP, particularly through use of shared signals as discussed in Sec. 
4.8.\n\n\\clearpage\n\n\nTrust Agreements\n\nTrust agreements SHALL be established either:\n\n\n as the result of an agreement by the federated parties, prior to the federation transaction, or\n as the result of decision or action by the subscriber, during the federation transaction.\n\n\nApriori Trust Agreement Establishment\n\nWhen the trust agreement is established by the federated parties prior to the federation transaction, the trust agreement SHALL establish the following terms, which MAY vary per IdP and RP relationship:\n\n\n The set of subscriber attributes the CSP makes available to the IdP as part of the subscriber account\n The set of subscriber attributes the IdP can make available to the RP\n The attribute storage policy of the IdP for the subscriber account, including any available means for the subscriber to request deletion\n Any additional attribute sources that the IdP receives applicable subscriber attributes from\n What if any identity APIs are made available by the IdP, either directly or through an external provider, and which subscriber attributes are available at these APIs\n The population of subscriber accounts that the IdP can create assertions for\n Any additional uses of subscriber information, beyond providing the identity service\n The set of subscriber attributes that the RP will request (a subset of the attributes made available)\n The purpose for each attribute requested by the RP\n The attribute storage policy of the RP for the RP subscriber account, including any available means for the subscriber to request deletion\n The use of any shared signaling between the IdP and RP\n The authorized party responsible for decisions regarding the release of subscriber attributes to the RP (e.g., the IdP organization, the subscriber, etc.)\n The means of informing subscribers about attribute release to the RP\n The xALs available from the IdP\n The xALs required by the RP\n\n\nThe terms of the trust agreement SHALL be available to the 
operators of the RP and the IdP upon its establishment. The terms of the trust agreement SHALL be made available to subscribers upon request to the IdP or RP.\n\nThe IdP and RP SHALL each assess their respective redress mechanisms for their efficacy in achieving a resolution of complaints or problems and disclose the results of this assessment as part of the trust agreement. See Sec. 3.4.3 for additional requirements and considerations for redress mechanisms.\n\nIf FAL3 is allowed within the trust agreement, the trust agreement SHALL stipulate the following terms regarding holder-of-key assertions and bound authenticators (see Sec. 3.14 and Sec. 3.15):\n\n\n The means by which holder-of-key assertions can be verified by the RP (such as a common trusted PKI system)\n The means by which the RP can associate holder-of-key assertions with specific RP subscriber accounts (such as attribute-based account resolution or pre-provisioning)\n Whether bound authenticators are supplied by the RP or by the subscriber\n Documentation of the binding ceremony used for any subscriber-provided bound authenticators\n\n\nRuntime decisions at the IdP, as described in Sec. 
4.6.1.3, MAY be used to further limit which subscriber attributes are sent between parties in the federated authentication process (e.g., a runtime decision could opt to not disclose an email address even though this attribute was included in the terms of the trust agreement).\n\nThe IdP and RP SHALL exchange only the minimum data necessary to achieve the function of the system.\n\nThe trust agreement SHALL be reviewed periodically to ensure it is still fit for purpose, and to avoid unnecessary data exchange and over-collection of subscriber data.\n\nSubscriber-driven Trust Agreement Establishment\n\nWhen the trust agreement is established as the result of a subscriber’s decision, such as a subscriber starting a federation transaction between an RP and their IdP who have no established agreement, the trust agreement is anchored by the subscriber. Consequently, the following terms SHALL be disclosed to the subscriber upon request to the IdP and to the RP during the runtime decision at the IdP as described in Sec. 
4.6.1.3:\n\n\n The set of subscriber attributes the CSP makes available to the IdP\n Any additional attribute sources that the IdP receives applicable subscriber attributes from\n What if any identity APIs are made available by the IdP, either directly or through an external provider, and which subscriber attributes are available at these APIs\n The set of subscriber attributes the IdP can make available to the RP\n The attribute storage policy of the IdP for the subscriber account, including any available means for the subscriber to request deletion\n The use of any shared signaling between the IdP and RP\n The population of subscriber accounts that the IdP can create assertions for\n Any additional uses of subscriber information, beyond providing the identity service\n The xALs available from the IdP\n\n\nThe IdP SHALL assess its redress mechanisms for their efficacy in achieving a resolution of complaints or problems and disclose the results of this assessment to the subscriber. See Sec. 3.4.3 for additional requirements and considerations for redress mechanisms.\n\nThe release of subscriber attributes SHALL be managed using a runtime decision at the IdP, as described in Sec. 4.6.1.3. The authorized party SHALL be the subscriber.\n\nThe following terms of the trust agreement SHALL be disclosed to the subscriber during the runtime decision:\n\n\n The set of subscriber attributes that the RP will request (a subset of the attributes made available by the IdP)\n The purpose for each attribute requested by the RP\n The attribute storage policy of the RP for the RP subscriber account, including any available means for the subscriber to request deletion\n The xALs required by the RP\n\n\nNote that all information disclosed to the subscriber needs to be conveyed in a manner that is understandable and actionable, as discussed in Sec. 
8.\n\nDiscovery and Registration\n\nTo perform a federation transaction with a general-purpose IdP, the RP SHALL associate the assertion signing keys and other relevant configuration information with the IdP’s identifier, as stipulated by the trust agreement. If these are retrieved over a network connection, request and retrieval SHALL be made over a secure protected channel from a location associated with the IdP’s identifier by the trust agreement. In many federation protocols, this is accomplished by the RP fetching the public keys and configuration data from a URL known to be controlled by the IdP or offered on the IdP’s behalf. It is also possible for the RP to be configured directly with this information in a static fashion, whereby the RP’s administrator enters the IdP information directly into the RP software.\n\nAdditionally, the RP SHALL register its information either with the IdP or with an authority the IdP trusts, as stipulated by the trust agreement. In many federation protocols, the RP is assigned an identifier during this stage, which the RP will use in subsequent communication with the IdP.\n\nIn all of these requirements, the IdP MAY use a trusted third party to facilitate its discovery and registration processes, so long as that trusted third party is identified in the trust agreement. For example, a consortium could make use of a hosted service that collects the configuration records of IdPs and RPs directly from participants. Instead of going to the IdP directly for its discovery record, an RP would instead go to this service. 
The IdP would in turn go to this service to find the identifiers and configuration information for RPs that are needed to connect.\n\nManual Registration\n\nAt all FALs, the cryptographic keys and identifiers of the RP and IdP can be exchanged in a manual process, whereby the administrator of the RP submits the RP’s configuration to the IdP (either directly or through a trusted third party) and receives the identifier to use with that IdP. The RP administrator then configures the RP with this identifier and any additional information needed for the federation transaction to continue.\n\nAs this is a manual process, the registration happens prior to the federation transaction beginning.\n\nThis process MAY be facilitated by some level of automated tooling, whereby the manual configuration points the systems in question to a trusted source of information that can be updated over time. If such automation is used, the trust agreement SHALL enumerate the allowable terms of the cryptographic key distribution and assignment, including allowable cache lifetimes.\n\nDynamic Registration\n\nAt FAL1 and FAL2, the cryptographic keys and identifiers of the RP can be exchanged in a dynamic process, whereby the RP software presents its configuration to the IdP (either directly or through a trusted third party) and receives the identifier to use with that IdP. This process is specific to the federation protocol in use but requires machine-readable configuration data to be made available over the network. All transmission of configuration information SHALL be made over a secure protected channel to endpoints associated with the IdP’s identifier by the trust agreement.\n\nIdPs SHOULD consider the risks of information leakage to multiple RP instances and take appropriate countermeasures, such as issuing PPIs to dynamically registered RPs as discussed in Sec. 3.3.1.\n\nDynamic registration SHOULD be augmented by attestations about the RP software and device, as discussed in Sec. 
3.5.3.\n\n[OIDC-Registration] defines a protocol for dynamic registration of RPs at an OpenID Connect IdP.\n\nSubscriber Authentication at the IdP\n\nIn a federation context, the IdP acts as the verifier for the authenticator bound to the subscriber account, as described in [SP800-63B].\nVerification of the authenticator creates an authentication event which begins the authenticated session at the IdP. This authenticated session serves as the basis of the IdP’s claim that the subscriber is present.\n\nThe IdP SHALL require the subscriber to have an authenticated session before any of the following events:\n\n\n Approval of attribute release\n Creation and issuance of an assertion\n Establishment of a subscriber-driven trust agreement.\n\n\nAdditional requirements for session management and reauthentication are discussed in Sec. 4.7.\n\nAuthentication and Attribute Disclosure\n\nThe decision of whether a federation transaction proceeds SHALL be determined by the authorized party stipulated by the trust agreement. The decision can be calculated in a variety of ways, including:\n\n\n an allowlist, which determines the circumstances under which the system can allow the federation transaction to proceed in an automated fashion;\n a blocklist, which determines the circumstances under which the system will not allow the federation transaction to proceed; and\n a runtime decision, which allows the authorized party to decide if the transaction can proceed and under what precise terms. Note that a runtime decision can be stored and applied to future transactions.\n\n\nThe applicability of an allowlist, blocklist, or runtime decision can be influenced by aspects of the federation transaction, including the identity of the IdP and RP, the subscriber attributes requested, the xAL required, and other factors. 
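The three decision modes (allowlist, blocklist, runtime decision) can be sketched as one evaluation order. The entry shapes and function name are assumptions made for illustration; real deployments would draw these inputs from the trust agreement and local policy:

```python
# Hypothetical policy check: blocklist wins, then an allowlist entry is
# consulted, and anything else falls through to a runtime decision by
# the authorized party.
def decide(rp_id: str, requested_attrs: set,
           blocklist: set, allowlist: dict) -> tuple:
    if rp_id in blocklist:
        # Blocklisted RPs never receive an assertion.
        return ("deny", set())
    entry = allowlist.get(rp_id)
    if entry is not None:
        extra = requested_attrs - entry
        if extra:
            # Attributes beyond the allowlist entry need a runtime
            # decision (alternatives: redact to the entry, or deny).
            return ("runtime", requested_attrs)
        return ("allow", requested_attrs & entry)
    # Default policy: a runtime authorization decision is required.
    return ("runtime", requested_attrs)
```

A remembered runtime decision could be modeled as adding a subscriber-scoped allowlist entry after approval, which is why the allowlist lookup comes before the runtime fallback.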
These decisions can be facilitated by risk management systems, federation authorities, and local system policies.\n\nFor a non-normative example of an RP that has been allowlisted at an IdP for a set of subscribers to facilitate single-sign-on for an enterprise application, see Sec. 10.5.\n\nThe IdP SHALL provide effective mechanisms for redress of subscriber complaints or problems (e.g., subscriber identifies an inaccurate attribute value). See Sec. 3.4.3 for additional requirements and considerations for redress mechanisms.\n\nIdP-Controlled Decisions\n\nIdP Allowlists of RPs\n\nIn an a priori trust agreement, IdPs MAY establish allowlists of RPs authorized to receive authentication and attributes from the IdP without a runtime decision from the subscriber. When placing an RP on its allowlist, the IdP SHALL confirm that the RP abides by the terms of the trust agreement. The IdP SHALL determine which identity attributes are passed to the allowlisted RP upon authentication. IdPs SHALL make allowlists available to subscribers as described in Sec. 7.2.\n\nIdP allowlists SHALL uniquely identify RPs through the means of domain names, cryptographic keys, or other identifiers applicable to the federation protocol in use. Any entities that share an identifier SHALL be considered equivalent for the purposes of the allowlist. Allowlists SHOULD be as specific as possible to avoid unintentional impersonation of an RP.\n\nIdP allowlist entries for an RP SHALL indicate which attributes are included as part of an allowlisted decision. If additional attributes are requested by the RP, the request SHALL be either:\n\n\n subject to a runtime decision of the authorized party to approve the additional attributes requested,\n redacted to only the attributes in the allowlist entry, or\n denied outright by the IdP.\n\n\nIdP allowlists MAY include other information, such as the xALs under which the allowlist entry is applied. 
For example, an IdP could use an allowlist entry to bypass a consent screen for an FAL1 transaction but require confirmation of consent from the subscriber during an FAL3 transaction.\n\nIdP Blocklists of RPs\n\nIdPs MAY establish blocklists of RPs not authorized to receive authentication assertions or attributes from the IdP, even if requested to do so by the subscriber. If an RP is on an IdP’s blocklist, the IdP SHALL NOT produce an assertion targeting the RP in question under any circumstances.\n\nIdP blocklists SHALL uniquely identify RPs through the means of domain names, cryptographic keys, or other identifiers applicable to the federation protocol in use. Any entities that share an identifier SHALL be considered equivalent for the purposes of the blocklist. For example, a wildcard domain identifier of “*.example.com” would match the domains “www.example.com”, “service.example.com”, and “unknown.example.com” equally. All three of these sites would be blocked by the same blocklist entry.\n\nIdP Runtime Decisions\n\nEvery RP that is in a trust agreement with an IdP but not on an allowlist with that IdP SHALL be governed by a default policy in which runtime authorization decisions will be made by an authorized party identified by the trust agreement. Since the runtime decision occurs during the federation transaction, the authorized party is generally a person and, in most circumstances, is the subscriber; however, it is possible for another party such as an administrator to be prompted on behalf of the subscriber. Note that in a subscriber-driven trust agreement, a runtime decision with the subscriber is the only allowable means to authorize the release of subscriber attributes.\n\nWhen processing a runtime decision, the IdP prompts the authorized party interactively during the federation transaction. The authorized party provides consent to release an authentication assertion and specific attributes to the RP. 
The IdP SHALL provide the authorized party with explicit notice and prompt them for positive confirmation before any attributes about the subscriber are transmitted to the RP. At a minimum, the notice SHOULD be provided by the party in the position to provide the most effective notice and obtain confirmation, consistent with Sec. 7.2. The IdP SHALL disclose which attributes will be released to the RP if the transaction is approved. If the federation protocol in use allows for optional or selective attribute disclosure at runtime, the authorized party SHALL be given the option to decide whether to transmit specific attributes to the RP without terminating the federation transaction entirely.\n\nIf the authorized party is the subscriber, the IdP SHALL provide mechanisms for the subscriber to view the attribute values and derived attribute values to be sent to the RP. To mitigate the risk of unauthorized exposure of sensitive information (e.g., shoulder surfing), the IdP SHALL, by default, mask sensitive information displayed to the subscriber. For more details on masking, see Sec. 8 on usability considerations.\n\nAn IdP MAY employ mechanisms to remember and re-transmit the same set of attributes to the same RP, remembering the authorized party’s decision. This mechanism is associated with the subscriber account as managed by the IdP. If such a mechanism is provided, the IdP SHALL allow the authorized party to revoke such remembered access at a future time.\n\nRP-Controlled Decisions\n\nRP Allowlists of IdPs\n\nRPs MAY establish allowlists of IdPs from which the RP will accept authentication and attributes without a runtime decision from the subscriber to use the IdP. In practice, many RPs interface with only a single IdP, and this IdP is allowlisted as the only possible entry for that RP. When placing an IdP in its allowlist, the RP SHALL confirm that the IdP abides by the terms of the trust agreement. 
Note that this confirmation can be facilitated by a federation authority or be undertaken directly by the RP.\n\nRP allowlists SHALL uniquely identify IdPs through the means of domain names, cryptographic keys, or other identifiers applicable to the federation protocol in use.\n\nRP allowlist entries MAY be applied based on aspects of the subscriber account (such as the xALs required for the transaction). For example, an RP could use a runtime decision for FAL1 transactions but require an allowlisted IdP for FAL3 transactions.\n\nRP Blocklists of IdPs\n\nRPs MAY also establish blocklists of IdPs that the RP will not accept authentication or attributes from, even when requested by the subscriber. A blocklisted IdP can be otherwise in a valid trust agreement with the RP, for example if both are under the same federation authority.\n\nRP blocklists SHALL uniquely identify IdPs through the means of domain names, cryptographic keys, or other identifiers applicable to the federation protocol in use.\n\nRP Runtime Decisions\n\nEvery IdP that is in a trust agreement with an RP but not on an allowlist with that RP SHALL be governed by a default policy in which runtime authorization decisions will be made by the authorized party indicated in the trust agreement. In this mode, the authorized party is prompted by the RP to select or enter which IdP to contact for authentication on behalf of the subscriber. This process can be facilitated through the use of a discovery mechanism allowing the subscriber to enter a human-facing identifier such as an email address. This process allows the RP to programmatically select the appropriate IdP for that identifier. Since the runtime decision occurs during the federation transaction, the authorized party is generally a person and, in most circumstances, is the subscriber.\n\nThe RP MAY employ mechanisms to remember the authorized party’s decision to use a given IdP. 
Since this mechanism is employed prior to authentication at the RP, the manner in which the RP provides this mechanism (e.g., a browser cookie outside the authenticated session) is separate from the RP subscriber account described in Sec. 3.10.1. If such a mechanism is provided, the RP SHALL allow the authorized party to revoke such remembered options at a future time.\n\nProvisioning Models for RP subscriber accounts\n\nThe lifecycle of the provisioning process for an RP subscriber account varies depending on factors including the trust agreement discussed in Sec. 3.4 and the deployment pattern of the IdP and RP. However, in all cases, the RP subscriber account SHALL be provisioned at the RP prior to the establishment of an authenticated session at the RP in one of the following ways:\n\n\n Just-In-Time Provisioning\n An RP subscriber account is created automatically the first time the RP receives an assertion with an unknown federated identifier from an IdP. Any identity attributes learned during the federation process, either within the assertion or through an identity API as discussed in Sec. 3.11.3, MAY be associated with the RP subscriber account. Accounts provisioned in this way are bound to the federated identifier in the assertion used to provision them.\nThis is the most common form of provisioning in federation systems, as it requires the least coordination between the RP and IdP. However, in such systems, the RP SHALL be responsible for managing any cached attributes it might have. See Fig. 7.\n\n\nFig. 7. Just-In-Time Provisioning\n\n\n\n\n Pre-provisioning\n An RP subscriber account is created by the IdP pushing the attributes to the RP or the RP pulling attributes from the IdP. Pre-provisioning of accounts generally occurs in bulk through a provisioning API as discussed in Sec. 4.6.5, as the provisioning occurs prior to the represented subscribers authenticating through a federation transaction. 
Pre-provisioned accounts SHALL be bound to a federated identifier at the time of provisioning. Any time a particular federated identifier is seen by the RP, the associated account can be logged in as a result. \nThis form of provisioning requires infrastructure and planning on the part of the IdP and RP, but these processes can be facilitated by automated protocols. Additionally, the IdP and RP must keep the set of provisioned accounts synchronized over time as discussed in Sec. 4.6.4. See Fig. 8.\n\n In this model, the RP also receives attributes about subscribers who have not yet interacted with the RP (and who may never do so). This is in contrast to other models, where the RP receives information only about the subset of subscribers that use the RP, and then only after the subscriber uses the RP for the first time. The privacy considerations of the RP having access to this information prior to a federation transaction SHALL be accounted for in the trust agreement.\n \n\n\nFig. 8. Pre-Provisioning\n\n\n\n\n Ephemeral\n An RP subscriber account is created when processing the assertion, but then the RP subscriber account is terminated when the authenticated session ends. This process is similar to a just-in-time provisioning, but the RP keeps no long-term record of the account when the session is complete, in accordance with Sec. 3.10.3.\nThis form of provisioning is useful for RPs that fully externalize access rights to the IdP, allowing the RP to be more simplified with less internal state. However, this pattern is not common because even the simplest RPs tend to have a need to track state within the application or at least keep a record of actions associated with the federated identifier. See Fig. 9.\n\n\nFig. 9. Ephemeral Provisioning\n\n\n\n\\clearpage\n\n\n\n Other\n Other RP subscriber account provisioning models are possible but the details of such models are outside the scope of these guidelines. 
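As a non-normative sketch, the just-in-time model described above might look like the following. The account store and field names are hypothetical; the essential point is that the RP subscriber account is created on first sight of an unknown federated identifier and bound to it.

```python
# Hypothetical sketch of just-in-time provisioning: an RP subscriber
# account is created the first time an unknown federated identifier
# (issuer, subject) arrives in a valid assertion. Non-normative.

accounts = {}  # keyed by federated identifier: (issuer, subject)

def provision_jit(issuer, subject, attrs):
    """Return the RP subscriber account, creating it on first use."""
    fed_id = (issuer, subject)
    account = accounts.get(fed_id)
    if account is None:
        # First time this federated identifier is seen: create the account
        # and bind it to the identifier from the assertion.
        account = {"federated_id": fed_id, "attrs": dict(attrs)}
        accounts[fed_id] = account
    else:
        # Returning subscriber: refresh the cached attributes, which the
        # RP is responsible for keeping up to date.
        account["attrs"].update(attrs)
    return account

a1 = provision_jit("https://idp.example.gov", "user-1234",
                   {"email": "subscriber@example.gov"})
a2 = provision_jit("https://idp.example.gov", "user-1234", {"name": "Pat"})
assert a1 is a2  # the same RP subscriber account on repeat visits
```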
The details of any alternative provisioning model SHALL be included in the privacy risk assessments of the IdP and RP.\n\n\nAll organizations SHALL document their provisioning models as part of their trust agreement.\n\nAttribute Synchronization\n\nIn a federated process, the IdP and RP each have their own stores of identity attributes associated with the subscriber account. The IdP has a direct view of the subscriber account’s attributes, but the RP subscriber account is derived from a subset of those attributes that are presented during the federation transaction. Therefore, it is possible for the IdP’s and RP’s attribute stores to diverge from each other over time.\n\nFrom the RP’s perspective, the IdP is the trusted source for any attributes that the IdP asserts as being associated with the subscriber account at the IdP. The provenance of the IdP’s attributes, and their validation process, is stipulated in the trust agreement.\n\nHowever, the RP MAY additionally collect, and optionally verify, other attributes to associate with the RP subscriber account, as discussed in Sec. 4.6.6. Sometimes, these attributes can even override what is asserted by the IdP. For example, if an IdP asserts a full display name for the subscriber, the RP can allow the subscriber to provide an alternative preferred name for use at the RP.\n\nThe IdP SHOULD signal downstream RPs when the attributes of a subscriber account available to the RP have been updated, and the RP MAY respond to this signal by updating the attributes in the RP subscriber account. This synchronization can be accomplished using shared signaling as described in Sec. 4.8, through a provisioning API as described in Sec. 4.6.5, or by providing a signal in the assertion (e.g., a timestamp indicating when relevant attributes were last updated) allowing the RP to determine that its cache is out of date. If the RP is granted access to an identity API as in Sec. 
3.11.3, the IdP SHOULD allow the RP access to the API for sufficient time to perform synchronization operations after the federation transaction has concluded. For example, if the assertion is valid for five minutes, access to the identity API could be valid for 30 minutes to allow the RP to fetch and update attributes out of band.\n\nThe IdP SHOULD signal downstream RPs when a subscriber account is terminated, or when the subscriber account’s access to an RP is revoked. This can be accomplished using shared signaling as described in Sec. 4.8 or through a provisioning API as described in Sec. 4.6.5. Upon receiving such a signal, the RP SHALL process the RP subscriber account as stipulated in the trust agreement. If the RP subscriber account is terminated, the RP SHALL remove all personal information associated with the RP subscriber account, in accordance with Sec. 3.10.3. If the reason for termination is suspicious or fraudulent activity, the IdP SHALL include this reason in its signal to the RP to allow the RP to review the account’s activity at the RP for suspicious activity, if specified in the trust agreement with that RP.\n\nProvisioning APIs\n\nAs part of some proactive forms of provisioning, the RP can be given access to subscriber attributes through a general-purpose identity API known as a provisioning API. This type of API allows an IdP to push attributes for a range of subscriber accounts, and sometimes allows an RP to query the attributes of these subscriber accounts directly. Since access to the API is granted outside the context of a federation transaction, access to the provisioning API for a given subscriber does not indicate to the RP that a given subscriber has been authenticated.\n\nThe attributes in the provisioning API available to a given RP SHALL be limited to only those necessary for the RP to perform its functions, including any audit and security purposes as discussed in Sec. 3.9.1. 
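A minimal sketch of this attribute-minimization requirement, assuming the allowed attribute set for each RP has been recorded from the trust agreement (the function and field names are illustrative, not normative):

```python
# Hypothetical sketch: before pushing subscriber records through a
# provisioning API, the IdP limits each record to the attribute set the
# RP needs, as documented in the trust agreement. Non-normative.

def minimize_for_rp(records, allowed_attrs):
    """Strip each subscriber record down to the attributes the RP may receive."""
    return [
        {k: v for k, v in record.items() if k in allowed_attrs}
        for record in records
    ]

records = [{"sub": "user-1", "email": "a@example.gov", "ssn": "000-00-0000"}]
print(minimize_for_rp(records, {"sub", "email"}))
# the "ssn" attribute is never sent to this RP
```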
As part of establishing the trust agreement, the IdP SHALL document when an RP is given access to a provisioning API including at least the following:\n\n\n the purpose for the access using the provisioning model;\n the set of attributes made available to the RP;\n whether the API functions as a push to the RP, a pull from the RP, or both; and\n the population of subscribers whose attributes are made available to the RP.\n\n\nAccess to the provisioning API SHALL occur over a mutually authenticated protected channel. The exact means of authentication varies depending on the specifics of the API and whether it is a push model (where the IdP connects to the RP) or a pull model (where the RP connects to the IdP).\n\nA provisioning API SHALL NOT be made available under a subscriber-driven trust agreement. The IdP SHALL NOT make a provisioning API available to any RP outside of an established trust agreement. The IdP SHALL provide access to a provisioning API only as part of a federated identity relationship with an RP to facilitate federation transactions with that RP and related functions such as signaling revocation of the subscriber account. The IdP SHALL revoke an RP’s access to the provisioning API once access is no longer required by the RP for its functioning purposes or when the trust agreement is terminated.\n\nAny provisioning API provided to the RP SHALL be under the control and jurisdiction of the IdP. External attribute providers MAY be used as information sources by the IdP to provide attributes through this provisioning API, but the IdP is responsible for the content and accuracy of the information provided by the referenced attribute providers.\n\nWhen a provisioning API is in use, the IdP SHALL signal to the RP when a subscriber account has been terminated. 
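One hypothetical sketch of the RP-side handling of such a termination signal follows; the account structure and names are illustrative, and a real RP would consult its trust agreement for the exact termination behavior.

```python
# Hypothetical sketch: RP-side handling of an account-termination signal
# received from the IdP alongside a provisioning API. Non-normative.

accounts = {
    ("https://idp.example.gov", "user-1"): {
        "federated_ids": {("https://idp.example.gov", "user-1")},
        "idp_attrs": {"email": "a@example.gov"},   # sourced from the IdP
        "local_attrs": {"preferred_name": "Pat"},  # collected by the RP
    }
}

def on_subscriber_terminated(issuer, subject):
    fed_id = (issuer, subject)
    account = accounts.get(fed_id)
    if account is None:
        return
    # Remove the binding of this federated identifier from the account.
    account["federated_ids"].discard(fed_id)
    # Personal information sourced from the provisioning API is removed.
    account["idp_attrs"].clear()
    if not account["federated_ids"]:
        # No other federated identifiers remain: terminate the account,
        # removing all remaining personal information with it.
        del accounts[fed_id]

on_subscriber_terminated("https://idp.example.gov", "user-1")
```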
When receiving such a signal, the RP SHALL remove the binding of the federated identifier from the account and SHALL terminate the account if necessary (e.g., there are no other federated identifiers linked to this account or the trust agreement dictates such an action). The RP SHALL remove all personal information sourced from the provisioning API in accordance with Sec. 3.10.3.\n\nCollection of Additional Attributes by the RP\n\nThe RP MAY collect and maintain additional attributes from the subscriber beyond those provided by the IdP. For example, the RP could collect a preferred display name directly from the subscriber that is not provided by the IdP. The RP could also have a separate agreement with an attribute provider that gives the RP access to an identity API not associated with the IdP. For example, the RP could receive a state license number from the IdP, but use a separate attribute verification API to check if a particular license number is currently valid. The assertion from the IdP binds the license to the subscriber, but the attribute verification API provides additional information beyond what the IdP can share or be authoritative for.\n\nThese attributes are governed separately from the trust agreement since they are collected by the RP outside of a federation transaction. All attributes associated with an RP subscriber account, regardless of their source, SHALL be removed when the RP subscriber account is terminated, in accordance with Sec. 3.10.3.\n\nThe RP SHALL disclose to the subscriber the purpose for collection of any additional attributes. These attributes SHALL be used solely for the stated purposes of the RP’s functionality and SHALL NOT have any secondary use, including communication of said attributes to other parties.\n\nThe RP SHALL provide an effective means of redress for the subscriber to update and remove these additionally-collected attributes from the RP subscriber account. See Sec. 
3.4.3 for additional requirements and considerations for redress mechanisms.\n\nThe following requirement applies to federal agencies, regardless of whether they operate their own identity service or use an external CSP as part of their identity service:\n\n\n An RP SHALL disclose any additional attributes collected, and their use, as part of its System of Records Notice (SORN).\n\n\nTime-based Removal of RP Subscriber Accounts\n\nIf an RP is using a just-in-time provisioning mechanism, the RP only learns of the existence of a subscriber account when that account is first used at the RP. If the IdP does not inform the RP of terminated subscriber accounts using shared signaling as described in Sec. 4.8, an RP could accumulate RP subscriber accounts that are no longer accessible from the IdP. This poses a risk to the RP for holding personal information in the RP subscriber accounts. In such circumstances, the RP MAY employ a time-based mechanism to identify RP subscriber accounts for termination that have not been accessed after a period of time tailored to the usage patterns of the application. For example, an RP that is usually accessed on a weekly basis could set a timeout of 120 days since last access at the RP to mark the RP subscriber account for termination. An RP that expects longer gaps between access, such as a service used annually, should have a much longer time frame, such as five years.\n\nWhen processing such an inactive account, the RP SHALL provide sufficient notice to the subscriber about the pending termination of the account and provide the subscriber with an option to re-activate the account prior to its scheduled termination. Upon termination, the RP SHALL remove all personal information associated with the RP subscriber account, in accordance with Sec. 3.10.3.\n\nReauthentication and Session Requirements in Federated Environments\n\nIn a federated environment, the RP manages its sessions separately from any sessions at the IdP. 
The assertion is related to both sessions but its validity period is ultimately independent of them.\n\nAs shown in Fig. 10, an assertion is created during an authenticated session at the IdP, and processing an assertion creates an authenticated session at the RP. The validity time window of an assertion is used to manage the RP’s processing of the assertion but does not indicate the lifetime of the authenticated session at the IdP or the RP. If a request comes to the IdP for a new federation transaction while the subscriber’s session is still valid at the IdP, a new and separate assertion would be created with its own validity time window. Similarly, after the RP consumes the assertion, the validity of the RP’s session is independent of the validity of the assertion, and in most cases the authenticated session at the RP will far outlive the validity of the assertion. Access granted to an identity API is likewise independent of the validity of the assertion or the lifetime of the authenticated session at the RP.\n\nFig. 10. Session Lifetimes\n\n\n\nThe IdP ending the subscriber’s session at the IdP will not necessarily cause any sessions that subscriber might have at downstream RPs to end as well. The RP and IdP MAY communicate end-session events to each other, if supported by the federation protocol or through shared signaling (see Sec. 4.8).\n\nAt the time of a federated transaction request, the subscriber could have a pre-existing authenticated session at the IdP which MAY be used to generate an assertion to the RP. The IdP SHALL communicate to the RP any information the IdP has regarding the time of the subscriber’s latest authentication event at the IdP, and the RP MAY use this information in making authorization and access decisions. Depending on the capabilities of the federation protocol in use, the IdP SHOULD allow the RP to request that the subscriber provide a fresh authentication at the IdP instead of using the existing session at the IdP. 
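As a non-normative sketch, an IdP deciding whether an existing session satisfies an RP's maximum acceptable authentication age might do the following (function and parameter names are illustrative; timestamps are in seconds):

```python
import time

# Hypothetical sketch: deciding whether the subscriber's existing IdP
# session satisfies the RP's maximum acceptable authentication age, or
# whether a fresh authentication event is required. Non-normative.

def needs_reauthentication(last_auth_time, max_auth_age_seconds, now=None):
    """True if the last authentication event is older than the RP allows."""
    now = time.time() if now is None else now
    return (now - last_auth_time) > max_auth_age_seconds

# The subscriber authenticated 30 minutes ago; the RP accepts an
# authentication age of up to 15 minutes, so reauthentication is needed.
now = 10_000
print(needs_reauthentication(now - 30 * 60, 15 * 60, now=now))  # True
```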
For example, suppose the subscriber authenticates at the IdP for one transaction. Then, 30 min later, the subscriber starts a federation transaction at the RP. Depending on xAL requirements, the subscriber’s existing session at the IdP can be used to avoid prompting the subscriber for their authenticators. The resulting assertion to the RP will indicate that the last time the subscriber had authenticated to the IdP was 30 min in the past. The RP can then use this information to determine whether this is reasonable for the RP’s needs, and, if possible within the federation protocol, request the IdP to prompt the subscriber for a fresh authentication event instead.\n\nAn RP requiring authentication through a federation protocol SHALL specify the maximum acceptable authentication age to the IdP, either through the federation protocol (if possible) or through the terms of the trust agreement. The authentication age represents the time since the last authentication event in the subscriber’s session at the IdP, and the IdP SHALL reauthenticate the subscriber if they have not been authenticated within that time period. The IdP SHALL communicate the authentication event time to the RP to allow the RP to decide if the assertion is sufficient for authentication at the RP and to determine the time for the next reauthentication event.\n\nIf an RP is granted access to an identity API at the same time the RP receives an assertion, the lifetime of the access to the identity API is independent from the lifetime of the assertion. As a consequence, the RP’s ability to successfully fetch additional attributes through an identity API SHALL NOT be used to establish a session at the RP. 
Likewise, inability to access an identity API SHOULD NOT be used to end the session at the RP.\n\nWhen the RP is granted access to the identity API, the RP is often also granted access to other APIs at the same time, such as granting access to a subscriber’s calendar and data storage while also logging in. It is common for this access to be valid long after the assertion has expired and possibly after the session with the RP has ended, allowing the RP to access these non-identity APIs on the subscriber’s behalf while the subscriber is no longer present at the RP. Providing access to non-identity APIs is outside the scope of these guidelines.\n\nThe RP MAY terminate its authenticated session with the subscriber or restrict access to the RP’s functions if the assertion, authentication event, or attributes do not meet the RP’s requirements. For example, if an RP is configured to allow access to certain high-risk functionality only if the federation transaction was at FAL3, but the incoming assertion only meets the requirements for FAL2, the RP could decide to deny access to the high-risk functionality while allowing access to lower-risk functionality, or the RP could choose to terminate the session entirely.\n\nSee [SP800-63B] Sec. 5 for more information about session management requirements that apply to both IdPs and RPs.\n\nShared Signaling\n\nIn some environments, it is useful for the IdP and RP to send information to each other outside of the federation transaction. These signals can communicate important changes in state between parties that would not be otherwise known. The use of any shared signaling SHALL be documented in the trust agreement between the IdP and RP. Signaling from the IdP to the RP SHALL require an a priori trust agreement. 
Signaling from the RP to the IdP MAY be used in both a priori and subscriber-driven trust agreements.\n\nAny use of shared signaling SHALL be documented and made available to the authorized party stipulated by the trust agreement. This documentation SHALL include the events under which a signal is sent, the information included in such a signal (including any attribute information), and any additional parameters sent with the signal. The use of shared signaling SHALL be subject to privacy review under the trust agreement.\n\nThe IdP SHOULD send a signal regarding the following changes to the subscriber account:\n\n\n The account has been terminated.\n The account is suspected of being compromised.\n Attributes of the account, including identifiers other than the federated identifier (such as email address or certificate common name), have changed.\n The possible range of IAL, AAL, or FAL for the account has changed.\n\n\nIf the RP receives a signal that an RP subscriber account is suspected of compromise, the RP SHOULD review actions taken by that account at the RP for suspicious activity.\n\nThe RP SHOULD send a signal regarding the following changes to the RP subscriber account:\n\n\n The account has been terminated.\n The account is suspected of being compromised.\n A bound authenticator is added by the RP.\n A bound authenticator is removed by the RP.\n\n\nIf the IdP receives a signal that a subscriber account is suspected of compromise, the IdP SHALL review actions taken by that account at the IdP for suspicious activity. 
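As a non-normative illustration, such a signal might be carried as a small event payload like the following. The field names are illustrative and loosely modeled on security event token payloads; a real deployment would sign the payload and use whatever event format the trust agreement documents.

```python
import json
import time

# Hypothetical sketch: the shape of a shared signal from an IdP to an RP
# reporting a change to a subscriber account. Field names are
# illustrative only; non-normative.

def account_signal(issuer, audience, subject, event, reason=None):
    payload = {
        "iss": issuer,            # the IdP sending the signal
        "aud": audience,          # the RP the signal is intended for
        "iat": int(time.time()),  # when the signal was issued
        "sub": subject,           # the federated identifier affected
        "event": event,           # e.g., "account-terminated"
    }
    if reason is not None:
        payload["reason"] = reason  # e.g., "suspected-fraud"
    return json.dumps(payload)

signal = account_signal("https://idp.example.gov", "https://rp.example.gov",
                        "user-1234", "account-terminated",
                        reason="suspected-fraud")
```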
If suspicious activity is confirmed at the IdP, the IdP SHALL signal any additional RPs the subscriber account was used for during the suspected time frame.\n\nAdditional signals from both the IdP and RP MAY be allowed subject to privacy and security review as part of the trust agreement.\n\nAssertion Contents\n\nAn assertion is a packaged set of attribute values or derived attribute values about or associated with an authenticated subscriber that is passed from the IdP to the RP in a federated identity system. Assertions contain a variety of information, including: assertion metadata, attribute values and derived attribute values about the subscriber, information about the subscriber’s authentication at the IdP, and other information that the RP can leverage (e.g., restrictions and validity time window). While the assertion’s primary function is to authenticate the user to an RP, the information conveyed in the assertion can be used by the RP for a number of use cases — for example, authorization or personalization of a website. These guidelines do not restrict RP use cases nor the type of protocol or data payload used to federate an identity, provided that the chosen solution meets all mandatory requirements contained herein.\n\nAssertions SHALL represent a discrete authentication event of the subscriber at the IdP and SHALL be processed as a discrete authentication event at the RP.\n\nAll assertions SHALL include the following attributes:\n\n\n Subject identifier: An identifier for the party to which the assertion applies (i.e., the subscriber).\n Issuer identifier: An identifier for the issuer of the assertion (i.e., the IdP).\n Audience identifier: An identifier for the party intended to consume the assertion (i.e., the RP). 
An assertion can contain more than one audience identifier at FAL1.\n Issuance time: A timestamp indicating when the IdP issued the assertion.\n Validity time window: A period of time outside of which the assertion SHALL NOT be accepted as valid by the RP for the purposes of authenticating the subscriber and starting an authenticated session at the RP. This is usually communicated by means of an expiration timestamp for the assertion in addition to the issuance timestamp.\n Assertion identifier: A value uniquely identifying this assertion, used to prevent attackers from replaying prior assertions.\n Authentication time: A timestamp indicating when the IdP last verified the presence of the subscriber at the IdP through a primary authentication event.\n Nonce: A cryptographic nonce, if one is provided by the RP.\n Signature: Digital signature or message authentication code (MAC), including key identifier, covering the entire assertion.\n\n\nAll assertions SHALL contain sufficient information to determine the following aspects of the federation transaction:\n\n\n The IAL of the subscriber account being represented in the assertion, or an indication that no IAL is asserted.\n The AAL used when the subscriber authenticated to the IdP, or an indication that no AAL is asserted.\n The IdP’s intended FAL of the federation process represented by the assertion.\n\n\nAt FAL3, the assertion SHALL include one of the following:\n\n\n The public key, key identifier, or other identifier for a holder-of-key assertion, or\n An indicator that verification of a bound authenticator is required to process this assertion.\n\n\nAssertions MAY also include additional items, including the following information:\n\n\n Attribute values and derived attribute values: Information about the subscriber.\n Attribute bundles: Collections of attributes in a signed bundle from the CSP.\n Attribute metadata: Additional information about one or more subscriber attributes, such as those described in 
[NISTIR8112].\n Authentication event: Additional details about the authentication event, such as the class of authenticator used.\n\n\nThe RP SHALL validate the assertion by checking that all the following are true:\n\n\n Signature validation: ensuring that the signature of the assertion is valid and corresponds to a key belonging to the IdP sending the assertion.\n Issuer verification: ensuring that the assertion was issued by the IdP the RP expects it to be from.\n Time validation: ensuring that the expiration and issue times are within acceptable limits of the current timestamp.\n Audience restriction: ensuring that this RP is the intended recipient of the assertion.\n Nonce: ensuring that the cryptographic nonce included in the RP’s request (if applicable) is included in the presentation.\n Transaction terms: ensuring that the IAL, AAL, and FAL represented by the assertion are allowable under the applicable trust agreement.\n\n\nAn RP SHALL treat subject identifiers as not inherently globally unique. Instead, the value of the assertion’s subject identifier is usually in a namespace under the assertion issuer’s control, as discussed in Sec. 3.3. This allows an RP to talk to multiple IdPs without incorrectly conflating subjects from different IdPs.\n\nAssertions MAY include additional attributes about the subscriber. Section 3.9 contains privacy requirements for presenting attributes in assertions. The RP MAY be given limited access to an identity API as discussed in Sec. 3.11.3, either in the same response as the assertion is received or through some other mechanism. The RP can use this API to fetch additional identity attributes for the subscriber that are not included in the assertion itself.\n\nThe assertion’s validity time window is the time between its issuance and its expiration. 
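The RP validation checklist above can be sketched as follows. The dict-based assertion and HMAC signature here are non-normative stand-ins for the federation protocol's actual signed assertion format; all names are illustrative.

```python
import hashlib
import hmac
import json
import time

# Hypothetical sketch of the RP's assertion-validation checklist.
# A real deployment would use the federation protocol's own assertion
# format and signature mechanism; this is illustrative only.

def validate_assertion(raw, signature, idp_key, expected_issuer,
                       expected_audience, expected_nonce, now=None):
    now = time.time() if now is None else now
    # Signature validation: the assertion must verify under the IdP's key.
    expected_sig = hmac.new(idp_key, raw, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected_sig):
        raise ValueError("bad signature")
    assertion = json.loads(raw)
    # Issuer verification: the assertion came from the expected IdP.
    if assertion["iss"] != expected_issuer:
        raise ValueError("unexpected issuer")
    # Audience restriction: this RP is an intended recipient.
    if expected_audience not in assertion["aud"]:
        raise ValueError("wrong audience")
    # Time validation: the current time falls in the validity window.
    if not (assertion["iat"] <= now < assertion["exp"]):
        raise ValueError("assertion outside validity window")
    # Nonce: the value from the RP's request must be echoed back.
    if assertion.get("nonce") != expected_nonce:
        raise ValueError("nonce mismatch")
    return assertion

key = b"shared-idp-key"  # stand-in for the IdP's signing key
raw = json.dumps({"iss": "https://idp.example.gov",
                  "aud": ["https://rp.example.gov"],
                  "iat": 1000, "exp": 1300,
                  "nonce": "n-123", "sub": "user-1234"}).encode()
sig = hmac.new(key, raw, hashlib.sha256).hexdigest()
ok = validate_assertion(raw, sig, key, "https://idp.example.gov",
                        "https://rp.example.gov", "n-123", now=1100)
```

A check against the terms of the trust agreement (IAL, AAL, and FAL) would follow the same pattern but depends on deployment-specific policy, so it is omitted here.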
This window needs to be large enough to allow the RP to process the assertion and create a local application session for the subscriber, but should not be longer than necessary for such establishment. Long-lived assertions have a greater risk of being stolen or replayed; a short assertion validity time window mitigates this risk. Assertion validity time windows SHALL NOT be used to limit the session at the RP. See Sec. 4.7 for more information.\n\nAssertion Requests\n\nWhen the federation transaction is initiated by the RP, the RP’s request for an assertion SHALL contain:\n\n\n An identifier for the RP\n A cryptographic nonce, to be returned in the assertion\n\n\nThe RP’s request SHOULD additionally contain:\n\n\n The set of identity attributes requested by the RP and their purpose of use at the RP; this is a subset of what is allowed by the trust agreement\n The requirements for the authentication event at the IdP\n\n\nNote that federation transactions are always initiated by the RP at FAL2 or higher.\n\nAssertion Presentation\n\nDepending on the specifics of the protocol, the RP and the IdP communicate with each other in two ways, which lends to two different ways in which an assertion can be passed from the IdP to the RP:\n\n\n The back channel, through a direct connection between the RP and IdP, not involving the subscriber directly; or\n The front channel, through a third party using redirects involving the subscriber and the subscriber’s browser.\n\n\nThere are tradeoffs with each model, but each requires the proper validation of the assertion. Assertions MAY also be proxied to facilitate federation between IdPs and RPs using different presentation methods, as discussed in detail in Sec. 3.2.3.\n\nBack-Channel Presentation\n\nIn the back-channel presentation model shown in Fig. 11, the subscriber is given an assertion reference to present to the RP, generally through the front channel. 
The assertion reference itself contains no information about the subscriber and SHALL be resistant to tampering and fabrication by an attacker. The RP presents the assertion reference to the IdP to fetch the assertion. How this is achieved varies from one protocol to the next. In the authorization code flow and some forms of the hybrid flow of [OIDC], the assertion (the ID Token) is presented in the back channel in exchange for the assertion reference (the authorization code). In the artifact binding profile of [SAML-Bindings], the SAML assertion is presented in the back channel.\n\nFig. 11. Back-channel Presentation\n\n\n\nAs shown in Fig. 11, the back-channel presentation model consists of three steps:\n\n\n The IdP sends an assertion reference to the subscriber through the front channel.\n The subscriber sends the assertion reference to the RP through the front channel.\n The RP presents the assertion reference and its RP credentials to the IdP through the back channel. The IdP validates the credentials and returns the assertion.\n\n\nThe assertion reference:\n\n\n SHALL be limited to use by a single RP.\n SHALL be single-use.\n SHALL be time limited, and SHOULD have a validity time window of no more than five minutes.\n SHALL be presented along with authentication of the RP to the IdP.\n SHALL NOT be predictable or guessable by an attacker.\n\n\nIn this model, the RP directly requests the assertion from the IdP, minimizing chances of interception and manipulation by a third party (including the subscriber themselves).\nMore network transactions are required in the back-channel method, but the information is limited to only those parties that need it. Since an RP is expecting to get an assertion only from the IdP directly as a result of its request, the attack surface is reduced. 
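The assertion reference properties listed above (single RP, single use, limited lifetime, unpredictability, and RP authentication at redemption) could be sketched at the IdP roughly as follows. The in-memory store and field names are illustrative assumptions only:

```python
import secrets
import time

_references = {}  # illustrative in-memory store of outstanding references

def issue_reference(rp_id, lifetime=300):
    """Mint an unpredictable, time-limited assertion reference bound
    to a single RP (lifetime capped at the recommended five minutes)."""
    ref = secrets.token_urlsafe(32)  # not guessable by an attacker
    _references[ref] = {"rp": rp_id, "exp": time.time() + min(lifetime, 300)}
    return ref

def redeem_reference(ref, authenticated_rp_id):
    """Exchange the reference for an assertion, at most once, and only
    by the RP for which the reference was issued."""
    entry = _references.pop(ref, None)  # pop enforces single use
    if entry is None or entry["rp"] != authenticated_rp_id:
        return None
    if time.time() > entry["exp"]:
        return None  # validity window elapsed
    return {"assertion_for": authenticated_rp_id}  # placeholder assertion
```

In a real deployment the IdP would also authenticate the RP's credentials before redemption and return a signed assertion rather than a placeholder.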
Consequently, it is more difficult to inject assertions directly into the RP, and this presentation method is recommended for FAL2 and above.\nSince the IdP and RP are already directly connected, the back-channel presentation method facilitates the use of identity APIs, as described in Sec. 3.11.3.\n\nNote that while it is technically possible for an assertion reference (which is single-audience) to result in a multi-audience assertion, this situation is unlikely. For this reason, back-channel presentation is practically limited to use with single-audience assertions.\n\nConveyance of the assertion reference from the IdP to the subscriber, as well as from the subscriber to the RP, SHALL be made over an authenticated protected channel. Conveyance of the assertion reference from the RP to the IdP, as well as the assertion from the IdP to the RP, SHALL be made over an authenticated protected channel.\n\nThe RP SHALL protect itself against injection of manufactured or captured assertion references by the use of cross-site scripting protection, rejecting assertion references outside of the correct stage of a federation transaction, or other accepted techniques discussed in Sec. 3.10.1.\nWhen assertion references are presented to the IdP, the IdP SHALL verify that the RP presenting the assertion reference is the same RP that made the assertion request resulting in the assertion reference. Examples of this are discussed in Sec. 10.12, such as the authorization code flow of [OIDC] with additional security profiles such as [FAPI].\n\nNote that in a federation proxy described in Sec. 3.2.3, the upstream IdP audience restricts the assertion reference and assertion to the proxy, and the proxy restricts any newly-created assertion references or assertions to the downstream RP.\n\nFront-Channel Presentation\n\nIn the front-channel presentation model shown in Fig. 12, the IdP creates an assertion and sends it to the RP by means of a third party, such as the subscriber’s user agent. 
In the implicit flow and some forms of the hybrid flow of [OIDC], the assertion (the ID Token) is presented in the front channel. In the SAML Web SSO profile defined in [SAML-WebSSO], the SAML assertion is presented in the front channel.\n\nFig. 12. Front-channel Presentation\n\n\n\nFront-channel presentation methods expose the assertion to parties other than the IdP and RP, which increases the risk of leakage of PII and other information included in the assertion. Additionally, there is an increased attack surface for the assertion to be captured and replayed by an attacker. As a consequence, it is recommended not to use front-channel presentation when other mechanisms are available.\n\nThe RP SHALL use the assertion identifier to ensure that a given assertion is presented at most once during the assertion’s validity time window.\n\nThe RP SHALL protect itself against injection of manufactured or captured assertions by the use of cross-site scripting protection, rejecting assertions outside of the correct stage of a federation transaction, or other accepted techniques discussed in Sec. 3.10.1.\n\nConveyance of the assertion from the IdP to the subscriber, as well as from the subscriber to the RP, SHALL be made over an authenticated protected channel.\n\nWith general-purpose IdPs, it is common for front-channel communications to be accomplished using HTTP redirects, where the contents of the assertion are made available as part of an HTTP request URL. Due to the nature of the HTTP ecosystem, these request URLs are sometimes available in unexpected places, such as access logs and browser history. These logs and other artifacts tend to live on long past the federation transaction and are available in other contexts, which increases the attack surface for reading the assertion. As a consequence, an IdP that uses HTTP redirects for front-channel presentation of assertions that contain PII SHALL encrypt the assertion as discussed in Sec. 3.12.3.\n"
} ,
{
"title" : "Subscriber-Controlled Wallets",
"category" : "",
"tags" : "",
"url" : "/800-63-4/sp800-63c/Wallets/",
"date" : "2024-08-28 20:39:12 -0500",
"content" : "Subscriber-Controlled Wallets\n\nThis section is normative.\n\nWhen the IdP runs on a device controlled by the subscriber, whether as a digital wallet or as a self-issued identity provider, the IdP is known as a subscriber-controlled wallet and the following requirements apply.\n\nSubscriber-controlled wallets SHALL require the presentation of an activation factor in order to perform any actions requiring the use of the wallet’s signing key, including onboarding of the wallet and release of attributes to an RP.\n\nWallet Activation\n\nThe subscriber-controlled wallet SHALL require presentation of an activation factor from the subscriber for the following actions:\n\n\n Providing proof of the signing key to the CSP during the provisioning process\n Signing the assertion for presentation to the RP\n\n\nThe subscriber-controlled wallet SHOULD require presentation of an activation factor before any other operations that involve use of the wallet’s signing keys. The wallet MAY request reissuance of previously-issued attribute bundles without requiring subscriber involvement.\n\nSubmission of the activation factor SHALL be a separate operation from the unlocking of the host device (e.g., smartphone), although the same activation factor used to unlock the host device MAY be used in the activation operation. Agencies MAY relax this requirement for subscriber-controlled wallets managed by or on behalf of the CSP (e.g., via mobile device management) that are constrained to have short (agency-determined) inactivity timeouts and device activation factors meeting the above requirements. Additional discussion of activation factors for authenticators is found in Sec. 3.2.10 of [SP800-63B].\n\nFederation Transaction\n\nA federation transaction with a subscriber-controlled wallet establishes the subscriber’s device as an IdP for the subscriber account and creates an authenticated session for the subscriber at the RP. The process is shown in Fig. 13.\n\nFig. 13. 
Federation with a Subscriber-Controlled Wallet\n\n\n\nA federation transaction with a subscriber-controlled wallet takes place over several steps:\n\n\n The CSP identity proofs the subscriber and creates a subscriber account.\n The CSP provisions the wallet to the subscriber account, which includes the subscriber verifying an authenticator in their subscriber account.\n The wallet receives a signed attribute bundle from the CSP, allowing the wallet to act as an IdP.\n The RP requests a federated authentication from the wallet, usually through subscriber action.\n The subscriber activates the wallet through an activation factor.\n The wallet creates an assertion based on the attribute bundles available to the wallet.\n The wallet presents the assertion to the RP.\n The RP validates the assertion.\n The RP creates an authenticated session for the subscriber.\n\n\nTrust Agreements\n\nThe trust agreement for a transaction involving a subscriber-controlled wallet SHALL be established between the RP and the CSP. The trust agreement MAY be facilitated through use of a federation authority, as described in Sec. 3.4.2.\n\nIn most cases, the RP does not have a direct trust relationship with the wallet (acting as IdP), but instead trusts the wallet transitively through the wallet’s established relationship with the CSP. This relationship can be verified by means of attribute bundles, as described in Sec. 3.11.1. 
Even though the wallet is not usually involved in the process of establishing the trust agreement, the trust agreement between the RP and CSP can still be accomplished in either an a priori or subscriber-driven fashion.\n\nThe trust agreement SHALL include the following:\n\n\n The set of subscriber attributes the CSP makes available to wallets in attribute bundles\n The set of subscriber attributes the wallet can make available to the RP\n The population of subscriber accounts that the CSP can represent\n The xALs available from the wallet\n\n\nThe release of subscriber attributes SHALL be governed by a runtime decision managed by the wallet, as described in Sec. 4.6.1.3. The authorized party SHALL be the subscriber.\n\nThe following terms SHALL be disclosed to the subscriber during the runtime decision:\n\n\n The set of subscriber attributes that the RP will request (a subset of the attributes made available)\n The purpose for each attribute requested by the RP\n The xALs required by the RP\n\n\nNote that all information disclosed to the subscriber needs to be conveyed in a manner that is understandable and actionable, as discussed in Sec. 8.\n\nIf FAL3 is allowed within the trust agreement and authenticators other than the wallet itself are allowed for use at FAL3, the trust agreement SHALL stipulate the following terms regarding holder-of-key assertions and bound authenticators (see Sec. 3.14 and Sec. 
3.15):\n\n\n Whether the wallet’s presentation is considered sufficient for holder-of-key assertion requirements\n The means by which non-wallet holder-of-key assertions can be verified by the RP (such as a common trusted PKI system)\n The means by which the RP can associate non-wallet holder-of-key assertions with specific RP subscriber accounts (such as attribute-based account resolution or pre-provisioning)\n Whether bound authenticators are supplied by the RP or by the subscriber\n Documentation of the binding ceremony used for any subscriber-provided bound authenticators\n\n\nProvisioning the Subscriber-Controlled Wallet\n\nWhen the CSP provisions the subscriber-controlled wallet, the process SHALL include the following steps:\n\n\n The subscriber authenticates to the CSP’s provisioning system using one or more authenticators bound to the subscriber account.\n The subscriber activates the wallet using an activation factor.\n The wallet proves possession of its signing key to the CSP.\n The CSP creates one or more attribute bundles that include subscriber attributes and the wallet’s signing key (or a reference to that key).\n The wallet stores the attribute bundle for later presentation to RPs.\n\n\nThe subscriber-controlled wallet MAY generate and use a different signing key for each provisioning request with the CSP.\n\nThe CSP SHALL create a unique attribute bundle for each requesting wallet.\n\nDeprovisioning the Subscriber-Controlled Wallet\n\nThe CSP SHALL provide a means of deprovisioning a subscriber-controlled wallet. The deprovisioning process is used when the subscriber account is terminated, thereby rendering downstream federation actions invalid, or when the wallet needs to be terminated due to the device being lost, stolen, or compromised.\n\nTo accomplish this, the CSP SHALL issue attribute bundles with a limited validity time window and SHALL issue attribute bundles specific to each wallet. 
The CSP SHOULD provide a means to independently verify the status of attribute bundles (i.e., whether a specific bundle has been revoked by the CSP). If such a service is offered, the service SHALL be deployed in a privacy-preserving way such that the CSP is not alerted to the use of a specific attribute bundle at a specific RP.\n\nDiscovery and Registration\n\nTo perform a federation transaction with a subscriber-controlled wallet, the RP SHALL first determine the attribute bundle signing public key of the CSP through a secure process as stated by the trust agreement. In some systems, this is accomplished by retrieving the CSP’s attribute bundle signing public keys from a URL known to be controlled by the CSP. In other systems, the RP is configured manually with the public key of the CSP before being deployed.\n\nThe RP learns the identifier and assertion signing public keys of the subscriber-controlled wallet as part of the attribute bundle signed by the CSP, presented in the federation transaction. The RP trusts the CSP’s onboarding process of the wallet to provide assurance that the public key being presented can be trusted to present the attribute bundle in question.\n\nThe RP also needs to register with the subscriber-controlled wallet. In most cases, this is expected to be a dynamic process in which the RP introduces its properties during the federation transaction. The nature of a subscriber-controlled wallet makes it difficult for any specific RP to pre-register with an instance of the wallet, but this use case can be facilitated through the use of a trusted third party stipulated in the trust agreement. For example, an ecosystem could have a centralized service for managing discovery and registration. When an RP joins the ecosystem, it registers itself with the trusted service, downloads the CSP’s public keys, and receives an identifier to use with wallets. 
When the wallet is onboarded by the CSP, the wallet is informed where it can find the list of valid RP identifiers within the ecosystem. When the RP connects to the wallet, the wallet can verify the RP’s identifier without the RP having to register itself directly with the wallet. Likewise, the RP can verify the wallet’s signing keys by the fact they are presented in an attribute bundle signed by the CSP’s public key, which had in turn been retrieved from the trusted third party.\n\nAuthentication and Attribute Disclosure\n\nThe decision of whether a federated authentication can occur or attributes may be passed SHALL be determined by the subscriber, acting in the role of the authorized party.\n\nThe subscriber-controlled wallet SHOULD provide a means to selectively disclose a subset of the attributes in the attribute bundle from the CSP.\n\nThe CSP SHALL provide effective mechanisms for redress of subscriber complaints or problems (e.g., subscriber identifies an inaccurate attribute value, or the need to deprovision a subscriber-controlled wallet). See Sec. 
3.4.3 for additional requirements and considerations for redress mechanisms.\n\nAssertion Requests\n\nWhen the federation transaction is initiated by the RP, the RP’s request for an assertion SHALL contain:\n\n\n An identifier for the RP\n A cryptographic nonce\n The set of identity attributes requested by the RP and their purpose of use at the RP\n\n\nNote that federation transactions are always initiated by the RP at FAL2 or higher.\n\nAssertion Contents\n\nAssertions from a subscriber-controlled wallet SHALL contain:\n\n\n A signed attribute bundle from the CSP.\n Subject identifier: An identifier for the party to which the assertion applies (i.e., the subscriber).\n Issuer identifier: An identifier for the issuer of the assertion (i.e., the subscriber-controlled wallet).\n Audience identifier: An identifier for the party intended to consume the assertion (i.e., the RP).\n Issuance time: A timestamp indicating when the wallet issued the assertion.\n Validity time window: A period of time outside of which the assertion SHALL NOT be accepted as valid by the RP for the purposes of authenticating the subscriber and starting an authenticated session at the RP. 
This is usually communicated by means of an expiration timestamp for the assertion in addition to the issuance timestamp.\n Assertion identifier: A value uniquely identifying this assertion, used to prevent attackers from replaying prior assertions.\n Authentication time: A timestamp indicating when the subscriber last used the wallet’s activation factor.\n Nonce: A cryptographic nonce, if one is provided by the RP.\n Signature: Digital signature using asymmetric cryptography, covering the entire assertion.\n\n\nAll assertions SHALL contain sufficient information to determine the following aspects of the federation transaction:\n\n\n The IAL of the subscriber account being represented in the assertion, or an indication that no IAL is asserted.\n The wallet’s intended FAL of the federation process represented by the assertion.\n\n\nAt FAL3, the assertion SHALL include one of the following:\n\n\n The public key, key identifier, or other identifier for a holder-of-key assertion. This MAY be the same key that the subscriber-controlled wallet uses to sign the assertion.\n An indicator that verification of a bound authenticator is required to process this assertion.\n\n\nThe signed attribute bundle from the CSP SHALL contain:\n\n\n A public key or key identifier for the key used by the subscriber-controlled wallet to sign the assertion\n Issuance time: A timestamp indicating when the CSP issued the attribute bundle.\n Validity time window: A period of time outside of which the attribute bundle SHALL NOT be accepted as valid by the RP for the purposes of authenticating the subscriber and starting an authenticated session at the RP. 
This is usually communicated by means of an expiration timestamp for the assertion in addition to the issuance timestamp.\n IAL: Indicator of the IAL of the subscriber account being represented in the attribute bundle, or an indication that no IAL is asserted.\n Signature: Digital signature using asymmetric cryptography, covering the entire attribute bundle.\n\n\nAdditional identity attributes and derived attribute values MAY be included in the attribute bundle. These attributes SHOULD be made available using a selective disclosure method, whereby the subscriber can, through their wallet software, determine which parts of the bundle to disclose to the RP.\n\nIdentity attributes in the assertion but outside of a signed attribute bundle SHALL be considered self-asserted. The RP MAY validate these additional attributes out of band.\n\nSubscriber-controlled wallets SHOULD use non-exportable key storage as discussed in Sec. 3.5.2.\n\nAssertion Presentation\n\nAssertions SHALL be presented to the RP through an authenticated protected channel.\n\nThe presentation SHALL include the cryptographic nonce from the RP’s request, if present. The RP SHALL verify the nonce in accordance with the federation protocol.\n\nIf the assertion contains PII, and the presentation mechanism passes the assertion through a component other than the wallet or RP, the assertion SHOULD be encrypted.\n\nThe RP SHALL protect itself against injection of manufactured or captured assertions by the use of cross-site scripting protection, rejecting assertions outside of the correct stage of a federation transaction, or other accepted techniques discussed in Sec. 3.10.1. 
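As a non-normative sketch of this chained trust, in which the RP learns the wallet's signing key from the attribute bundle signed by the CSP (see the discovery discussion above), the RP's two signature checks might look like the following. A keyed hash stands in for real asymmetric signatures, and all field names are illustrative assumptions:

```python
import hashlib
import hmac
import json

def toy_sign(payload, key):
    # Stand-in for an asymmetric signature; illustration only.
    msg = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(key.encode(), msg, hashlib.sha256).hexdigest()

def verify_wallet_assertion(assertion, csp_key):
    """Check the CSP's signature over the attribute bundle first, then
    the wallet's signature over the assertion, using the wallet key
    carried inside the already-verified bundle."""
    bundle = assertion["attribute_bundle"]
    if not hmac.compare_digest(toy_sign(bundle["claims"], csp_key),
                               bundle["signature"]):
        return False  # bundle was not issued by the CSP
    wallet_key = bundle["claims"]["wallet_signing_key"]
    if not hmac.compare_digest(toy_sign(assertion["claims"], wallet_key),
                               assertion["signature"]):
        return False  # assertion was not signed by the onboarded wallet
    return True
```

The issuer, time, audience, nonce, and transaction-term checks would follow only after both signatures verify.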
When possible, the IdP SHOULD use platform APIs instead of HTTP redirects when delivering an assertion to the RP.\n\nSince assertions from a subscriber-controlled wallet always contain a reference to the wallet’s signing key inside the signed attribute bundle from the CSP, assertions from subscriber-controlled wallets MAY be used as holder-of-key assertions to reach FAL3, as long as all other requirements in these guidelines are met. For additional requirements for holder-of-key assertions, see Sec. 3.14.\n\nAssertion Validation\n\nThe RP SHALL validate the signature on all signed attribute bundles in the assertion, using the cryptographic key from the CSP issuing the signed attribute bundle. The RP SHALL validate the signature of the assertion using the cryptographic key identified in the signed attribute bundle.\n\nThe RP SHALL validate the assertion by checking that all the following are true:\n\n\n Issuer verification: ensuring that the assertion was issued by the wallet the RP expects it to be from.\n Time validation: ensuring that the expiration and issue times are within acceptable limits of the current timestamp.\n Audience restriction: ensuring that this RP is the intended recipient of the assertion.\n Nonce: ensuring that the cryptographic nonce included in the RP’s request is included in the presentation.\n Transaction terms: ensuring that the IAL, AAL, and FAL represented by the assertion are allowable under the applicable trust agreement.\n\n\nAdditionally, the issuer MAY make available an online mechanism to determine the validity of a given attribute bundle, such as a status list queryable by the RP.\n\nRP Subscriber Accounts\n\nRP subscriber accounts SHALL be managed using a just-in-time or ephemeral provisioning model only (see Sec. 4.6.3). 
In each of these cases, the RP creates the RP subscriber account and associates it with the federated identifier only after successful validation of the assertion from the wallet.\n\nThe RP SHALL disclose its practices for management of subscriber information as part of the trust agreement. The RP SHALL provide effective means of redress to the subscriber for correcting and removing information from the RP subscriber account. See Sec. 3.4.3 for additional requirements and considerations for redress mechanisms.\n"
} ,
{
"title" : "Security",
"category" : "",
"tags" : "",
"url" : "/800-63-4/sp800-63c/security/",
"date" : "2024-08-28 20:39:12 -0500",
"content" : "Security\n\nThis section is informative.\n\nSince the federated authentication process involves coordination between multiple components, including the CSP, IdP, and RP, there are additional opportunities for attackers to compromise federated identity transactions and additional ramifications for successful attacks. This section summarizes many of the attacks and mitigations applicable to federation.\n\nFederation Threats\n\nAs in non-federated authentication, attackers’ motivations are typically to gain access (or a greater level of access) to a resource or service provided by an RP. Attackers may also attempt to impersonate a subscriber. Rogue or compromised IdPs, RPs, user agents (e.g., browsers), and parties outside of a typical federation transaction are potential attackers. To accomplish their attack, they might intercept or modify assertions and assertion references. Furthermore, two or more entities may attempt to subvert federation protocols by directly compromising the integrity or confidentiality of the assertion data. For the purpose of these types of threats, any authorized parties who attempt to exceed their privileges are considered attackers.\n\nIn federated systems, successful attacks on the IdP can propagate through to the RPs that rely on that IdP for identity and security information. As a consequence, an attack against the IdP targeting one agency’s RP could potentially proliferate to another agency’s RP. Additionally, since a single subscriber account is made available to multiple RPs in a federated system, there are potential limitations on the tailoring to proofing strategies and the visibility into the proofing process that an IdP can offer to different RPs. However, these terms can vary in the trust agreements with each RP, if the IdP is able to support different use cases for different subscriber account populations. 
Furthermore, while the IdP can disclose different attributes to each RP, the subscriber account will need to contain the union of all attributes available to all RPs. This practice limits the damage of attacks against RPs but in turn makes the IdP a more compelling target for attackers.\n\nTable 2. Federation Threats\n\n\n \n \n Federation Threats/Attacks\n Description\n Examples\n \n \n \n \n Assertion Manufacture or Modification\n The attacker generates a false assertion\n Compromised IdP asserts identity of a claimant who has not properly authenticated\n \n \n \n The attacker modifies an existing assertion\n Compromised proxy that changes AAL of an authentication assertion\n \n \n Assertion Disclosure\n Assertion visible to third party\n Network monitoring reveals subscriber address of record to an outside party\n \n \n Assertion Repudiation by the IdP\n IdP later claims not to have signed transaction\n User engages in fraudulent credit card transaction at RP, IdP claims not to have logged them in\n \n \n Assertion Repudiation by the Subscriber\n Subscriber claims not to have performed transaction\n User agreement (e.g., contract) cannot be enforced\n \n \n Assertion Redirect\n Assertion can be used in unintended context\n Compromised user agent passes assertion to attacker who uses it elsewhere\n \n \n Assertion Reuse\n Assertion can be used more than once with same RP\n Intercepted assertion used by attacker to authenticate their own session\n \n \n Assertion Substitution\n Attacker uses an assertion intended for a different subscriber\n Session hijacking attack between IdP and RP\n \n \n\n\nFederation Threat Mitigation Strategies\n\nMechanisms that assist in mitigating the above threats are identified in Table 3.\n\nTable 3. 
Mitigating Federation Threats\n\n\n \n \n Federation Threat/Attack\n Threat Mitigation Mechanisms\n Normative Reference(s)\n \n \n \n \n Assertion Manufacture or Modification\n Cryptographically sign the assertion at IdP and verify at RP\n 3.5, 3.12.2\n \n \n \n Send assertion over an authenticated protected channel authenticating the IdP\n 4.11\n \n \n \n Include a non-guessable random identifier in the assertion\n 3.12.1\n \n \n Assertion Disclosure\n Send assertion over an authenticated protected channel authenticating the RP\n 4.9, 5.8\n \n \n \n Encrypt assertion for a specific RP (may be accomplished by use of a mutually authenticated protected channel)\n 3.12.3\n \n \n Assertion Repudiation by the IdP\n Cryptographically sign the assertion at the IdP with a key that supports non-repudiation; verify signature at RP\n 3.12.2\n \n \n Assertion Repudiation by the Subscriber\n Issue holder-of-key assertions or assertions with bound authenticators; proof of possession of authenticator verifies subscriber’s participation to the RP\n 3.14 3.15\n \n \n Assertion Redirect\n Include identity of the RP (“audience”) for which the assertion is issued in its signed content; RP verifies that they are intended recipient\n \n \n \n Assertion Reuse\n Include an issuance timestamp with short validity period in the signed content of the assertion; RP verifies validity\n 4.9, 5.8\n \n \n \n RP keeps track of assertions consumed within a configurable time window to ensure that a given assertion is not used more than once.\n 3.12.1\n \n \n Assertion Substitution\n Ensure that assertions contain a reference to the assertion request or some other nonce that was cryptographically bound to the request by the RP\n 4.9, 5.8\n \n \n \n Send assertions in the same authenticated protected channel as the request, such as in the back-channel model\n 4.11.1\n \n \n\n\n"
} ,
{
"title" : "Privacy",
"category" : "",
"tags" : "",
"url" : "/800-63-4/sp800-63c/privacy/",
"date" : "2024-08-28 20:39:12 -0500",
"content" : "Privacy Considerations\n\nThis section is informative.\n\nMinimizing Tracking and Profiling\n\nFederation offers numerous benefits to RPs and subscribers, but it requires subscribers to have trust in the federation participants. Sec. 3 and Sec. 3.3.1 cover a number of technical requirements, the objective of which is to minimize privacy risks arising from increased capabilities to track and profile subscribers. For example, a subscriber using the same IdP to authenticate to multiple RPs allows the IdP to build a profile of subscriber transactions that would not have existed absent federation. The availability of such data makes it vulnerable to uses that may not be anticipated or desired by the subscriber and may inhibit subscriber adoption of federated services.\n\nSection 3.9 requires IdPs to use measures to maintain the objectives of predictability (enabling reliable assumptions by individuals, owners, and operators about PII and its processing by an information system) and manageability (providing the capability for granular administration of PII, including alteration, deletion, and selective disclosure) commensurate with privacy risks that can arise from the processing of attributes for purposes other than those listed in Sec. 3.9.1.\n\nIdPs may have various business purposes for processing attributes, including providing non-identity services to subscribers. However, processing attributes for different purposes from the original collection purpose can create privacy risks when individuals are not expecting or comfortable with the additional processing. IdPs can determine appropriate measures commensurate with the privacy risk arising from the additional processing. 
For example, absent applicable law, regulation, or policy, it may not be necessary to get consent when processing attributes to provide non-identity services requested by subscribers, although notices may help subscribers maintain reliable assumptions about the processing (e.g., predictability). Other processing of attributes may carry different privacy risks that call for obtaining consent or allowing subscribers more control over the use or disclosure of specific attributes (manageability). Subscriber consent needs to be meaningful; therefore, when IdPs do use consent measures, they cannot make acceptance by the subscriber of additional uses a condition of providing the identity service.\n\nWhen holder-of-key assertions are used at FAL3, the same authenticator is usually used at both the IdP and RP. With authenticators that can fulfill this technical requirement, it is likely that the same authenticator would further be used at multiple RPs. Furthermore, an unrelated RP could use the same authenticator for direct authentication. All such RPs would potentially be able to collude and disclose the use of the same authenticator across all parties in order to effect tracking of the subscriber through the network. This is true even if per-provider identifiers are used, as the bound authenticator is recognizable apart from the assertion. Additionally, many authenticators suitable for holder-of-key assertions contain identity attributes which are sent apart from the assertion or an identity API. 
These additional attributes have to be covered by the privacy risk assessment.\n\nConsult the SAOP if there are questions about whether the proposed processing falls outside the scope of the permitted processing or the appropriate privacy risk mitigation measures.\n\nSection 3.9 also encourages the use of technical measures to provide disassociability (enabling the processing of PII or events without association to individuals or devices beyond the operational requirements of the system) and prevent subscriber activity tracking and profiling [NISTIR8062]. Technical measures, such as those outlined in Sec. 3.2.3 for proxied federation and Sec. 3.3.1 for pairwise pseudonymous identifiers, can increase the effectiveness of policies by making it more difficult to track or profile subscribers beyond operational requirements. However, even these measures have their limitations and tracking can still occur based on subscriber attributes, statistical demographics, and other kinds of information shared between the IdP and RP.\n\nIn some use cases, especially at higher xALs, tracking the real-world identity of the subscriber is expected as a means of securing the system. It is the responsibility of the IdP and RP to inform and educate the subscriber about which pieces of information are transmitted, and allow the subscriber to review this information.\n\nNotice and Consent\n\nTo build subscriber trust in federation, subscribers need to be able to develop reliable assumptions about how their information is being processed. For instance, it can be helpful for subscribers to understand what information will be transmitted, which attributes for the transaction are required versus optional, and to have the ability to decide whether to transmit optional attributes to the RP. Accordingly, Sec. 
3.4 requires that positive confirmation be obtained from the authorized party before any attributes about the subscriber are transmitted to any RP.\n\nIn determining when a set of RPs should share a shared pairwise pseudonymous identifier as in Sec. 3.3.1.3, the trust agreement considers the subscriber’s understanding of such a grouping of RPs and provides a means for effective notice to the subscriber in assisting such understanding. An effective notice will take into account user experience design standards and research, as well as an assessment of privacy risks that may arise from the information processing. There are various factors to be considered, including the reliability of the assumptions subscribers may have about the processing and the role of different entities involved in federation. However, a link to a complex, legalistic privacy policy or general terms and conditions that a substantial number of subscribers do not read or understand is never an effective notice.\n\nSec. 3.4 does not specify which party should provide the notice. In some cases, a party in a federation may not have a direct connection to the subscriber in order to provide notice and obtain consent. Although multiple parties may elect to provide notice, it is permissible for parties to determine in advance, either contractually or through trust framework policies, which party will provide the notice and obtain confirmation, as long as the determination is based upon factors that center on enabling the subscriber to pay attention to the notice and make an informed choice.\n\nThe IdP is required to inform subscribers of all RPs that might access the subscriber’s attributes. If an RP is on an IdP’s allowlist as described in Sec. 4.6.1.1, the subscriber will not be prompted at runtime to consent to the release of their attributes. 
This single-sign-on scenario allows for a more seamless login experience for the subscriber, who might not even realize they are participating in a federation transaction. The IdP makes its list of allowlisted RPs available to the subscriber as part of the terms of the trust agreement. This information allows the subscriber to see which RPs might have access to their attributes, under what circumstances, and for what purposes.\n\nIf a subscriber’s runtime decisions at the IdP were stored in the subscriber account by the IdP to facilitate future transactions, the IdP also needs to allow the subscriber to view and revoke any RPs that were previously approved during a runtime decision. This list includes information on which attributes were approved and when the approval was recorded. Similarly, if a subscriber’s runtime decisions at the RP are stored in some fashion, the RP also needs to allow the subscriber to view and revoke any IdPs that were approved during a runtime decision.\n\nData Minimization\n\nFederation enables the data exposed to an RP to be minimized, which can yield privacy protections for subscribers. Although an IdP may collect additional attributes beyond what the RP requires for its use case, only those attributes that were explicitly requested by the RP are to be transmitted by the IdP. In some instances, an RP does not require a full value of an attribute. For example, an RP may need to know whether the subscriber is over 13 years old, but has no need for the full date of birth. To minimize the collection of potentially sensitive and unnecessary PII, the RP may request a derived attribute value (e.g., Question: Is the subscriber over 13 years old? Response: Y/N or Pass/Fail). Accordingly, Sec. 3.10.2 recommends that the RP, where feasible, request derived attribute values rather than full attribute values. 
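As a minimal sketch of this derived attribute pattern (function and parameter names are illustrative assumptions, not taken from the guideline), an IdP can answer the age predicate without ever releasing the date of birth:

```python
# Hypothetical sketch of a derived attribute value: the IdP answers a
# yes/no age predicate so the full date of birth never leaves the IdP.
from datetime import date

def is_over(birth_date: date, years: int, on: date) -> bool:
    '''Return True if the subscriber is at least `years` old on `on`.

    Sketch only: does not handle a Feb. 29 reference date.
    '''
    return birth_date <= on.replace(year=on.year - years)

# The RP receives only the boolean, never the birth date itself.
assert is_over(date(2005, 6, 1), 13, on=date(2024, 8, 28)) is True
assert is_over(date(2015, 6, 1), 13, on=date(2024, 8, 28)) is False
```

A real deployment would carry the predicate inside the assertion or identity API response rather than as a standalone function call.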
To support this, IdPs are in turn required to support derived attribute values.\n\nAgency-Specific Privacy Compliance\n\nSection 3.9 identifies agency requirements to consult their SAOP to determine privacy compliance requirements. It is critical to involve the agency’s SAOP in the earliest stages of digital authentication system development to assess and mitigate privacy risks and advise the agency on compliance obligations, such as whether the federation triggers the Privacy Act of 1974 or the E-Government Act of 2002 requirement to conduct a PIA. For example, if the agency is serving as an IdP in a federation, it is likely that the Privacy Act requirements will be triggered and require coverage by either a new or existing Privacy Act System of Records Notice, since credentials would be maintained at the IdP on behalf of any RP it federates with. If, however, the agency is an RP and using a third-party IdP, digital authentication may not trigger the requirements of the Privacy Act, depending on what data passed from the RP is maintained by the agency at the RP (in such instances the agency may have a broader programmatic SORN that covers such data).\n\nThe SAOP can similarly assist the agency in determining whether a PIA is required. These considerations should not be read as a requirement to develop a Privacy Act SORN or PIA for use of a federated credential alone. In many cases, it will make the most sense to draft a PIA and SORN that encompass the entire digital authentication process or include the digital authentication process as part of a larger programmatic PIA that discusses the program or benefit for which the agency is establishing online access.\n\nDue to the many components of digital authentication, it is important for the SAOP to have an awareness and understanding of each individual component. 
For example, other privacy artifacts may be applicable to an agency offering or using federated IdP or RP services, such as Data Use Agreements, Computer Matching Agreements, etc. The SAOP can assist the agency in determining what additional requirements apply. Moreover, a thorough understanding of the individual components of digital authentication will enable the SAOP to thoroughly assess and mitigate privacy risks either through compliance processes or by other means.\n\nBlinding in Proxied Federation\n\nWhile some proxy structures — typically those that exist primarily to simplify integration — may not offer additional subscriber privacy protection, others offer varying levels of privacy to the subscriber through a range of blinding technologies. Privacy policies may dictate appropriate use of the subscriber attributes and authentication transaction data (e.g., identities of the ultimate IdP and RP) by the IdP, RP, and the federation proxy.\n\nTechnical means such as blinding can increase effectiveness of these policies by making the data more difficult to obtain. A proxy-based system has three parties, and the proxy can be used to hide information from one or more of the parties, including itself. In a double-blind proxy, the IdP and RP do not know each other’s identities, and their relationship is only with the proxy. In a triple-blind proxy, the proxy additionally does not have insight into the data being passed through it. As the level of blinding increases, the technical and operational implementation complexity may increase. 
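One blinding measure mentioned earlier, the pairwise pseudonymous identifier, can be sketched as a keyed-hash derivation at the proxy (or IdP): the RP never sees the upstream subject identifier, and colluding RPs cannot correlate a subscriber by identifier alone. The key name and derivation format here are assumptions for illustration, not a prescribed construction:

```python
# Hypothetical sketch: derive a stable, RP-specific pseudonym from a
# secret key, so identifiers given to different RPs are unlinkable.
import hashlib
import hmac

PROXY_KEY = b'long-random-secret-held-only-by-the-proxy'

def pairwise_id(subject: str, rp_id: str) -> str:
    '''Derive a stable, RP-specific pseudonym for one subscriber.'''
    msg = f'{subject}|{rp_id}'.encode()
    return hmac.new(PROXY_KEY, msg, hashlib.sha256).hexdigest()

# Different RPs see unlinkable identifiers for the same subscriber,
# while each RP sees a stable identifier across its own sessions.
assert pairwise_id('subject-123', 'rp.example.com') != pairwise_id('subject-123', 'rp.example.org')
assert pairwise_id('subject-123', 'rp.example.com') == pairwise_id('subject-123', 'rp.example.com')
```

Note that this hides only the identifier; as the surrounding text observes, attributes and metadata passed through the proxy can still enable correlation.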
Since proxies need to map transactions to the appropriate parties on either side as well as manage the keys for all parties in the transaction, fully triple-blind proxies are very difficult to implement in practice.\n\nEven with the use of blinding technologies, a blinded party may still infer protected subscriber information through released attribute data or metadata, such as by analysis of timestamps, attribute bundle sizes, or attribute signer information. The IdP could consider additional privacy-enhancing approaches to reduce the risk of revealing identifying information of the entities participating in the federation.\n\nThe following table illustrates a spectrum of blinding implementations used in proxied federation. This table is intended to be illustrative, and is neither comprehensive nor technology-specific.\n\nTable 4. Proxy Characteristics\n\nProxy Type | RP knows IdP | IdP knows RP | Proxy can track subscriptions between RP and IdP | Proxy can see attributes of Subscriber\nNon-Blinding Proxy with Attributes | Yes | Yes | Yes | Yes\nNon-Blinding Proxy | Yes | Yes | Yes | N/A\nDouble Blind Proxy with Attributes | No | No | Yes | Yes\nDouble Blind Proxy | No | No | Yes | N/A\nTriple Blind Proxy with or without Attributes | No | No | No | No\n\n"
} ,
{
"title" : "Usability",
"category" : "",
"tags" : "",
"url" : "/800-63-4/sp800-63c/usability/",
"date" : "2024-08-28 20:39:12 -0500",
"content" : "Usability Considerations\n\nThis section is informative.\n\n\n In order to align with the standard terminology of user-centered design and usability, the term “user” is used throughout this section to refer to the human party. In most cases, the user in question will be the subject (in the role of applicant, claimant, or subscriber) as described elsewhere in these guidelines.\n\n\nErgonomics of Human-System Interaction — Part 11: Usability: Definitions and Concepts [ISO/IEC9241-11] defines usability as the “extent to which a system, product or service can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use.” This definition focuses on users, goals, and context of use as key elements necessary for achieving effectiveness, efficiency and satisfaction. A holistic approach considering these key elements is necessary to achieve usability.\n\nFrom the usability perspective, one of the major potential benefits of federated identity systems is to address the problem of user fatigue associated with managing multiple authenticators. While this has historically been a problem with usernames and passwords, the increasing need for users to manage many authenticators — whether physical or digital — presents a usability challenge.\n\nAs stated in Sec. 8 of [SP800-63A] and Sec. 8 of [SP800-63B], overall user experience is critical to the success of digital identity systems. This is especially true for federated identity systems, as federation is a less familiar user interaction paradigm for many users. Users’ prior authentication experiences may influence their expectations.\n\nThe overall user experience with federated identity systems should be as smooth and easy as possible. 
This can be accomplished by following usability standards (such as the ISO 25060 series of standards) and established best practices for user interaction design.\n\nGuidelines and considerations are described from the users’ perspective.\n\nSection 508 of the Rehabilitation Act of 1973 [Section508] was enacted to eliminate barriers in information technology and require federal agencies to make electronic and information technology accessible to people with disabilities. While these guidelines do not directly assert requirements from Section 508, identity service providers are expected to comply with Section 508 provisions. Beyond compliance with Section 508, Federal Agencies and their service providers are generally expected to design services and systems with the experiences of people with disabilities in mind to ensure that accessibility is prioritized throughout identity system lifecycles.\n\nGeneral Usability Considerations\n\nFederated identity systems should:\n\n\n Minimize user burden (e.g., frustration, learning curve)\n \n Minimize the number of user actions required.\n Allow users to quickly and easily select among multiple subscriber accounts with a single IdP. For example, approaches such as Account Chooser allow users to select from a list of subscriber accounts they have accessed in the recent past, rather than start the federation process by selecting their IdP from a list of potential IdPs.\n Balance minimizing user burden with the need to provide sufficient information to enable users to make informed decisions.\n \n \n \n Minimize the use of unfamiliar technical jargon and details (e.g., users do not need to know the terms IdP and RP if the basic concepts are clearly explained).\n \n \n Strive for a consistent and integrated user experience across the IdP and RP.\n \n \n Help users establish an understanding of identity by providing resources to users such as graphics, illustrations, FAQs, tutorials and examples. 
Resources should explain how users’ information is treated and how transacting parties (e.g., RPs, IdPs, and brokers) relate to each other.\n \n \n Provide clear, honest, and meaningful communications to users (i.e., communications should be explicit and easy to understand).\n \n \n Provide users online services independent of location and device.\n \n \n Make trust relationships explicit to users to facilitate informed trust decisions. Trust relationships are often dynamic and context dependent. Users may be more likely to trust some IdPs and RPs with certain attributes or transactions more than others. For example, users may be more hesitant to use federated identity systems on websites that contain valuable personal information (such as financial or health). Depending on the perceived sensitivity of users’ personal information, users may be less comfortable with commercial entities as IdPs since people often have concerns about advertising and data-usage of such companies. Conversely, some may have more confidence in commercial IdPs than government IdPs based on their historical interactions with government services. Either way, it is critical to be clear to end-users on the entities involved in a federation transaction and, ideally, provide options that support the broadest set of stakeholder perceptions possible.\n \n \n Follow the usability considerations specified in [SP800-63A] Sec. 8 for any user-facing information.\n \n \n Clearly communicate how and where to acquire technical assistance. For example, provide users with information such as a link to an online self-service feature, chat sessions or a phone number for help desk support. 
Avoid redirecting users back and forth among transacting parties (e.g., RPs, IdPs, and brokers) to receive technical assistance.\n \n Perform integrative and continuous usability evaluations with representative users and realistic tasks in an appropriate context to ensure success of federated identity systems from the users’ perspectives.\n\n\nSpecific Usability Considerations\n\nThis section addresses the specific usability considerations that have been identified with federated identity systems. This section does not attempt to present exhaustive coverage of all usability factors related to federated identity systems. Rather, it is focused on the larger, more pervasive themes in the usability literature, primarily users’ perspectives on identity, user adoption, trust, and perceptions of federated identity space. In some cases, implementation examples are provided. However, specific solutions are not prescribed. The implementations mentioned are examples to encourage innovative technological approaches to address specific usability needs. See standards for system design and coding, specifications, APIs, and current best practices (such as OpenID and OAuth) for additional examples. Implementations are sensitive to many factors that prevent a one-size-fits-all solution.\n\nUser Perspectives on Online Identity\n\nEven when users are familiar with federated identity systems, there are different approaches to federated identity (especially in terms of privacy and the sharing of information) that make it necessary to establish reliable expectations for how users’ data are treated. Users and implementers have different concepts of identity. Users think of identity as logging in and gaining access to their own private space. Implementers think of identity in terms of authenticators and assertions, assurance levels, and the necessary set of identity attributes to provide a service. 
Given this disconnect between users’ and implementers’ concepts of identity, it is essential to help users form an accurate concept of identity as it applies to federated identity systems. A good model of identity provides users a foundation for understanding the benefits and risks of federated systems and encourages user adoption and trust of these systems.\n\nTo minimize the personal information collected and protect privacy, IdPs ought to provide users with pseudonymous options for providing data to RPs, where possible, and inform users of the benefits and drawbacks of pseudonymous identification.\nLikewise, RPs ought to request pseudonymous options for users when pseudonymity is permissible under the RP’s policy. Both IdPs and RPs need to seek to minimize unnecessary data transmission and inform users of which information is transmitted and for what purpose.\n\nMany properties of identity have implications for how users manage identities, both within and among federations. Just as users manage multiple identities based on context outside of cyberspace, users must learn to manage their identity in a federated environment. Therefore, it must be clear to users how identity and context are used. The following factors should be considered:\n\n\n \n Provide users the requisite context and scope in order to distinguish among different user roles. For example, whether the user is acting on their own behalf or on behalf of another, such as their employer.\n \n \n Provide users unique, meaningful, and descriptive identifiers to distinguish among entities such as IdPs, RPs, and accounts. Any such user-facing identifiers are likely to be in addition to identifiers used by the underlying protocols, which are not normally exposed to the user.\n \n \n Provide users with information on data ownership and those authorized to make changes. Identities, and the data associated with them, can sometimes be updated and changed by multiple actors. 
For example, some healthcare data is updated and owned by the patient, while some data is only updated by a hospital or doctor’s practice.\n \n \n Provide users with the ability to easily verify, view, and update attributes. Identities and user roles are dynamic and not static; they change over time (e.g., age, health, and financial data). The ability to update attributes or make attribute release decisions may or may not be offered at the same time. Ensure the process for how users can change attributes is well known, documented, and easy to perform.\n \n \n Provide users means for updating data, even if the associated subscriber account or RP subscriber account no longer exists. Consider applicable audit, legal, or policy constraints for needs to track updated data.\n \n \n Provide users means to delete their identities completely, removing all information about themselves, including transaction history. Consider applicable audit, legal, or policy constraints that may preclude such action. In certain cases, full deactivation is more appropriate than deletion.\n \n \n Provide users with clear, easy-to-find, site/application data retention policy information.\n \n \n Provide users with appropriate anonymity and pseudonymity options, and the ability to switch among such identity options as desired, in accordance with an organization’s data access policies.\n \n \n Provide a means for users to manage each IdP to RP connection, including complete separation as well as the removal of RP access to one or more attributes.\n \n\n\nUser Perspectives of Trust and Benefits\n\nMany factors can influence user adoption of federated identity systems. As with any technology, users may value some factors more than others. Users often weigh perceived benefits versus risks before making technology adoption decisions. It is critical that IdPs and RPs provide users with sufficient information to enable them to make informed decisions. 
The concepts of trust and tiers of trust — fundamental principles in federated identity systems — can drive user adoption. Finally, a positive user experience may also result in increased user demand for federation, triggering increased adoption by RPs.\n\nThis sub-section is focused primarily on user trust and user perceptions of benefits versus risks.\n\nTo encourage user adoption, IdPs and RPs need to establish and build trust with users and provide them with an understanding of the benefits and risks of adoption. The following factors should be considered:\n\n\n \n Allow users to control their information disclosure and provide explicit consent through the appropriate use of interactive user interfaces and notifications (see Sec. 7.2). Considerations such as balancing the content, size, and frequency of notifications as well as tailoring notifications to specific communities are necessary to avoid thoughtless user click-through.\n \n For attribute sharing, consider the following:\n \n Provide a means for users to verify those attributes and attribute values that will be shared. Follow good security practices (see Sec. 3.10.2 and Sec. 6).\n Enable users to consent to a partial list of attributes, rather than an all-or-nothing approach. Allow users some degree of online access, even if the user does not consent to share all information.\n Allow users to update their consent to their list of shared attributes.\n Minimize unnecessary information presented to users. For example, do not display system generated attributes (such as pairwise pseudonymous identifiers) even if they are shared with the RP as part of the authentication response.\n Minimize user steps and navigation. For example, build attribute consent into the protocols so they’re not a feature external to the federation transaction. 
Examples can be found in standards such as OAuth or OpenID Connect.\n Provide effective redress methods such that a user can recover from invalid attribute information claimed by the IdP or collected by the RP. See Sec. 3.6 of [SP800-63] for more requirements on providing redress.\n Minimize the number of times a user is required to consent to attribute sharing. Limiting the frequency of consent requests avoids user frustration from multiple requests to share the same attribute.\n \n \n \n Collect information for constrained usage only and minimize information disclosure (see Sec. 7.3). User trust is eroded by unnecessary and superfluous information collection and disclosure or user tracking without explicit user consent. For example, only request attributes from the user that are relevant to the current transaction, not for all possible transactions a user may or may not access at the RP.\n \n Clearly and honestly communicate potential benefits and risks of using federated identity to users. Benefits that users value include time savings, ease of use, reduced number of passwords to manage, and increased convenience.\n\n\nUser concern over risk can negatively influence willingness to adopt federated identity systems. Users may have trust concerns, privacy concerns, security concerns, and single-point-of-failure concerns. For example, users may be fearful of losing access to multiple RPs if a single IdP is unavailable, either temporarily or permanently. Additionally, users may be concerned or confused about learning a new authentication process. In order to foster the adoption of federated identity systems, the perceived benefits must outweigh the perceived risks.\n\nUser Mental Models and Beliefs\n\nUsers’ beliefs and perceptions predispose them to expect certain results and to behave in certain ways. Such beliefs, perceptions, and predispositions are referred to in the social sciences as mental models. 
For example, people have a mental model of dining out that guides their behavior and expectations at each establishment, such as fast food restaurants, cafeterias, and more formal restaurants. Thus, it is not necessary to be familiar with every establishment to understand how to interact appropriately at each one.\n\nAssisting users in establishing good and complete mental models of federation allows users to generalize beyond a single specific implementation. If federated identity systems are not designed from users’ perspectives, users may form incorrect or incomplete mental models that impact their willingness to adopt these systems. The following factors should be considered:\n\n\n Clearly explain the working relationship and information flow among the transacting parties (e.g., RPs, IdPs, and proxies) to avoid user misconceptions. Use the actual names of the entities in the explanation rather than using the generic terms IdPs and RPs.\n \n Provide prominent visual cues and information so that users understand why seemingly unrelated entities have a working relationship. For example, users may be concerned with mixing online personal activities with government services due to a lack of understanding of the information flow in federated identity systems.\n Provide prominent visual cues and information to users about redirection when an RP needs to redirect control from their site to an IdP. For example, display RP branding within the IdP user interface to inform users when they are logging in with their IdP for access to the destination RP.\n \n \n \n Provide users with clear and usable ways (e.g., visual assurance) to determine the authenticity of the transacting parties (e.g., RPs, IdPs, and proxies). This will also help to alleviate user concern over leaving one domain for another, especially if the root domain changes (e.g., .gov to .com). 
For example, display the URL of the IdP so that the user can verify that they are not being phished by a malicious site.\n \n Provide users with clear information, including visual cues, regarding logins and logouts. Depending on the implementation, logging into an RP with a federated account can create long-running sessions for the user at both the IdP and RP. Users may not realize that ending their session with the RP will not necessarily end their session with the IdP; users will need to explicitly “log out” of the IdP. Users require clear information to remind them if explicit logouts are required to end their IdP sessions. Both the IdP and RP could also have automated logout features, based on time since authentication or an activity timeout. Users require clear information about when their session might end without any action on their part, in order to avoid frustration, lost work, or insecure workarounds like copying data out of a secure site in order to avoid an unexpected session timeout.\n\n"
} ,
{
"title" : "Equity Considerations",
"category" : "",
"tags" : "",
"url" : "/800-63-4/sp800-63c/equity/",
"date" : "2024-08-28 20:39:12 -0500",
"content" : "Equity Considerations\n\nThis section is informative.\n\nEquitable access to the functions of IdPs and RPs is an essential element of a federated identity system. The ability for all subscribers to authenticate reliably is required to provide equitable access to government services, even when using federation technology, as specified in Executive Order 13985, Advancing Racial Equity and Support for Underserved Communities Through the Federal Government [EO13985]. In assessing equity risks, IdPs and RPs should consider the overall user population served by their federated identity service. Additionally, IdPs and RPs should further identify groups of users within the population whose shared characteristics can cause them to be subject to inequitable access, treatment, or outcomes when using that service. The Usability Considerations provided in Sec. 8 should also be considered to help ensure the overall usability and equity for all persons using federated identity services.\n\nIn its role as the verifier, the IdP needs to be aware of equity considerations related to identity proofing, attribute validation, and enrollment as enumerated in [SP800-63A] Sec. 9 and equity considerations concerning authenticators as enumerated in [SP800-63B] Sec. 9. 
An RP offering FAL3 will also need to be aware of these same authenticator considerations when processing bound authenticators and holder-of-key assertions.\n\nSince the federation process takes place over a network protocol between multiple active parties, the experience of authenticating using the federation system may present equity problems, such as the following examples:\n\n\n Completing the entire federation transaction without timing out may be difficult for subscribers without a reliable network connection, such as those in rural areas.\n It may be difficult to provide informed consent for a runtime decision regarding the release of attributes for subscribers with intellectual, developmental, learning, or neurocognitive difficulties.\n Systems with sufficient processing power, network access, and other features required to interact with both the IdP and the RP simultaneously may be too costly or beyond some subscribers’ technological skill to access or use.\n Subscribers that share devices may find allowlist-based systems difficult to manage securely, as other users of the device could silently gain unintended access to an RP through a session still active at the IdP.\n It could be prohibitively difficult to re-establish an account at the RP for subscribers who lose access to their IdP for any of a variety of reasons.\n\n\nAdditionally, subscribers in disadvantaged populations could be more susceptible to monitoring and tracking through federation systems, as discussed in Sec. 7.\nIf the IdP knows the subscriber is part of a disadvantaged population, the IdP could specifically target the subscriber by profiling them and their access to the set of RPs, and use the data gathered against the subscriber.\nAlternatively, the IdP could learn that the subscriber is part of a disadvantaged population by watching the RP connections. 
For example, if the IdP sees that the subscriber logs into social services, the IdP has learned things about the subscriber’s socioeconomic status that were not disclosed to the IdP. The IdP could then use this to unfairly target the subscriber and provide a lower quality of service.\nAdditionally, subscribers in disadvantaged populations are at a greater risk of having their data correlated between a set of colluding RPs. For example, a set of RPs could share subscriber attributes and behavior among them in order to justify denial of the RP’s services to the subscriber.\nAs such, IdPs and RPs are encouraged to use privacy-enhancing techniques equally across subscriber populations.\n\nWhen consent dialogs and notifications are sent to users, the content of these should be tailored to different subscriber populations in order to facilitate subscriber understanding and avoid thoughtless click-through.\n\nIdPs are required to disclose the method of proofing used for each subscriber as recorded in the subscriber account. This includes all available forms of proofing and exception processes, and possibly compensating controls, as defined in the trust agreement. IdPs and CSPs should not single out subscribers who have had to make use of exception handling or compensating controls beyond the proofing information contained in their subscriber account, to avoid biased processing against certain subscriber populations.\n\nSince federation transactions are intended to cross security domain boundaries, discrepancies between the interests of the IdP and the RP could pose additional considerations. This difference in requirements has to be addressed in the trust agreement that governs the connection between these parties, and practices such as transparent reporting can help address some forms of disparities. Furthermore, the availability of alternative IdPs (for the RP) and RPs (for the IdP) for a given service can help enhance the equity of the system overall. 
For example, in a public-private partnership, if a private IdP is used to access a federal RP, or a federal IdP is used to access a private RP, the public and private systems could be driven by different motivations and bound by different requirements in terms of equity, accessibility, and transparency.\n\nNormative requirements have been established requiring IdPs and RPs to mitigate the problems in this area that are expected to be most common. However, normative requirements are unlikely to have anticipated all potential equity problems. Potential equity problems also will vary for different applications. Accordingly, IdPs and RPs need to provide mechanisms for subscribers to report inequitable authentication requirements and to advise them on potential alternative authentication strategies.\n\nThis guideline allows the binding of additional federated identifiers to an RP subscriber account to minimize the risk of IdP access loss (see Sec. 3.7). However, a subscriber might find it difficult to have multiple IdP accounts that are acceptable to the RP at the same time. This inequity can be addressed by the RP having its own account recovery process that allows for the secure linking of multiple federated identifiers to the RP subscriber account.\n\nRPs need to be aware that not all subscribers will necessarily have access to the same IdPs. The RPs can institute locally authenticated accounts for such subscribers, and later allow binding of those accounts to federated identifiers.\n"
} ,
{
"title" : "List of Symbols, Abbreviations, and Acronyms",
"category" : "",
"tags" : "",
"url" : "/800-63-4/sp800-63c/abbreviations/",
"date" : "2024-08-28 20:39:12 -0500",
"content" : "List of Symbols, Abbreviations, and Acronyms\n\n\n 1:1 Comparison\n One-to-One Comparison\n ABAC\n Attribute-Based Access Control\n AAL\n Authentication Assurance Level\n CAPTCHA\n Completely Automated Public Turing test to tell Computers and Humans Apart\n CSP\n Credential Service Provider\n CSRF\n Cross-Site Request Forgery\n DNS\n Domain Name System\n FAL\n Federation Assurance Level\n FEDRAMP\n Federal Risk and Authorization Management Program\n IAL\n Identity Assurance Level\n IdP\n Identity Provider\n JOSE\n JSON Object Signing and Encryption\n JWT\n JSON Web Token\n MAC\n Message Authentication Code\n PIA\n Privacy Impact Assessment\n PII\n Personally Identifiable Information\n PIN\n Personal Identification Number\n PKI\n Public Key Infrastructure\n PPI\n Pairwise Pseudonymous Identifier\n RMF\n Risk Management Framework\n RP\n Relying Party\n SAML\n Security Assertion Markup Language\n SAOP\n Senior Agency Official for Privacy\n SCIM\n System for Cross-domain Identity Management\n SORN\n System of Records Notice\n TLS\n Transport Layer Security\n XSS\n Cross-Site Scripting\n\n"
} ,
{
"title" : "SP 800-63C",
"category" : "",
"tags" : "",
"url" : "/800-63-4/sp800-63c/abstract/",
"date" : "2024-08-28 20:39:12 -0500",
"content" : "\nABSTRACT\n\nThis guideline focuses on the use of federated identity and the use of assertions to implement identity federations. Federation allows a given credential service provider to provide authentication attributes and (optionally) subscriber attributes to a number of separately-administered relying parties. Similarly, relying parties may use more than one credential service provider. The guidelines are not intended to constrain the development or use of standards outside of this purpose. This publication supersedes NIST Special Publication (SP) 800-63C.\n\nKeywords\n\nassertions; authentication; credential service provider; digital authentication; electronic authentication; electronic credentials; federations.\n"
} ,
{
"title" : "Changelog",
"category" : "",
"tags" : "",
"url" : "/800-63-4/sp800-63c/changelog/",
"date" : "2024-08-28 20:39:12 -0500",
"content" : "Changelog\n\nThis appendix is informative. It provides an overview of the changes to SP 800-63C since its initial release.\n\n\n \n Added discussion of equity considerations and requirements.\n \n \n Established trust agreements and registration/discovery (key establishment) as discrete steps in the federation process.\n \n \n All FALs have requirements around establishment of trust agreements and registration.\n \n \n FAL definitions no longer have encryption requirements; encryption is triggered by passing PII in an assertion through an untrusted party regardless of FAL.\n \n \n FAL2 requires injection protection.\n \n \n FAL3 allows more general bound authenticators including RP-managed authenticators, in addition to classical holder-of-key assertions.\n \n \n Communication of IAL/AAL/FAL required.\n \n \n Updated language to be more inclusive.\n \n \n Added definition and discussion of RP subscriber accounts.\n \n \n Added attribute provisioning models and discussion.\n \n \n Subscriber-controlled wallet model added, with specific requirements separated from general-purpose IdPs.\n \n \n Restructured core document sections to address common, general-purpose, and subscriber-controlled wallet requirements in separate sections.\n \n \n Redress requirements for IdPs and RPs added.\n \n \n Enterprise and dynamic use cases added throughout, with explicit examples.\n \n\n"
} ,
{
"title" : "Glossary",
"category" : "",
"tags" : "",
"url" : "/800-63-4/sp800-63c/glossary/",
"date" : "2024-08-28 20:39:12 -0500",
"content" : "Glossary\n\nA wide variety of terms are used in the realm of digital identity. While many definitions are consistent with earlier versions of SP 800-63, some have changed in this revision. Many of these terms lack a single, consistent definition, warranting careful attention to how the terms are defined here.\n\n\n account linking\n The association of multiple federated identifiers with a single RP subscriber account, or the management of those associations.\n account resolution\n The association of an RP subscriber account with information already held by the RP prior to the federation transaction and outside of a trust agreement.\n activation factor\n An additional authentication factor that is used to enable successful authentication with a multi-factor authenticator.\n allowlist\n A documented list of specific elements that are allowed, per policy decision. In federation contexts, this is most commonly used to refer to the list of RPs allowed to connect to an IdP without subscriber intervention. This concept has historically been known as a whitelist.\n approved cryptography\n An encryption algorithm, hash function, random bit generator, or similar technique that is Federal Information Processing Standard (FIPS)-approved or NIST-recommended. Approved algorithms and techniques are either specified or adopted in a FIPS or NIST recommendation.\n assertion\n A statement from an IdP to an RP that contains information about an authentication event for a subscriber. 
Assertions can also contain identity attributes for the subscriber.\n assertion reference\n A data object, created in conjunction with an assertion, that is used by the RP to retrieve an assertion over an authenticated protected channel.\n assertion presentation\n The method by which an assertion is transmitted to the RP.\n asymmetric keys\n Two related keys, comprised of a public key and a private key, that are used to perform complementary operations such as encryption and decryption or signature verification and generation.\n attribute\n A quality or characteristic ascribed to someone or something. An identity attribute is an attribute about the identity of a subscriber.\n attribute bundle\n A package of attribute values and derived attribute values from a CSP. The package has necessary cryptographic protection to allow validation of the bundle independent from interaction with the CSP or IdP. Attribute bundles are often used with subscriber-controlled wallets.\n attribute provider\n The provider of an identity API that provides access to a subscriber’s attributes without necessarily asserting that the subscriber is present to the RP.\n attribute value\n A complete statement that asserts an identity attribute of a subscriber, independent of format. For example, for the attribute “birthday,” a value could be “12/1/1980” or “December 1, 1980.”\n audience restriction\n The restriction of a message to a specific target audience to prevent a receiver from unknowingly processing a message intended for another recipient. In federation protocols, assertions are audience restricted to specific RPs to prevent an RP from accepting an assertion generated for a different RP.\n authenticate\n See authentication.\n authenticated protected channel\n An encrypted communication channel that uses approved cryptography where the connection initiator (client) has authenticated the recipient (server). 
Authenticated protected channels are encrypted to provide confidentiality and protection against active intermediaries and are frequently used in the user authentication process. Transport Layer Security (TLS) and Datagram Transport Layer Security (DTLS) [RFC9325] are examples of authenticated protected channels in which the certificate presented by the recipient is verified by the initiator. Unless otherwise specified, authenticated protected channels do not require the server to authenticate the client. Authentication of the server is often accomplished through a certificate chain that leads to a trusted root rather than individually with each server.\n authenticated session\n See protected session.\n authentication\n The process by which a claimant proves possession and control of one or more authenticators bound to a subscriber account to demonstrate that they are the subscriber associated with that account.\n Authentication Assurance Level (AAL)\n A category describing the strength of the authentication process.\n authenticator\n Something that the subscriber possesses and controls (e.g., a cryptographic module or password) and that is used to authenticate a claimant’s identity. See authenticator type and multi-factor authenticator.\n authenticator binding\n The establishment of an association between a specific authenticator and a subscriber account that allows the authenticator to be used to authenticate for that subscriber account, possibly in conjunction with other authenticators.\n authorize\n A decision to grant access, typically automated by evaluating a subject’s attributes.\n authorized party\n In federation, the organization, person, or entity that is responsible for making decisions regarding the release of information within the federation transaction, most notably subscriber attributes. 
This is often the subscriber (when runtime decisions are used) or the party operating the IdP (when allowlists are used).\n back-channel communication\n Communication between two systems that relies on a direct connection without using redirects through an intermediary such as a browser.\n bearer assertion\n An assertion that can be presented on its own as proof of the identity of the presenter.\n blocklist\n A documented list of specific elements that are blocked, per policy decision. This concept has historically been known as a blacklist.\n challenge-response protocol\n An authentication protocol in which the verifier sends the claimant a challenge (e.g., a random value or nonce) that the claimant combines with a secret (e.g., by hashing the challenge and a shared secret together or by applying a private-key operation to the challenge) to generate a response that is sent to the verifier. The verifier can independently verify the response generated by the claimant (e.g., by re-computing the hash of the challenge and the shared secret and comparing to the response or performing a public-key operation on the response) and establish that the claimant possesses and controls the secret.\n core attributes\n The set of identity attributes that the CSP has determined and documented to be required for identity proofing.\n credential service provider (CSP)\n A trusted entity whose functions include identity proofing applicants to the identity service and registering authenticators to subscriber accounts. 
A CSP may be an independent third party.\n cross-site request forgery (CSRF)\n An attack in which a subscriber who is currently authenticated to an RP and connected through a secure session browses an attacker’s website, causing the subscriber to unknowingly invoke unwanted actions at the RP.\n\n For example, if a bank website is vulnerable to a CSRF attack, it may be possible for a subscriber to unintentionally authorize a large money transfer by clicking on a malicious link in an email while a connection to the bank is open in another browser window.\n \n cross-site scripting (XSS)\n A vulnerability that allows attackers to inject malicious code into an otherwise benign website. These scripts acquire the permissions of scripts generated by the target website to compromise the confidentiality and integrity of data transfers between the website and clients. Websites are vulnerable if they display user-supplied data from requests or forms without sanitizing the data so that it is not executable.\n derived attribute value\n A statement that asserts a limited identity attribute of a subscriber without containing the attribute value from which it is derived, independent of format. For example, instead of requesting the attribute “birthday,” a derived value could be “older than 18”. Instead of requesting the attribute for “physical address,” a derived value could be “currently residing in this district.” Previous versions of these guidelines referred to this construct as an “attribute reference.”\n digital identity\n An attribute or set of attributes that uniquely describes a subject within a given context.\n digital signature\n An asymmetric key operation in which the private key is used to digitally sign data and the public key is used to verify the signature. 
Digital signatures provide authenticity protection, integrity protection, and non-repudiation support but not confidentiality or replay attack protection.\n disassociability\n Enabling the processing of PII or events without association to individuals or devices beyond the operational requirements of the system. [NISTIR8062]\n entropy\n The amount of uncertainty that an attacker faces to determine the value of a secret. Entropy is usually stated in bits. A value with n bits of entropy has the same degree of uncertainty as a uniformly distributed n-bit random value.\n equity\n The consistent and systematic fair, just, and impartial treatment of all individuals, including individuals who belong to underserved communities that have been denied such treatment, such as Black, Latino, and Indigenous and Native American persons, Asian Americans and Pacific Islanders, and other persons of color; members of religious minorities; lesbian, gay, bisexual, transgender, and queer (LGBTQ+) persons; persons with disabilities; persons who live in rural areas; and persons otherwise adversely affected by persistent poverty or inequality. [EO13985]\n Federal Information Processing Standard (FIPS)\n Under the Information Technology Management Reform Act (Public Law 104-106), the Secretary of Commerce approves the standards and guidelines that the National Institute of Standards and Technology (NIST) develops for federal computer systems. NIST issues these standards and guidelines as Federal Information Processing Standards (FIPS) for government-wide use. NIST develops FIPS when there are compelling federal government requirements, such as for security and interoperability, and there are no acceptable industry standards or solutions. 
See background information for more details.\n\n FIPS documents are available online on the FIPS home page: https://www.nist.gov/itl/fips.cfm\n \n federated identifier\n The combination of a subject identifier within an assertion and an identifier for the IdP that issued that assertion. When combined, these pieces of information uniquely identify the subscriber in the context of a federation transaction.\n federation\n A process that allows for the conveyance of identity and authentication information across a set of networked systems.\n Federation Assurance Level (FAL)\n A category that describes the process used in a federation transaction to communicate authentication events and subscriber attributes to an RP.\n federation protocol\n A technical protocol that is used in a federation transaction between networked systems.\n federation proxy\n A component that acts as a logical RP to a set of IdPs and a logical IdP to a set of RPs, bridging the two systems with a single component. These are sometimes referred to as “brokers.”\n federation transaction\n A specific instance of processing an authentication using a federation process for a specific subscriber by conveying an assertion from an IdP to an RP.\n front-channel communication\n Communication between two systems that relies on passing messages through an intermediary, such as using redirects through the subscriber’s browser.\n hash function\n A function that maps a bit string of arbitrary length to a fixed-length bit string. 
Approved hash functions satisfy the following properties:\n\n \n \n One-way — It is computationally infeasible to find any input that maps to any pre-specified output.\n \n \n Collision-resistant — It is computationally infeasible to find any two distinct inputs that map to the same output.\n \n \n \n identifier\n A data object that is associated with a single, unique entity (e.g., individual, device, or session) within a given context and is never assigned to any other entity within that context.\n identity\n See digital identity\n identity API\n A protected API accessed by an RP to access the attributes of a specific subscriber.\n Identity Assurance Level (IAL)\n A category that conveys the degree of confidence that the subject’s claimed identity is their real identity.\n identity provider (IdP)\n The party in a federation transaction that creates an assertion for the subscriber and transmits the assertion to the RP.\n injection attack\n An attack in which an attacker supplies untrusted input to a program. In the context of federation, the attacker presents an untrusted assertion or assertion reference to the RP in order to create an authenticated session with the RP.\n login\n Establishment of an authenticated session between a person and a system. Also known as “sign in”, “log on”, and “sign on.”\n message authentication code (MAC)\n A cryptographic checksum on data that uses a symmetric key to detect both accidental and intentional modifications of the data. MACs provide authenticity and integrity protection, but not non-repudiation protection.\n network\n An open communications medium, typically the Internet, used to transport messages between the claimant and other parties. 
Unless otherwise stated, no assumptions are made about the network’s security; it is assumed to be open and subject to active (e.g., impersonation, session hijacking) and passive (e.g., eavesdropping) attacks at any point between the parties (e.g., claimant, verifier, CSP, RP).\n nonce\n A value used in security protocols that is never repeated with the same key. For example, nonces used as challenges in challenge-response authentication protocols must not be repeated until authentication keys are changed. Otherwise, there is a possibility of a replay attack. Using a nonce as a challenge is a different requirement than a random challenge, because a nonce is not necessarily unpredictable.\n pairwise pseudonymous identifier\n A pseudonymous identifier generated by an IdP for use at a specific RP.\n personal information\n See personally identifiable information.\n personally identifiable information (PII)\n Information that can be used to distinguish or trace an individual’s identity, either alone or when combined with other information that is linked or linkable to a specific individual. [A-130]\n predictability\n Enabling reliable assumptions by individuals, owners, and operators about PII and its processing by an information system. [NISTIR8062]\n private key\n In asymmetric key cryptography, the private key (i.e., a secret key) is a mathematical key used to create digital signatures and, depending on the algorithm, decrypt messages or files that are encrypted with the corresponding public key. In symmetric key cryptography, the same private key is used for both encryption and decryption.\n processing\n Operation or set of operations performed upon PII that can include, but is not limited to, the collection, retention, logging, generation, transformation, use, disclosure, transfer, and disposal of PII. 
[NISTIR8062]\n protected session\n A session in which messages between two participants are encrypted and integrity is protected using a set of shared secrets called “session keys.”\n\n A protected session is said to be authenticated if — during the session — one participant proves possession of one or more authenticators in addition to the session keys, and if the other party can verify the identity associated with the authenticators. If both participants are authenticated, the protected session is said to be mutually authenticated.\n \n Provisioning API\n A protected API that allows an RP to access identity attributes for multiple subscribers for the purposes of provisioning and managing RP subscriber accounts.\n pseudonymous identifier\n A meaningless but unique identifier that does not allow the RP to infer anything regarding the subscriber but that does permit the RP to associate multiple interactions with a single subscriber.\n public key\n The public part of an asymmetric key pair that is used to verify signatures or encrypt data.\n public key certificate\n A digital document issued and digitally signed by the private key of a certificate authority that binds an identifier to a subscriber’s public key. The certificate indicates that the subscriber identified in the certificate has sole control of and access to the private key. 
See also [RFC5280].\n public key infrastructure (PKI)\n A set of policies, processes, server platforms, software, and workstations used to administer certificates and public-private key pairs, including the ability to issue, maintain, and revoke public key certificates.\n reauthentication\n The process of confirming the subscriber’s continued presence and intent to be authenticated during an extended usage session.\n relying party (RP)\n An entity that relies upon a verifier’s assertion of a subscriber’s identity, typically to process a transaction or grant access to information or a system.\n replay attack\n An attack in which the attacker is able to replay previously captured messages (between a legitimate claimant and a verifier) to masquerade as that claimant to the verifier or vice versa.\n risk assessment\n The process of identifying, estimating, and prioritizing risks to organizational operations (i.e., mission, functions, image, or reputation), organizational assets, individuals, and other organizations that result from the operation of a system. A risk assessment is part of risk management, incorporates threat and vulnerability analyses, and considers mitigations provided by security controls that are planned or in-place. It is synonymous with “risk analysis.”\n risk management\n The program and supporting processes that manage information security risk to organizational operations (including mission, functions, image, reputation), organizational assets, individuals, and other organizations and includes (i) establishing the context for risk-related activities, (ii) assessing risk, (iii) responding to risk once determined, and (iv) monitoring risk over time.\n RP subscriber account\n An account established and managed by the RP in a federated system based on the RP’s view of the subscriber account from the IdP. 
An RP subscriber account is associated with one or more federated identifiers and allows the subscriber to access the account through a federation transaction with the IdP.\n security domain\n A set of systems under common administrative and access control.\n session\n A persistent interaction between a subscriber and an endpoint, either an RP or a CSP. A session begins with an authentication event and ends with a session termination event. A session is bound by the use of a session secret that the subscriber’s software (e.g., a browser, application, or OS) can present to the RP to prove association of the session with the authentication event.\n session hijack attack\n An attack in which the attacker is able to insert themselves between a claimant and a verifier subsequent to a successful authentication exchange between the latter two parties. The attacker is able to pose as a subscriber to the verifier or vice versa to control session data exchange. Sessions between the claimant and the RP can be similarly compromised.\n single sign-on (SSO)\n An authentication process by which one account and its authenticators are used to access multiple applications in a seamless manner, generally implemented with a federation protocol.\n subject\n A person, organization, device, hardware, network, software, or service. In these guidelines, a subject is a natural person.\n subscriber\n An individual enrolled in the CSP identity service.\n subscriber account\n An account established by the CSP containing information and authenticators registered for each subscriber enrolled in the CSP identity service.\n symmetric key\n A cryptographic key used to perform both the cryptographic operation and its inverse (e.g., to encrypt and decrypt, or to create a message authentication code and verify the code).\n Transport Layer Security (TLS)\n An authentication and security protocol widely implemented in browsers and web servers. TLS is defined by [RFC5246]. 
TLS is similar to the older SSL protocol, and TLS 1.0 is effectively SSL version 3.1. SP 800-52, Guidelines for the Selection and Use of Transport Layer Security (TLS) Implementations [SP800-52], specifies how TLS is to be used in government applications.\n trust agreement\n A set of conditions under which a CSP, IdP, and RP are allowed to participate in a federation transaction for the purposes of establishing an authentication session between the subscriber and the RP.\n usability\n The extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use. [ISO/IEC9241-11]\n verifier\n An entity that verifies the claimant’s identity by verifying the claimant’s possession and control of one or more authenticators using an authentication protocol. To do this, the verifier needs to confirm the binding of the authenticators with the subscriber account and check that the subscriber account is active.\n\n"
} ,
{
"title" : "Note to Reviewers",
"category" : "",
"tags" : "",
"url" : "/800-63-4/sp800-63c/reviewers/",
"date" : "2024-08-28 20:39:12 -0500",
"content" : "Note to Reviewers\n\nIn December 2022, NIST released the Initial Public Draft (IPD) of SP 800-63, Revision 4. Over the course of a 119-day public comment period, the authors received exceptional feedback from a broad community of interested entities and individuals. The input from nearly 4,000 specific comments has helped advance the improvement of these Digital Identity Guidelines in a manner that supports NIST’s critical goals of providing foundational risk management processes and requirements that enable the implementation of secure, private, equitable, and accessible identity systems. Based on this initial wave of feedback, several substantive changes have been made across all of the volumes. These changes include but are not limited to the following:\n\n\n Updated text and context setting for risk management. Specifically, the authors have modified the process defined in the IPD to include a context-setting step of defining and understanding the online service that the organization is offering and intending to potentially protect with identity systems.\n Added recommended continuous evaluation metrics. The continuous improvement section introduced by the IPD has been expanded to include a set of recommended metrics for holistically evaluating identity solution performance. These are recommended due to the complexities of data streams and variances in solution deployments.\n Expanded fraud requirements and recommendations. Programmatic fraud management requirements for credential service providers and relying parties now address issues and challenges that may result from the implementation of fraud checks.\n Restructured the identity proofing controls. There is a new taxonomy and structure for the requirements at each assurance level based on the means of providing the proofing: Remote Unattended, Remote Attended (e.g., video session), Onsite Unattended (e.g., kiosk), and Onsite Attended (e.g., in-person).\n Integrated syncable authenticators. 
In April 2024, NIST published interim guidance for syncable authenticators. This guidance has been integrated into SP 800-63B as normative text and is provided for public feedback as part of the Revision 4 volume set.\n Added user-controlled wallets to the federation model. Digital wallets and credentials (called “attribute bundles” in SP 800-63C) are seeing increased attention and adoption. At their core, they function like a federated IdP, generating signed assertions about a subject. Specific requirements for this presentation and the emerging context are presented in SP 800-63C-4.\n\n\nThe rapid proliferation of online services over the past few years has heightened the need for reliable, equitable, secure, and privacy-protective digital identity solutions.\nRevision 4 of NIST Special Publication SP 800-63, Digital Identity Guidelines, intends to respond to the changing digital landscape that has emerged since the last major revision of this suite was published in 2017, including the real-world implications of online risks. 
The guidelines present the process and technical requirements for meeting digital identity management assurance levels for identity proofing, authentication, and federation, including requirements for security and privacy as well as considerations for fostering equity and the usability of digital identity solutions and technology.\n\nBased on the feedback provided in response to the June 2020 Pre-Draft Call for Comments, research into real-world implementations of the guidelines, market innovation, and the current threat environment, this draft seeks to:\n\n\n Address comments received in response to the IPD of Revision 4 of SP 800-63\n Clarify the text to address the questions and issues raised in the public comments\n Update all four volumes of SP 800-63 based on current technology and market developments, the changing digital identity threat landscape, and organizational needs for digital identity solutions to address online security, privacy, usability, and equity\n\n\nNIST is specifically interested in comments and recommendations on the following topics:\n\n\n \n Federation and Assertions\n\n \n Is the concept of user-controlled wallets and attribute bundles sufficiently and clearly described to support real-world implementations? Are there additional requirements or considerations that should be added to improve the security, usability, and privacy of these technologies?\n \n \n \n General\n\n \n What specific implementation guidance, reference architectures, metrics, or other supporting resources could enable more rapid adoption and implementation of this and future iterations of the Digital Identity Guidelines?\n What applied research and measurement efforts would provide the greatest impacts on the identity market and advancement of these guidelines?\n \n \n\n\nReviewers are encouraged to comment and suggest changes to the text of all four draft volumes of the SP 800-63-4 suite. 
NIST requests that all comments be submitted by 11:59pm Eastern Time on October 7th, 2024. Please submit your comments to [email protected]. NIST will review all comments and make them available on the NIST Identity and Access Management website. Commenters are encouraged to use the comment template provided on the NIST Computer Security Resource Center website for responses to these notes to reviewers and for specific comments on the text of the four-volume suite.\n"
} ,
{
"title" : "Purpose",
"category" : "",
"tags" : "",
"url" : "/800-63-4/sp800-63c/preface/",
"date" : "2024-08-28 20:39:12 -0500",
"content" : "Preface\n\nThis publication and its companion volumes, [SP800-63], [SP800-63A], and [SP800-63B], provide technical guidelines to organizations for the implementation of digital identity services.\n\nThis document, SP 800-63C, provides requirements to identity providers (IdPs) and relying parties (RPs) of federated identity systems. Federation allows a given IdP to provide authentication attributes and (optionally) subscriber attributes to a number of separately-administered RPs through the use of federation protocols and assertions. Similarly, RPs can use more than one IdP as sources of identities.\n"
} ,
{
"title" : "References",
"category" : "",
"tags" : "",
"url" : "/800-63-4/sp800-63c/references/",
"date" : "2024-08-28 20:39:12 -0500",
"content" : "References\n\nThis section is informative.\n\n[A-130] Office of Management and Budget (2016) Managing Information as a Strategic Resource. (The White House, Washington, DC), OMB Circular A-130, July 28, 2016. Available at https://obamawhitehouse.archives.gov/sites/default/files/omb/assets/OMB/circulars/a130/a130revised.pdf\n\n[EO13985] Biden J (2021) Advancing Racial Equity and Support for Underserved Communities Through the Federal Government. (The White House, Washington, DC), Executive Order 13985, January 25, 2021. https://www.federalregister.gov/documents/2021/01/25/2021-01753/advancing-racial-equity-and-support-for-underserved-communities-through-the-federal-government\n\n[FAPI] Fett D, Bradley J, Heenan J (2024), FAPI 2.0 Security Profile (draft). (OpenID Foundation, San Ramon, CA). https://openid.bitbucket.io/fapi/fapi-2_0-security-profile.html\n\n[FEDRAMP] General Services Administration (2022), How to Become FedRAMP Authorized. Available at https://www.fedramp.gov/\n\n[FIPS140] National Institute of Standards and Technology (2019) Security Requirements for Cryptographic Modules. (U.S. Department of Commerce, Washington, DC), Federal Information Processing Standards Publication (FIPS) 140-3. https://doi.org/10.6028/NIST.FIPS.140-3\n\n[ISO/IEC9241-11] International Standards Organization (2018) ISO/IEC 9241-11 Ergonomics of human-system interaction – Part 11: Usability: Definitions and concepts (ISO, Geneva, Switzerland). Available at https://www.iso.org/standard/63500.html\n\n[ISO/IEC18013-5] International Standards Organization (2021) ISO/IEC 18013-5 Personal identification — ISO-compliant driving licence — Part 5: Mobile driving licence (mDL) application (ISO, Geneva, Switzerland). Available at https://www.iso.org/obp/ui/en/#iso:std:iso-iec:18013:-5:ed-1:v1:en\n\n[NISTIR8062] Brooks S, Garcia M, Lefkovitz N, Lightman S, Nadeau E (2017) An Introduction to Privacy Engineering and Risk Management in Federal Systems. 
(National Institute of Standards and Technology, Gaithersburg, MD), NIST Interagency or Internal Report (IR) 8062, January 2017. https://doi.org/10.6028/NIST.IR.8062\n\n[NISTIR8112] Grassi PA, Lefkovitz NB, Nadeau EM, Galluzzo RJ, Dinh AT (2018) Attribute Metadata: A Proposed Schema for Evaluating Federated Attributes. (National Institute of Standards and Technology, Gaithersburg, MD), NIST Interagency or Internal Report (IR) 8112. https://pages.nist.gov/NISTIR-8112/NISTIR-8112.html\n\n[OIDC] Sakimura N, Bradley J, Jones M, de Medeiros B, Mortimore C (2014) OpenID Connect Core 1.0 incorporating errata set 1 (OpenID Foundation, San Ramon, CA). https://openid.net/specs/openid-connect-core-1_0.html\n\n[OIDC-Basic] Sakimura N, Bradley J, Jones M, de Medeiros B, Mortimore C (2022) OpenID Connect Basic Client Implementer’s Guide 1.0 (OpenID Foundation, San Ramon, CA). https://openid.net/specs/openid-connect-basic-1_0.html\n\n[OIDC-Implicit] Sakimura N, Bradley J, Jones M, de Medeiros B, Mortimore C (2022) OpenID Connect Implicit Client Implementer’s Guide 1.0 (OpenID Foundation, San Ramon, CA). https://openid.net/specs/openid-connect-implicit-1_0.html\n\n[OIDC-Registration] Sakimura N, Bradley J, Jones M (2023) OpenID Connect Dynamic Client Registration 1.0 incorporating errata set 2 (OpenID Foundation, San Ramon, CA). https://openid.net/specs/openid-connect-registration-1_0.html\n\n[RFC5246] Rescorla E, Dierks T (2008) The Transport Layer Security (TLS) Protocol Version 1.2. (Internet Engineering Task Force (IETF)), IETF Request for Comments (RFC) 5246. https://doi.org/10.17487/RFC5246\n\n[RFC5280] Cooper D, Santesson S, Farrell S, Boeyen S, Housley R, Polk W (2008) Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile. (Internet Engineering Task Force (IETF)), IETF Request for Comments (RFC) 5280. 
https://doi.org/10.17487/RFC5280\n\n[RFC7591] Richer J, Jones M, Bradley J, Machulak M, Hunt P (2015) OAuth 2.0 Dynamic Client Registration Protocol. (Internet Engineering Task Force (IETF)), IETF Request for Comments (RFC) 7591. https://doi.org/10.17487/RFC7591\n\n[RFC7636] Sakimura N, Bradley J, Agarwal N (2015) Proof Key for Code Exchange by OAuth Public Clients. (Internet Engineering Task Force (IETF)), IETF Request for Comments (RFC) 7636. https://doi.org/10.17487/RFC7636\n\n[RFC9325] Sheffer Y, Saint-Andre P, Fossati T (2022) Recommendations for Secure Use of Transport Layer Security (TLS) and Datagram Transport Layer Security (DTLS). (Internet Engineering Task Force (IETF)), IETF Request for Comments (RFC) 9325. https://doi.org/10.17487/RFC9325\n\n[SAML] Ragouzis N, Hughes J, Philpott R, Maler E, Madsen P, Scavo T (2008) Security Assertion Markup Language (SAML) V2.0 Technical Overview. (Organization for the Advancement of Structured Information Standards (OASIS) Open, Woburn, MA), SAML 2.0. https://docs.oasis-open.org/security/saml/Post2.0/sstc-saml-tech-overview-2.0.html\n\n[SAML-Bindings] Cantor S, Hirsch F, Kemp J, Philpott R, Maler E (2005) Bindings for the OASIS Security Assertion Markup Language (SAML) V2.0. (Organization for the Advancement of Structured Information Standards (OASIS) Open, Woburn, MA), SAML 2.0. https://docs.oasis-open.org/security/saml/v2.0/saml-bindings-2.0-os.pdf\n\n[SAML-WebSSO] Hughes J, Cantor S, Hodges J, Hirsch F, Mishra P, Philpott R, Maler E (2005) Profiles for the OASIS Security Assertion Markup Language (SAML) V2.0. (Organization for the Advancement of Structured Information Standards (OASIS) Open, Woburn, MA), SAML Profiles 2.0. https://docs.oasis-open.org/security/saml/v2.0/saml-profiles-2.0-os.pdf\n\n[Section508] General Services Administration (2022) IT Accessibility Laws and Policies. Available at https://www.section508.gov/manage/laws-and-policies/\n\n[SD-JWT] Fett D, Yasuda K, Campbell B (2024) Selective Disclosure for JWTs (SD-JWT). 
(Internet Engineering Task Force (IETF)). Active Internet-Draft. https://datatracker.ietf.org/doc/draft-ietf-oauth-selective-disclosure-jwt/\n\n[SP800-52] McKay K, Cooper D (2019) Guidelines for the Selection, Configuration, and Use of Transport Layer Security (TLS) Implementations. (National Institute of Standards and Technology, Gaithersburg, MD), NIST Special Publication (SP) 800-52 Rev. 2. https://doi.org/10.6028/NIST.SP.800-52r2\n\n[SP800-53] Joint Task Force (2020) Security and Privacy Controls for Information Systems and Organizations. (National Institute of Standards and Technology, Gaithersburg, MD), NIST Special Publication (SP) 800-53 Rev. 5, Includes updates as of December 10, 2020. https://doi.org/10.6028/NIST.SP.800-53r5\n\n[SP800-63] Temoshok D, Proud-Madruga D, Choong YY, Galluzzo R, Gupta S, LaSalle C, Lefkovitz N, Regenscheid A (2024) Digital Identity Guidelines. (National Institute of Standards and Technology, Gaithersburg, MD), NIST Special Publication (SP) 800-63-4 2pd. https://doi.org/10.6028/NIST.SP.800-63-4.2pd\n\n[SP800-63A] Temoshok D, Abruzzi C, Choong YY, Fenton JL, Galluzzo R, LaSalle C, Lefkovitz N, Regenscheid A (2024) Digital Identity Guidelines: Identity Proofing and Enrollment. (National Institute of Standards and Technology, Gaithersburg, MD), NIST Special Publication (SP) 800-63A-4 2pd. https://doi.org/10.6028/NIST.SP.800-63a-4.2pd\n\n[SP800-63B] Temoshok D, Fenton JL, Choong YY, Lefkovitz N, Regenscheid A, Galluzzo R, Richer JP (2024) Digital Identity Guidelines: Authentication and Authenticator Management. (National Institute of Standards and Technology, Gaithersburg, MD), NIST Special Publication (SP) 800-63B-4 2pd. https://doi.org/10.6028/NIST.SP.800-63b-4.2pd\n\n[SP800-131A] Barker E, Roginsky A (2019) Transitioning the Use of Cryptographic Algorithms and Key Lengths. (National Institute of Standards and Technology, Gaithersburg, MD), NIST Special Publication (SP) 800-131Ar2. https://doi.org/10.6028/NIST.SP.800-131Ar2\n"
}
]