From 820c7c4d5540e5a5202a155f4b451e8cafd36cf5 Mon Sep 17 00:00:00 2001 From: awssdkgo Date: Tue, 14 Nov 2023 19:38:13 +0000 Subject: [PATCH] Release v1.47.11 (2023-11-14) === ### Service Client Updates * `service/backup`: Updates service API, documentation, and paginators * `service/cleanrooms`: Updates service API and documentation * `service/connect`: Updates service API and documentation * `service/glue`: Updates service API, documentation, and paginators * Introduces new storage optimization APIs to support automatic compaction of Apache Iceberg tables. * `service/iot`: Updates service API and documentation * This release introduces new attributes in API CreateSecurityProfile, UpdateSecurityProfile and DescribeSecurityProfile to support management of Metrics Export for AWS IoT Device Defender Detect. * `service/lambda`: Updates service API * Add Python 3.12 (python3.12) support to AWS Lambda * `service/mediatailor`: Updates service API * `service/pipes`: Updates service API and documentation * `service/resource-explorer-2`: Updates service API, documentation, and paginators * `service/sagemaker`: Updates service API and documentation * This release makes Model Registry Inference Specification fields as not required. * `service/signer`: Updates service documentation * Documentation updates for AWS Signer * `service/states`: Updates service API and documentation * This release adds support to redrive executions in AWS Step Functions with a new RedriveExecution operation. ### SDK Bugs * `aws/defaults`: Feature updates to endpoint credentials provider. * Add support for dynamic auth token from file and EKS container host in configured URI. --- CHANGELOG.md | 27 + CHANGELOG_PENDING.md | 2 - aws/endpoints/defaults.go | 98 + aws/version.go | 2 +- models/apis/backup/2018-11-15/api-2.json | 302 ++- models/apis/backup/2018-11-15/docs-2.json | 155 +- .../2018-11-15/endpoint-rule-set-1.json | 40 +- .../backup/2018-11-15/endpoint-tests-1.json | 11 + .../apis/backup/2018-11-15/paginators-1.json | 15 + models/apis/cleanrooms/2022-02-17/api-2.json | 77 +- models/apis/cleanrooms/2022-02-17/docs-2.json | 50 +- .../2022-02-17/endpoint-rule-set-1.json | 64 +- models/apis/connect/2017-08-08/api-2.json | 24 +- models/apis/connect/2017-08-08/docs-2.json | 24 + models/apis/glue/2017-03-31/api-2.json | 324 +++ models/apis/glue/2017-03-31/docs-2.json | 230 +- models/apis/glue/2017-03-31/paginators-1.json | 5 + models/apis/iot/2015-05-28/api-2.json | 37 +- models/apis/iot/2015-05-28/docs-2.json | 37 +- .../iot/2015-05-28/endpoint-rule-set-1.json | 40 +- models/apis/lambda/2015-03-31/api-2.json | 3 +- models/apis/mediatailor/2018-04-23/api-2.json | 18 +- .../2018-04-23/endpoint-rule-set-1.json | 40 +- models/apis/pipes/2015-10-07/api-2.json | 134 +- models/apis/pipes/2015-10-07/docs-2.json | 153 +- .../pipes/2015-10-07/endpoint-rule-set-1.json | 64 +- .../pipes/2015-10-07/endpoint-tests-1.json | 123 +- .../resource-explorer-2/2022-07-28/api-2.json | 172 +- .../2022-07-28/docs-2.json | 101 +- .../2022-07-28/endpoint-rule-set-1.json | 272 ++- .../2022-07-28/endpoint-tests-1.json | 6 + .../2022-07-28/paginators-1.json | 6 + models/apis/sagemaker/2017-07-24/api-2.json | 6 +- models/apis/sagemaker/2017-07-24/docs-2.json | 6 +- models/apis/signer/2017-08-25/docs-2.json | 82 +- .../2017-08-25/endpoint-rule-set-1.json | 366 ++-- models/apis/states/2016-11-23/api-2.json | 119 +- models/apis/states/2016-11-23/docs-2.json | 97 +- .../2016-11-23/endpoint-rule-set-1.json | 386 ++-- models/endpoints/endpoints.json | 74 +- 
service/backup/api.go | 1733 ++++++++++++++- service/backup/backupiface/interface.go | 21 + service/cleanrooms/api.go | 355 ++- service/connect/api.go | 76 + service/glue/api.go | 1896 +++++++++++++++++ service/glue/glueiface/interface.go | 27 + service/iot/api.go | 157 +- service/lambda/api.go | 4 + service/pipes/api.go | 763 ++++++- service/resourceexplorer2/api.go | 756 ++++++- service/resourceexplorer2/errors.go | 17 +- .../resourceexplorer2iface/interface.go | 11 + service/sagemaker/api.go | 20 +- service/sfn/api.go | 672 +++++- service/sfn/doc.go | 5 + service/sfn/errors.go | 13 + service/sfn/sfniface/interface.go | 4 + service/signer/api.go | 126 +- service/signer/doc.go | 15 +- 59 files changed, 9294 insertions(+), 1169 deletions(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index 23cdae1be33..514a005c891 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,3 +1,30 @@ +Release v1.47.11 (2023-11-14) +=== + +### Service Client Updates +* `service/backup`: Updates service API, documentation, and paginators +* `service/cleanrooms`: Updates service API and documentation +* `service/connect`: Updates service API and documentation +* `service/glue`: Updates service API, documentation, and paginators + * Introduces new storage optimization APIs to support automatic compaction of Apache Iceberg tables. +* `service/iot`: Updates service API and documentation + * This release introduces new attributes in API CreateSecurityProfile, UpdateSecurityProfile and DescribeSecurityProfile to support management of Metrics Export for AWS IoT Device Defender Detect. +* `service/lambda`: Updates service API + * Add Python 3.12 (python3.12) support to AWS Lambda +* `service/mediatailor`: Updates service API +* `service/pipes`: Updates service API and documentation +* `service/resource-explorer-2`: Updates service API, documentation, and paginators +* `service/sagemaker`: Updates service API and documentation + * This release makes Model Registry Inference Specification fields as not required. +* `service/signer`: Updates service documentation + * Documentation updates for AWS Signer +* `service/states`: Updates service API and documentation + * This release adds support to redrive executions in AWS Step Functions with a new RedriveExecution operation. + +### SDK Bugs +* `aws/defaults`: Feature updates to endpoint credentials provider. + * Add support for dynamic auth token from file and EKS container host in configured URI. + Release v1.47.10 (2023-11-13) === diff --git a/CHANGELOG_PENDING.md b/CHANGELOG_PENDING.md index 388b1943547..8a1927a39ca 100644 --- a/CHANGELOG_PENDING.md +++ b/CHANGELOG_PENDING.md @@ -3,5 +3,3 @@ ### SDK Enhancements ### SDK Bugs -* `aws/defaults`: Feature updates to endpoint credentials provider. - * Add support for dynamic auth token from file and EKS container host in configured URI. 
\ No newline at end of file diff --git a/aws/endpoints/defaults.go b/aws/endpoints/defaults.go index 7ebdf4325c8..ee86c50ee1f 100644 --- a/aws/endpoints/defaults.go +++ b/aws/endpoints/defaults.go @@ -6229,6 +6229,12 @@ var awsPartition = partition{ endpointKey{ Region: "ap-southeast-2", }: endpoint{}, + endpointKey{ + Region: "ap-southeast-3", + }: endpoint{}, + endpointKey{ + Region: "ap-southeast-4", + }: endpoint{}, endpointKey{ Region: "ca-central-1", }: endpoint{}, @@ -6250,6 +6256,9 @@ var awsPartition = partition{ endpointKey{ Region: "eu-south-1", }: endpoint{}, + endpointKey{ + Region: "eu-south-2", + }: endpoint{}, endpointKey{ Region: "eu-west-1", }: endpoint{}, @@ -6304,6 +6313,9 @@ var awsPartition = partition{ }, Deprecated: boxedTrue, }, + endpointKey{ + Region: "il-central-1", + }: endpoint{}, endpointKey{ Region: "me-central-1", }: endpoint{}, @@ -7002,6 +7014,14 @@ var awsPartition = partition{ Region: "ap-south-1", }, }, + endpointKey{ + Region: "ap-south-2", + }: endpoint{ + Hostname: "compute-optimizer.ap-south-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "ap-south-2", + }, + }, endpointKey{ Region: "ap-southeast-1", }: endpoint{ @@ -7018,6 +7038,22 @@ var awsPartition = partition{ Region: "ap-southeast-2", }, }, + endpointKey{ + Region: "ap-southeast-3", + }: endpoint{ + Hostname: "compute-optimizer.ap-southeast-3.amazonaws.com", + CredentialScope: credentialScope{ + Region: "ap-southeast-3", + }, + }, + endpointKey{ + Region: "ap-southeast-4", + }: endpoint{ + Hostname: "compute-optimizer.ap-southeast-4.amazonaws.com", + CredentialScope: credentialScope{ + Region: "ap-southeast-4", + }, + }, endpointKey{ Region: "ca-central-1", }: endpoint{ @@ -7034,6 +7070,14 @@ var awsPartition = partition{ Region: "eu-central-1", }, }, + endpointKey{ + Region: "eu-central-2", + }: endpoint{ + Hostname: "compute-optimizer.eu-central-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "eu-central-2", + }, + }, endpointKey{ Region: "eu-north-1", }: endpoint{ @@ -7050,6 +7094,14 @@ var awsPartition = partition{ Region: "eu-south-1", }, }, + endpointKey{ + Region: "eu-south-2", + }: endpoint{ + Hostname: "compute-optimizer.eu-south-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "eu-south-2", + }, + }, endpointKey{ Region: "eu-west-1", }: endpoint{ @@ -7074,6 +7126,22 @@ var awsPartition = partition{ Region: "eu-west-3", }, }, + endpointKey{ + Region: "il-central-1", + }: endpoint{ + Hostname: "compute-optimizer.il-central-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "il-central-1", + }, + }, + endpointKey{ + Region: "me-central-1", + }: endpoint{ + Hostname: "compute-optimizer.me-central-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "me-central-1", + }, + }, endpointKey{ Region: "me-south-1", }: endpoint{ @@ -35266,12 +35334,42 @@ var awsusgovPartition = partition{ }, "appconfigdata": service{ Endpoints: serviceEndpoints{ + endpointKey{ + Region: "fips-us-gov-east-1", + }: endpoint{ + Hostname: "appconfigdata.us-gov-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-east-1", + }, + Deprecated: boxedTrue, + }, + endpointKey{ + Region: "fips-us-gov-west-1", + }: endpoint{ + Hostname: "appconfigdata.us-gov-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-west-1", + }, + Deprecated: boxedTrue, + }, endpointKey{ Region: "us-gov-east-1", }: endpoint{}, + endpointKey{ + Region: "us-gov-east-1", + Variant: fipsVariant, + }: endpoint{ + Hostname: 
"appconfigdata.us-gov-east-1.amazonaws.com", + }, endpointKey{ Region: "us-gov-west-1", }: endpoint{}, + endpointKey{ + Region: "us-gov-west-1", + Variant: fipsVariant, + }: endpoint{ + Hostname: "appconfigdata.us-gov-west-1.amazonaws.com", + }, }, }, "application-autoscaling": service{ diff --git a/aws/version.go b/aws/version.go index 15415c994a3..7fa477b8323 100644 --- a/aws/version.go +++ b/aws/version.go @@ -5,4 +5,4 @@ package aws const SDKName = "aws-sdk-go" // SDKVersion is the version of this SDK -const SDKVersion = "1.47.10" +const SDKVersion = "1.47.11" diff --git a/models/apis/backup/2018-11-15/api-2.json b/models/apis/backup/2018-11-15/api-2.json index 3a24469319d..c094b764c8f 100644 --- a/models/apis/backup/2018-11-15/api-2.json +++ b/models/apis/backup/2018-11-15/api-2.json @@ -639,6 +639,19 @@ {"shape":"ServiceUnavailableException"} ] }, + "ListBackupJobSummaries":{ + "name":"ListBackupJobSummaries", + "http":{ + "method":"GET", + "requestUri":"/audit/backup-job-summaries" + }, + "input":{"shape":"ListBackupJobSummariesInput"}, + "output":{"shape":"ListBackupJobSummariesOutput"}, + "errors":[ + {"shape":"InvalidParameterValueException"}, + {"shape":"ServiceUnavailableException"} + ] + }, "ListBackupJobs":{ "name":"ListBackupJobs", "http":{ @@ -732,6 +745,19 @@ ], "idempotent":true }, + "ListCopyJobSummaries":{ + "name":"ListCopyJobSummaries", + "http":{ + "method":"GET", + "requestUri":"/audit/copy-job-summaries" + }, + "input":{"shape":"ListCopyJobSummariesInput"}, + "output":{"shape":"ListCopyJobSummariesOutput"}, + "errors":[ + {"shape":"InvalidParameterValueException"}, + {"shape":"ServiceUnavailableException"} + ] + }, "ListCopyJobs":{ "name":"ListCopyJobs", "http":{ @@ -874,6 +900,19 @@ {"shape":"ServiceUnavailableException"} ] }, + "ListRestoreJobSummaries":{ + "name":"ListRestoreJobSummaries", + "http":{ + "method":"GET", + "requestUri":"/audit/restore-job-summaries" + }, + "input":{"shape":"ListRestoreJobSummariesInput"}, + "output":{"shape":"ListRestoreJobSummariesOutput"}, + "errors":[ + {"shape":"InvalidParameterValueException"}, + {"shape":"ServiceUnavailableException"} + ] + }, "ListRestoreJobs":{ "name":"ListRestoreJobs", "http":{ @@ -1181,6 +1220,14 @@ "type":"list", "member":{"shape":"AdvancedBackupSetting"} }, + "AggregationPeriod":{ + "type":"string", + "enum":[ + "ONE_DAY", + "SEVEN_DAYS", + "FOURTEEN_DAYS" + ] + }, "AlreadyExistsException":{ "type":"structure", "members":{ @@ -1218,7 +1265,8 @@ "BackupType":{"shape":"string"}, "ParentJobId":{"shape":"string"}, "IsParent":{"shape":"boolean"}, - "ResourceName":{"shape":"string"} + "ResourceName":{"shape":"string"}, + "MessageCategory":{"shape":"string"} } }, "BackupJobChildJobsInState":{ @@ -1240,6 +1288,39 @@ "PARTIAL" ] }, + "BackupJobStatus":{ + "type":"string", + "enum":[ + "CREATED", + "PENDING", + "RUNNING", + "ABORTING", + "ABORTED", + "COMPLETED", + "FAILED", + "EXPIRED", + "PARTIAL", + "AGGREGATE_ALL", + "ANY" + ] + }, + "BackupJobSummary":{ + "type":"structure", + "members":{ + "Region":{"shape":"Region"}, + "AccountId":{"shape":"AccountId"}, + "State":{"shape":"BackupJobStatus"}, + "ResourceType":{"shape":"ResourceType"}, + "MessageCategory":{"shape":"MessageCategory"}, + "Count":{"shape":"integer"}, + "StartTime":{"shape":"timestamp"}, + "EndTime":{"shape":"timestamp"} + } + }, + "BackupJobSummaryList":{ + "type":"list", + "member":{"shape":"BackupJobSummary"} + }, "BackupJobsList":{ "type":"list", "member":{"shape":"BackupJob"} @@ -1596,7 +1677,8 @@ 
"CompositeMemberIdentifier":{"shape":"string"}, "NumberOfChildJobs":{"shape":"Long"}, "ChildJobsInState":{"shape":"CopyJobChildJobsInState"}, - "ResourceName":{"shape":"string"} + "ResourceName":{"shape":"string"}, + "MessageCategory":{"shape":"string"} } }, "CopyJobChildJobsInState":{ @@ -1614,6 +1696,39 @@ "PARTIAL" ] }, + "CopyJobStatus":{ + "type":"string", + "enum":[ + "CREATED", + "RUNNING", + "ABORTING", + "ABORTED", + "COMPLETING", + "COMPLETED", + "FAILING", + "FAILED", + "PARTIAL", + "AGGREGATE_ALL", + "ANY" + ] + }, + "CopyJobSummary":{ + "type":"structure", + "members":{ + "Region":{"shape":"Region"}, + "AccountId":{"shape":"AccountId"}, + "State":{"shape":"CopyJobStatus"}, + "ResourceType":{"shape":"ResourceType"}, + "MessageCategory":{"shape":"MessageCategory"}, + "Count":{"shape":"integer"}, + "StartTime":{"shape":"timestamp"}, + "EndTime":{"shape":"timestamp"} + } + }, + "CopyJobSummaryList":{ + "type":"list", + "member":{"shape":"CopyJobSummary"} + }, "CopyJobsList":{ "type":"list", "member":{"shape":"CopyJob"} @@ -1973,7 +2088,8 @@ "IsParent":{"shape":"boolean"}, "NumberOfChildJobs":{"shape":"Long"}, "ChildJobsInState":{"shape":"BackupJobChildJobsInState"}, - "ResourceName":{"shape":"string"} + "ResourceName":{"shape":"string"}, + "MessageCategory":{"shape":"string"} } }, "DescribeBackupVaultInput":{ @@ -2581,6 +2697,54 @@ }, "exception":true }, + "ListBackupJobSummariesInput":{ + "type":"structure", + "members":{ + "AccountId":{ + "shape":"AccountId", + "location":"querystring", + "locationName":"AccountId" + }, + "State":{ + "shape":"BackupJobStatus", + "location":"querystring", + "locationName":"State" + }, + "ResourceType":{ + "shape":"ResourceType", + "location":"querystring", + "locationName":"ResourceType" + }, + "MessageCategory":{ + "shape":"MessageCategory", + "location":"querystring", + "locationName":"MessageCategory" + }, + "AggregationPeriod":{ + "shape":"AggregationPeriod", + "location":"querystring", + "locationName":"AggregationPeriod" + }, + "MaxResults":{ + "shape":"MaxResults", + "location":"querystring", + "locationName":"MaxResults" + }, + "NextToken":{ + "shape":"string", + "location":"querystring", + "locationName":"NextToken" + } + } + }, + "ListBackupJobSummariesOutput":{ + "type":"structure", + "members":{ + "BackupJobSummaries":{"shape":"BackupJobSummaryList"}, + "AggregationPeriod":{"shape":"string"}, + "NextToken":{"shape":"string"} + } + }, "ListBackupJobsInput":{ "type":"structure", "members":{ @@ -2643,6 +2807,11 @@ "shape":"string", "location":"querystring", "locationName":"parentJobId" + }, + "ByMessageCategory":{ + "shape":"string", + "location":"querystring", + "locationName":"messageCategory" } } }, @@ -2790,6 +2959,54 @@ "NextToken":{"shape":"string"} } }, + "ListCopyJobSummariesInput":{ + "type":"structure", + "members":{ + "AccountId":{ + "shape":"AccountId", + "location":"querystring", + "locationName":"AccountId" + }, + "State":{ + "shape":"CopyJobStatus", + "location":"querystring", + "locationName":"State" + }, + "ResourceType":{ + "shape":"ResourceType", + "location":"querystring", + "locationName":"ResourceType" + }, + "MessageCategory":{ + "shape":"MessageCategory", + "location":"querystring", + "locationName":"MessageCategory" + }, + "AggregationPeriod":{ + "shape":"AggregationPeriod", + "location":"querystring", + "locationName":"AggregationPeriod" + }, + "MaxResults":{ + "shape":"MaxResults", + "location":"querystring", + "locationName":"MaxResults" + }, + "NextToken":{ + "shape":"string", + "location":"querystring", + 
"locationName":"NextToken" + } + } + }, + "ListCopyJobSummariesOutput":{ + "type":"structure", + "members":{ + "CopyJobSummaries":{"shape":"CopyJobSummaryList"}, + "AggregationPeriod":{"shape":"string"}, + "NextToken":{"shape":"string"} + } + }, "ListCopyJobsInput":{ "type":"structure", "members":{ @@ -2852,6 +3069,11 @@ "shape":"string", "location":"querystring", "locationName":"parentJobId" + }, + "ByMessageCategory":{ + "shape":"string", + "location":"querystring", + "locationName":"messageCategory" } } }, @@ -3148,6 +3370,49 @@ "NextToken":{"shape":"string"} } }, + "ListRestoreJobSummariesInput":{ + "type":"structure", + "members":{ + "AccountId":{ + "shape":"AccountId", + "location":"querystring", + "locationName":"AccountId" + }, + "State":{ + "shape":"RestoreJobState", + "location":"querystring", + "locationName":"State" + }, + "ResourceType":{ + "shape":"ResourceType", + "location":"querystring", + "locationName":"ResourceType" + }, + "AggregationPeriod":{ + "shape":"AggregationPeriod", + "location":"querystring", + "locationName":"AggregationPeriod" + }, + "MaxResults":{ + "shape":"MaxResults", + "location":"querystring", + "locationName":"MaxResults" + }, + "NextToken":{ + "shape":"string", + "location":"querystring", + "locationName":"NextToken" + } + } + }, + "ListRestoreJobSummariesOutput":{ + "type":"structure", + "members":{ + "RestoreJobSummaries":{"shape":"RestoreJobSummaryList"}, + "AggregationPeriod":{"shape":"string"}, + "NextToken":{"shape":"string"} + } + }, "ListRestoreJobsInput":{ "type":"structure", "members":{ @@ -3239,6 +3504,7 @@ "max":1000, "min":1 }, + "MessageCategory":{"type":"string"}, "Metadata":{ "type":"map", "key":{"shape":"MetadataKey"}, @@ -3404,6 +3670,7 @@ "type":"list", "member":{"shape":"RecoveryPointMember"} }, + "Region":{"type":"string"}, "ReportDeliveryChannel":{ "type":"structure", "required":["S3BucketName"], @@ -3521,6 +3788,19 @@ "member":{"shape":"ResourceType"} }, "RestoreJobId":{"type":"string"}, + "RestoreJobState":{ + "type":"string", + "enum":[ + "CREATED", + "PENDING", + "RUNNING", + "ABORTED", + "COMPLETED", + "FAILED", + "AGGREGATE_ALL", + "ANY" + ] + }, "RestoreJobStatus":{ "type":"string", "enum":[ @@ -3531,6 +3811,22 @@ "FAILED" ] }, + "RestoreJobSummary":{ + "type":"structure", + "members":{ + "Region":{"shape":"Region"}, + "AccountId":{"shape":"AccountId"}, + "State":{"shape":"RestoreJobState"}, + "ResourceType":{"shape":"ResourceType"}, + "Count":{"shape":"integer"}, + "StartTime":{"shape":"timestamp"}, + "EndTime":{"shape":"timestamp"} + } + }, + "RestoreJobSummaryList":{ + "type":"list", + "member":{"shape":"RestoreJobSummary"} + }, "RestoreJobsList":{ "type":"list", "member":{"shape":"RestoreJobsListMember"} diff --git a/models/apis/backup/2018-11-15/docs-2.json b/models/apis/backup/2018-11-15/docs-2.json index 41b3814247c..17bc48343c8 100644 --- a/models/apis/backup/2018-11-15/docs-2.json +++ b/models/apis/backup/2018-11-15/docs-2.json @@ -42,12 +42,14 @@ "GetLegalHold": "

This action returns details for a specified legal hold. The details are the body of a legal hold in JSON format, in addition to metadata.

", "GetRecoveryPointRestoreMetadata": "

Returns a set of metadata key-value pairs that were used to create the backup.

", "GetSupportedResourceTypes": "

Returns the Amazon Web Services resource types supported by Backup.

", + "ListBackupJobSummaries": "

This is a request for a summary of backup jobs created or running within the most recent 30 days. You can include parameters AccountID, State, ResourceType, MessageCategory, AggregationPeriod, MaxResults, or NextToken to filter results.

This request returns a summary that contains Region, Account, State, ResourceType, MessageCategory, StartTime, EndTime, and Count of included jobs.

", "ListBackupJobs": "

Returns a list of existing backup jobs for an authenticated account for the last 30 days. For a longer period of time, consider using these monitoring tools.

", "ListBackupPlanTemplates": "

Returns metadata of your saved backup plan templates, including the template ID, name, and the creation and deletion dates.

", "ListBackupPlanVersions": "

Returns version metadata of your backup plans, including Amazon Resource Names (ARNs), backup plan IDs, creation and deletion dates, plan names, and version IDs.

", "ListBackupPlans": "

Returns a list of all active backup plans for an authenticated account. The list contains information such as Amazon Resource Names (ARNs), plan IDs, creation and deletion dates, version IDs, plan names, and creator request IDs.

", "ListBackupSelections": "

Returns an array containing metadata of the resources associated with the target backup plan.

", "ListBackupVaults": "

Returns a list of recovery point storage containers along with information about them.

", + "ListCopyJobSummaries": "

This request obtains a list of copy jobs created or running within the most recent 30 days. You can include parameters AccountID, State, ResourceType, MessageCategory, AggregationPeriod, MaxResults, or NextToken to filter results.

This request returns a summary that contains Region, Account, State, ResourceType, MessageCategory, StartTime, EndTime, and Count of included jobs.

", "ListCopyJobs": "

Returns metadata about your copy jobs.

", "ListFrameworks": "

Returns a list of all frameworks for an Amazon Web Services account and Amazon Web Services Region.

", "ListLegalHolds": "

This action returns metadata about active and previous legal holds.

", @@ -58,6 +60,7 @@ "ListRecoveryPointsByResource": "

Returns detailed information about all the recovery points of the type specified by a resource Amazon Resource Name (ARN).

For Amazon EFS and Amazon EC2, this action only lists recovery points created by Backup.

", "ListReportJobs": "

Returns details about your report jobs.

", "ListReportPlans": "

Returns a list of your report plans. For detailed information about a single report plan, use DescribeReportPlan.

", + "ListRestoreJobSummaries": "

This request obtains a summary of restore jobs created or running within the most recent 30 days. You can include parameters AccountID, State, ResourceType, AggregationPeriod, MaxResults, or NextToken to filter results.

This request returns a summary that contains Region, Account, State, ResourceType, MessageCategory, StartTime, EndTime, and Count of included jobs.

", "ListRestoreJobs": "

Returns a list of jobs that Backup initiated to restore a saved resource, including details about the recovery process.

", "ListTags": "

Returns a list of key-value pairs assigned to a target recovery point, backup plan, or backup vault.

ListTags only works for resource types that support full Backup management of their backups. Those resource types are listed in the \"Full Backup management\" section of the Feature availability by resource table.

", "PutBackupVaultAccessPolicy": "

Sets a resource-based policy that is used to manage access permissions on the target backup vault. Requires a backup vault name and an access policy document in JSON format.

", @@ -176,16 +179,22 @@ "base": null, "refs": { "BackupJob$AccountId": "

The account ID that owns the backup job.

", + "BackupJobSummary$AccountId": "

The account ID that owns the jobs within the summary.

", "CopyJob$AccountId": "

The account ID that owns the copy job.

", + "CopyJobSummary$AccountId": "

The account ID that owns the jobs within the summary.

", "DescribeBackupJobOutput$AccountId": "

Returns the account ID that owns the backup job.

", "DescribeRecoveryPointInput$BackupVaultAccountId": "

This is the account ID of the specified backup vault.

", "DescribeRestoreJobOutput$AccountId": "

Returns the account ID that owns the restore job.

", "GetRecoveryPointRestoreMetadataInput$BackupVaultAccountId": "

This is the account ID of the specified backup vault.

", + "ListBackupJobSummariesInput$AccountId": "

Returns the job count for the specified account.

If the request is sent from a member account or an account not part of Amazon Web Services Organizations, jobs within the requestor's account will be returned.

Root, admin, and delegated administrator accounts can use the value ANY to return job counts from every account in the organization.

AGGREGATE_ALL aggregates job counts from all accounts within the authenticated organization, then returns the sum.

", "ListBackupJobsInput$ByAccountId": "

The account ID to list the jobs from. Returns only backup jobs associated with the specified account ID.

If used from an Organizations management account, passing * returns all jobs across the organization.

", + "ListCopyJobSummariesInput$AccountId": "

Returns the job count for the specified account.

If the request is sent from a member account or an account not part of Amazon Web Services Organizations, jobs within the requestor's account will be returned.

Root, admin, and delegated administrator accounts can use the value ANY to return job counts from every account in the organization.

AGGREGATE_ALL aggregates job counts from all accounts within the authenticated organization, then returns the sum.

", "ListCopyJobsInput$ByAccountId": "

The account ID to list the jobs from. Returns only copy jobs associated with the specified account ID.

", "ListProtectedResourcesByBackupVaultInput$BackupVaultAccountId": "

This is the list of protected resources by backup vault within the vault(s) you specify by account ID.

", "ListRecoveryPointsByBackupVaultInput$BackupVaultAccountId": "

This parameter will sort the list of recovery points by account ID.

", + "ListRestoreJobSummariesInput$AccountId": "

Returns the job count for the specified account.

If the request is sent from a member account or an account not part of Amazon Web Services Organizations, jobs within the requestor's account will be returned.

Root, admin, and delegated administrator accounts can use the value ANY to return job counts from every account in the organization.

AGGREGATE_ALL aggregates job counts from all accounts within the authenticated organization, then returns the sum.

", "ListRestoreJobsInput$ByAccountId": "

The account ID to list the jobs from. Returns only restore jobs associated with the specified account ID.

", + "RestoreJobSummary$AccountId": "

The account ID that owns the jobs within the summary.

", "RestoreJobsListMember$AccountId": "

The account ID that owns the restore job.

" } }, @@ -206,6 +215,14 @@ "UpdateBackupPlanOutput$AdvancedBackupSettings": "

Contains a list of BackupOptions for each resource type.

" } }, + "AggregationPeriod": { + "base": null, + "refs": { + "ListBackupJobSummariesInput$AggregationPeriod": "

This is the period that sets the boundaries for returned results.

Acceptable values include ONE_DAY, SEVEN_DAYS, and FOURTEEN_DAYS.

", + "ListCopyJobSummariesInput$AggregationPeriod": "

This is the period that sets the boundaries for returned results.

", + "ListRestoreJobSummariesInput$AggregationPeriod": "

This is the period that sets the boundaries for returned results.

Acceptable values include ONE_DAY, SEVEN_DAYS, and FOURTEEN_DAYS.

" + } + }, "AlreadyExistsException": { "base": "

The required resource already exists.

", "refs": { @@ -232,6 +249,25 @@ "ListBackupJobsInput$ByState": "

Returns only backup jobs that are in the specified state.

" } }, + "BackupJobStatus": { + "base": null, + "refs": { + "BackupJobSummary$State": "

This value is the job count for jobs with the specified state.

", + "ListBackupJobSummariesInput$State": "

This parameter returns the job count for jobs with the specified state.

The value ANY returns the count of all states.

AGGREGATE_ALL aggregates job counts for all states and returns the sum.

" + } + }, + "BackupJobSummary": { + "base": "

This is a summary of jobs created or running within the most recent 30 days.

The returned summary may contain the following: Region, Account, State, ResourceType, MessageCategory, StartTime, EndTime, and Count of included jobs.

", + "refs": { + "BackupJobSummaryList$member": null + } + }, + "BackupJobSummaryList": { + "base": null, + "refs": { + "ListBackupJobSummariesOutput$BackupJobSummaries": "

This request returns a summary that contains Region, Account, State, ResourceType, MessageCategory, StartTime, EndTime, and Count of included jobs.

" + } + }, "BackupJobsList": { "base": null, "refs": { @@ -578,6 +614,25 @@ "ListCopyJobsInput$ByState": "

Returns only copy jobs that are in the specified state.

" } }, + "CopyJobStatus": { + "base": null, + "refs": { + "CopyJobSummary$State": "

This value is the job count for jobs with the specified state.

", + "ListCopyJobSummariesInput$State": "

This parameter returns the job count for jobs with the specified state.

The value ANY returns the count of all states.

AGGREGATE_ALL aggregates job counts for all states and returns the sum.

" + } + }, + "CopyJobSummary": { + "base": "

This is a summary of copy jobs created or running within the most recent 30 days.

The returned summary may contain the following: Region, Account, State, ResourceType, MessageCategory, StartTime, EndTime, and Count of included jobs.

", + "refs": { + "CopyJobSummaryList$member": null + } + }, + "CopyJobSummaryList": { + "base": null, + "refs": { + "ListCopyJobSummariesOutput$CopyJobSummaries": "

This return shows a summary that contains Region, Account, State, ResourceType, MessageCategory, StartTime, EndTime, and Count of included jobs.

" + } + }, "CopyJobsList": { "base": null, "refs": { @@ -1095,6 +1150,16 @@ "refs": { } }, + "ListBackupJobSummariesInput": { + "base": null, + "refs": { + } + }, + "ListBackupJobSummariesOutput": { + "base": null, + "refs": { + } + }, "ListBackupJobsInput": { "base": null, "refs": { @@ -1155,6 +1220,16 @@ "refs": { } }, + "ListCopyJobSummariesInput": { + "base": null, + "refs": { + } + }, + "ListCopyJobSummariesOutput": { + "base": null, + "refs": { + } + }, "ListCopyJobsInput": { "base": null, "refs": { @@ -1261,6 +1336,16 @@ "refs": { } }, + "ListRestoreJobSummariesInput": { + "base": null, + "refs": { + } + }, + "ListRestoreJobSummariesOutput": { + "base": null, + "refs": { + } + }, "ListRestoreJobsInput": { "base": null, "refs": { @@ -1323,12 +1408,14 @@ "MaxResults": { "base": null, "refs": { + "ListBackupJobSummariesInput$MaxResults": "

This parameter sets the maximum number of items to be returned.

The value is an integer. Range of accepted values is from 1 to 500.

", "ListBackupJobsInput$MaxResults": "

The maximum number of items to be returned.

", "ListBackupPlanTemplatesInput$MaxResults": "

The maximum number of items to be returned.

", "ListBackupPlanVersionsInput$MaxResults": "

The maximum number of items to be returned.

", "ListBackupPlansInput$MaxResults": "

The maximum number of items to be returned.

", "ListBackupSelectionsInput$MaxResults": "

The maximum number of items to be returned.

", "ListBackupVaultsInput$MaxResults": "

The maximum number of items to be returned.

", + "ListCopyJobSummariesInput$MaxResults": "

This parameter sets the maximum number of items to be returned.

The value is an integer. Range of accepted values is from 1 to 500.

", "ListCopyJobsInput$MaxResults": "

The maximum number of items to be returned.

", "ListLegalHoldsInput$MaxResults": "

The maximum number of resource list items to be returned.

", "ListProtectedResourcesByBackupVaultInput$MaxResults": "

The maximum number of items to be returned.

", @@ -1338,10 +1425,20 @@ "ListRecoveryPointsByResourceInput$MaxResults": "

The maximum number of items to be returned.

Amazon RDS requires a value of at least 20.

", "ListReportJobsInput$MaxResults": "

The number of desired results from 1 to 1000. Optional. If unspecified, the query will return 1 MB of data.

", "ListReportPlansInput$MaxResults": "

The number of desired results from 1 to 1000. Optional. If unspecified, the query will return 1 MB of data.

", + "ListRestoreJobSummariesInput$MaxResults": "

This parameter sets the maximum number of items to be returned.

The value is an integer. Range of accepted values is from 1 to 500.

", "ListRestoreJobsInput$MaxResults": "

The maximum number of items to be returned.

", "ListTagsInput$MaxResults": "

The maximum number of items to be returned.

" } }, + "MessageCategory": { + "base": null, + "refs": { + "BackupJobSummary$MessageCategory": "

This parameter is the job count for the specified message category.

Example strings include AccessDenied, Success, and InvalidParameters. See Monitoring for a list of MessageCategory strings.

The value ANY returns the count of all message categories.

AGGREGATE_ALL aggregates job counts for all message categories and returns the sum.

", + "CopyJobSummary$MessageCategory": "

This parameter is the job count for the specified message category.

Example strings include AccessDenied, Success, and InvalidParameters. See Monitoring for a list of MessageCategory strings.

The value ANY returns the count of all message categories.

AGGREGATE_ALL aggregates job counts for all message categories and returns the sum.

", + "ListBackupJobSummariesInput$MessageCategory": "

This parameter returns the job count for the specified message category.

Example accepted strings include AccessDenied, Success, and InvalidParameters. See Monitoring for a list of accepted MessageCategory strings.

The value ANY returns the count of all message categories.

AGGREGATE_ALL aggregates job counts for all message categories and returns the sum.

", + "ListCopyJobSummariesInput$MessageCategory": "

This parameter returns the job count for the specified message category.

Example accepted strings include AccessDenied, Success, and InvalidParameters. See Monitoring for a list of accepted MessageCategory strings.

The value ANY returns the count of all message categories.

AGGREGATE_ALL aggregates job counts for all message categories and returns the sum.

" + } + }, "Metadata": { "base": null, "refs": { @@ -1468,6 +1565,14 @@ "ListRecoveryPointsByLegalHoldOutput$RecoveryPoints": "

This is a list of the recovery points returned by ListRecoveryPointsByLegalHold.

" } }, + "Region": { + "base": null, + "refs": { + "BackupJobSummary$Region": "

The Amazon Web Services Regions within the job summary.

", + "CopyJobSummary$Region": "

This is the Amazon Web Services Region within the job summary.

", + "RestoreJobSummary$Region": "

The Amazon Web Services Regions within the job summary.

" + } + }, "ReportDeliveryChannel": { "base": "

Contains information from your report plan about where to deliver your reports, specifically your Amazon S3 bucket name, S3 key prefix, and the formats of your reports.

", "refs": { @@ -1569,20 +1674,26 @@ "refs": { "AdvancedBackupSetting$ResourceType": "

Specifies an object containing resource type and backup options. The only supported resource type is Amazon EC2 instances with Windows Volume Shadow Copy Service (VSS). For a CloudFormation example, see the sample CloudFormation template to enable Windows VSS in the Backup User Guide.

Valid values: EC2.

", "BackupJob$ResourceType": "

The type of Amazon Web Services resource to be backed up; for example, an Amazon Elastic Block Store (Amazon EBS) volume or an Amazon Relational Database Service (Amazon RDS) database. For Windows Volume Shadow Copy Service (VSS) backups, the only supported resource type is Amazon EC2.

", + "BackupJobSummary$ResourceType": "

This value is the job count for the specified resource type. The request GetSupportedResourceTypes returns strings for supported resource types.

", "CopyJob$ResourceType": "

The type of Amazon Web Services resource to be copied; for example, an Amazon Elastic Block Store (Amazon EBS) volume or an Amazon Relational Database Service (Amazon RDS) database.

", + "CopyJobSummary$ResourceType": "

This value is the job count for the specified resource type. The request GetSupportedResourceTypes returns strings for supported resource types.

", "DescribeBackupJobOutput$ResourceType": "

The type of Amazon Web Services resource to be backed up; for example, an Amazon Elastic Block Store (Amazon EBS) volume or an Amazon Relational Database Service (Amazon RDS) database.

", "DescribeProtectedResourceOutput$ResourceType": "

The type of Amazon Web Services resource saved as a recovery point; for example, an Amazon EBS volume or an Amazon RDS database.

", "DescribeRecoveryPointOutput$ResourceType": "

The type of Amazon Web Services resource to save as a recovery point; for example, an Amazon Elastic Block Store (Amazon EBS) volume or an Amazon Relational Database Service (Amazon RDS) database.

", "DescribeRestoreJobOutput$ResourceType": "

Returns metadata associated with a restore job listed by resource type.

", + "ListBackupJobSummariesInput$ResourceType": "

Returns the job count for the specified resource type. Use request GetSupportedResourceTypes to obtain strings for supported resource types.

The value ANY returns the count of all resource types.

AGGREGATE_ALL aggregates job counts for all resource types and returns the sum.

The type of Amazon Web Services resource to be backed up; for example, an Amazon Elastic Block Store (Amazon EBS) volume or an Amazon Relational Database Service (Amazon RDS) database.

", "ListBackupJobsInput$ByResourceType": "

Returns only backup jobs for the specified resources:

", + "ListCopyJobSummariesInput$ResourceType": "

Returns the job count for the specified resource type. Use request GetSupportedResourceTypes to obtain strings for supported resource types.

The value ANY returns the count of all resource types.

AGGREGATE_ALL aggregates job counts for all resource types and returns the sum.

The type of Amazon Web Services resource to be backed up; for example, an Amazon Elastic Block Store (Amazon EBS) volume or an Amazon Relational Database Service (Amazon RDS) database.

", "ListCopyJobsInput$ByResourceType": "

Returns only backup jobs for the specified resources:

", "ListRecoveryPointsByBackupVaultInput$ByResourceType": "

Returns only recovery points that match the specified resource type.

", + "ListRestoreJobSummariesInput$ResourceType": "

Returns the job count for the specified resource type. Use request GetSupportedResourceTypes to obtain strings for supported resource types.

The value ANY returns the count of all resource types.

AGGREGATE_ALL aggregates job counts for all resource types and returns the sum.

The type of Amazon Web Services resource to be backed up; for example, an Amazon Elastic Block Store (Amazon EBS) volume or an Amazon Relational Database Service (Amazon RDS) database.

", "ProtectedResource$ResourceType": "

The type of Amazon Web Services resource; for example, an Amazon Elastic Block Store (Amazon EBS) volume or an Amazon Relational Database Service (Amazon RDS) database. For Windows Volume Shadow Copy Service (VSS) backups, the only supported resource type is Amazon EC2.

", "RecoveryPointByBackupVault$ResourceType": "

The type of Amazon Web Services resource saved as a recovery point; for example, an Amazon Elastic Block Store (Amazon EBS) volume or an Amazon Relational Database Service (Amazon RDS) database. For Windows Volume Shadow Copy Service (VSS) backups, the only supported resource type is Amazon EC2.

", "RecoveryPointMember$ResourceType": "

This is the Amazon Web Services resource type that is saved as a recovery point.

", "ResourceTypeManagementPreference$key": null, "ResourceTypeOptInPreference$key": null, "ResourceTypes$member": null, + "RestoreJobSummary$ResourceType": "

This value is the job count for the specified resource type. The request GetSupportedResourceTypes returns strings for supported resource types.

", "RestoreJobsListMember$ResourceType": "

The resource type of the listed restore jobs; for example, an Amazon Elastic Block Store (Amazon EBS) volume or an Amazon Relational Database Service (Amazon RDS) database. For Windows Volume Shadow Copy Service (VSS) backups, the only supported resource type is Amazon EC2.

", "StartRestoreJobInput$ResourceType": "

Starts a job to restore a recovery point for one of the following resources:

" } @@ -1620,6 +1731,13 @@ "StartRestoreJobOutput$RestoreJobId": "

Uniquely identifies the job that restores a recovery point.

" } }, + "RestoreJobState": { + "base": null, + "refs": { + "ListRestoreJobSummariesInput$State": "

This parameter returns the job count for jobs with the specified state.

The value ANY returns the count of all states.

AGGREGATE_ALL aggregates job counts for all states and returns the sum.

", + "RestoreJobSummary$State": "

This value is the job count for jobs with the specified state.

" + } + }, "RestoreJobStatus": { "base": null, "refs": { @@ -1628,6 +1746,18 @@ "RestoreJobsListMember$Status": "

A status code specifying the state of the job initiated by Backup to restore a recovery point.

" } }, + "RestoreJobSummary": { + "base": "

This is a summary of restore jobs created or running within the most recent 30 days.

The returned summary may contain the following: Region, Account, State, ResourceType, MessageCategory, StartTime, EndTime, and Count of included jobs.

", + "refs": { + "RestoreJobSummaryList$member": null + } + }, + "RestoreJobSummaryList": { + "base": null, + "refs": { + "ListRestoreJobSummariesOutput$RestoreJobSummaries": "

This return contains a summary that contains Region, Account, State, ResourceType, MessageCategory, StartTime, EndTime, and Count of included jobs.

" + } + }, "RestoreJobsList": { "base": null, "refs": { @@ -1845,8 +1975,11 @@ "integer": { "base": null, "refs": { + "BackupJobSummary$Count": "

The number of jobs in a job summary.

", + "CopyJobSummary$Count": "

The number of jobs in a job summary.

", "Framework$NumberOfControls": "

The number of controls contained by the framework.

", - "ReportSetting$NumberOfFrameworks": "

The number of frameworks a report covers.

" + "ReportSetting$NumberOfFrameworks": "

The number of frameworks a report covers.

", + "RestoreJobSummary$Count": "

The number of jobs in a job summary.

" } }, "long": { @@ -1871,6 +2004,7 @@ "BackupJob$BackupType": "

Represents the type of backup for a backup job.

", "BackupJob$ParentJobId": "

This uniquely identifies a request to Backup to back up a resource. The return will be the parent (composite) job ID.

", "BackupJob$ResourceName": "

This is the non-unique name of the resource that belongs to the specified backup.

", + "BackupJob$MessageCategory": "

This parameter is the job count for the specified message category.

Example strings include AccessDenied, Success, and InvalidParameters. See Monitoring for a list of MessageCategory strings.

The value ANY returns the count of all message categories.

AGGREGATE_ALL aggregates job counts for all message categories and returns the sum.

", "BackupPlanTemplatesListMember$BackupPlanTemplateId": "

Uniquely identifies a stored backup plan template.

", "BackupPlanTemplatesListMember$BackupPlanTemplateName": "

The optional display name of a backup plan template.

", "BackupPlansListMember$BackupPlanId": "

Uniquely identifies a backup plan.

", @@ -1893,6 +2027,7 @@ "CopyJob$ParentJobId": "

This uniquely identifies a request to Backup to copy a resource. The return will be the parent (composite) job ID.

", "CopyJob$CompositeMemberIdentifier": "

This is the identifier of a resource within a composite group, such as nested (child) recovery point belonging to a composite (parent) stack. The ID is transferred from the logical ID within a stack.

", "CopyJob$ResourceName": "

This is the non-unique name of the resource that belongs to the specified backup.

", + "CopyJob$MessageCategory": "

This parameter is the job count for the specified message category.

Example strings include AccessDenied, Success, and InvalidParameters. See Monitoring for a list of MessageCategory strings.

The value ANY returns the count of all message categories.

AGGREGATE_ALL aggregates job counts for all message categories and returns the sum.

", "CreateBackupPlanInput$CreatorRequestId": "

Identifies the request and allows failed requests to be retried without the risk of running the operation twice. If the request includes a CreatorRequestId that matches an existing backup plan, that plan is returned. This parameter is optional.

If used, this parameter must contain 1 to 50 alphanumeric or '-_.' characters.

", "CreateBackupPlanOutput$BackupPlanId": "

Uniquely identifies a backup plan.

", "CreateBackupPlanOutput$VersionId": "

Unique, randomly generated, Unicode, UTF-8 encoded strings that are at most 1,024 bytes long. They cannot be edited.

", @@ -1927,6 +2062,7 @@ "DescribeBackupJobOutput$BackupType": "

Represents the actual backup type selected for a backup job. For example, if a successful Windows Volume Shadow Copy Service (VSS) backup was taken, BackupType returns \"WindowsVSS\". If BackupType is empty, then the backup type was a regular backup.

", "DescribeBackupJobOutput$ParentJobId": "

This returns the parent (composite) resource backup job ID.

", "DescribeBackupJobOutput$ResourceName": "

This is the non-unique name of the resource that belongs to the specified backup.

", + "DescribeBackupJobOutput$MessageCategory": "

This is the job count for the specified message category.

Example strings may include AccessDenied, Success, and InvalidParameters. See Monitoring for a list of MessageCategory strings.

", "DescribeBackupVaultInput$BackupVaultName": "

The name of a logical container where backups are stored. Backup vaults are identified by names that are unique to the account used to create them and the Amazon Web Services Region where they are created. They consist of lowercase letters, numbers, and hyphens.

", "DescribeBackupVaultInput$BackupVaultAccountId": "

This is the account ID of the specified backup vault.

", "DescribeBackupVaultOutput$BackupVaultName": "

The name of a logical container where backups are stored. Backup vaults are identified by names that are unique to the account used to create them and the Region where they are created. They consist of lowercase letters, numbers, and hyphens.

", @@ -1982,8 +2118,12 @@ "LimitExceededException$Message": null, "LimitExceededException$Type": "

", "LimitExceededException$Context": "

", + "ListBackupJobSummariesInput$NextToken": "

The next item following a partial list of returned resources. For example, if a request is made to return maxResults number of resources, NextToken allows you to return more items in your list starting at the location pointed to by the next token.

", + "ListBackupJobSummariesOutput$AggregationPeriod": "

This is the period that sets the boundaries for returned results.

", + "ListBackupJobSummariesOutput$NextToken": "

The next item following a partial list of returned resources. For example, if a request is made to return maxResults number of resources, NextToken allows you to return more items in your list starting at the location pointed to by the next token.

", "ListBackupJobsInput$NextToken": "

The next item following a partial list of returned items. For example, if a request is made to return maxResults number of items, NextToken allows you to return more items in your list starting at the location pointed to by the next token.

", "ListBackupJobsInput$ByParentJobId": "

This is a filter to list child (nested) jobs based on parent job ID.

", + "ListBackupJobsInput$ByMessageCategory": "

This returns a list of backup jobs for the specified message category.

Example strings may include AccessDenied, Success, and InvalidParameters. See Monitoring for a list of MessageCategory strings.

", "ListBackupJobsOutput$NextToken": "

The next item following a partial list of returned items. For example, if a request is made to return maxResults number of items, NextToken allows you to return more items in your list starting at the location pointed to by the next token.

", "ListBackupPlanTemplatesInput$NextToken": "

The next item following a partial list of returned items. For example, if a request is made to return maxResults number of items, NextToken allows you to return more items in your list starting at the location pointed to by the next token.

", "ListBackupPlanTemplatesOutput$NextToken": "

The next item following a partial list of returned items. For example, if a request is made to return maxResults number of items, NextToken allows you to return more items in your list starting at the location pointed to by the next token.

", @@ -1997,9 +2137,13 @@ "ListBackupSelectionsOutput$NextToken": "

The next item following a partial list of returned items. For example, if a request is made to return maxResults number of items, NextToken allows you to return more items in your list starting at the location pointed to by the next token.

", "ListBackupVaultsInput$NextToken": "

The next item following a partial list of returned items. For example, if a request is made to return maxResults number of items, NextToken allows you to return more items in your list starting at the location pointed to by the next token.

", "ListBackupVaultsOutput$NextToken": "

The next item following a partial list of returned items. For example, if a request is made to return maxResults number of items, NextToken allows you to return more items in your list starting at the location pointed to by the next token.

", + "ListCopyJobSummariesInput$NextToken": "

The next item following a partial list of returned resources. For example, if a request is made to return maxResults number of resources, NextToken allows you to return more items in your list starting at the location pointed to by the next token.

", + "ListCopyJobSummariesOutput$AggregationPeriod": "

This is the period that sets the boundaries for returned results.

", + "ListCopyJobSummariesOutput$NextToken": "

The next item following a partial list of returned resources. For example, if a request is made to return maxResults number of resources, NextToken allows you to return more items in your list starting at the location pointed to by the next token.

", "ListCopyJobsInput$NextToken": "

The next item following a partial list of returned items. For example, if a request is made to return maxResults number of items, NextToken allows you to return more items in your list starting at the location pointed to by the next token.

", "ListCopyJobsInput$ByDestinationVaultArn": "

An Amazon Resource Name (ARN) that uniquely identifies a source backup vault to copy from; for example, arn:aws:backup:us-east-1:123456789012:vault:aBackupVault.

", "ListCopyJobsInput$ByParentJobId": "

This is a filter to list child (nested) jobs based on parent job ID.

", + "ListCopyJobsInput$ByMessageCategory": "

This parameter returns the job count for the specified message category.

Example accepted strings include AccessDenied, Success, and InvalidParameters. See Monitoring for a list of accepted MessageCategory strings.

The value ANY returns the count of all message categories.

AGGREGATE_ALL aggregates job counts for all message categories and returns the sum.

", "ListCopyJobsOutput$NextToken": "

The next item following a partial list of returned items. For example, if a request is made to return maxResults number of items, NextToken allows you to return more items in your list starting at the location pointed to by the next token.

", "ListFrameworksInput$NextToken": "

An identifier that was returned from the previous call to this operation, which can be used to return the next set of items in the list.

", "ListFrameworksOutput$NextToken": "

An identifier that was returned from the previous call to this operation, which can be used to return the next set of items in the list.

", @@ -2022,6 +2166,9 @@ "ListReportJobsOutput$NextToken": "

An identifier that was returned from the previous call to this operation, which can be used to return the next set of items in the list.

", "ListReportPlansInput$NextToken": "

An identifier that was returned from the previous call to this operation, which can be used to return the next set of items in the list.

", "ListReportPlansOutput$NextToken": "

An identifier that was returned from the previous call to this operation, which can be used to return the next set of items in the list.

", + "ListRestoreJobSummariesInput$NextToken": "

The next item following a partial list of returned resources. For example, if a request is made to return maxResults number of resources, NextToken allows you to return more items in your list starting at the location pointed to by the next token.

", + "ListRestoreJobSummariesOutput$AggregationPeriod": "

This is the period that sets the boundaries for returned results.

", + "ListRestoreJobSummariesOutput$NextToken": "

The next item following a partial list of returned resources. For example, if a request is made to return maxResults number of resources, NextToken allows you to return more items in your list starting at the location pointed to by the next token.

", "ListRestoreJobsInput$NextToken": "

The next item following a partial list of returned items. For example, if a request is made to return maxResults number of items, NextToken allows you to return more items in your list starting at the location pointed to by the next token.

", "ListRestoreJobsOutput$NextToken": "

The next item following a partial list of returned items. For example, if a request is made to return maxResults number of items, NextToken allows you to return more items in your list starting at the location pointed to by the next token.

", "ListTagsInput$NextToken": "

The next item following a partial list of returned items. For example, if a request is made to return maxResults number of items, NextToken allows you to return more items in your list starting at the location pointed to by the next token.

", @@ -2103,6 +2250,8 @@ "BackupJob$CompletionDate": "

The date and time a job to create a backup job is completed, in Unix format and Coordinated Universal Time (UTC). The value of CompletionDate is accurate to milliseconds. For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087 AM.

", "BackupJob$ExpectedCompletionDate": "

The date and time a job to back up resources is expected to be completed, in Unix format and Coordinated Universal Time (UTC). The value of ExpectedCompletionDate is accurate to milliseconds. For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087 AM.

", "BackupJob$StartBy": "

Specifies the time in Unix format and Coordinated Universal Time (UTC) when a backup job must be started before it is canceled. The value is calculated by adding the start window to the scheduled time. So if the scheduled time were 6:00 PM and the start window is 2 hours, the StartBy time would be 8:00 PM on the date specified. The value of StartBy is accurate to milliseconds. For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087 AM.

", + "BackupJobSummary$StartTime": "

The value of time in number format of a job start time.

This value is the time in Unix format, Coordinated Universal Time (UTC), and accurate to milliseconds. For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087 AM.

", + "BackupJobSummary$EndTime": "

The value of time in number format of a job end time.

This value is the time in Unix format, Coordinated Universal Time (UTC), and accurate to milliseconds. For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087 AM.

", "BackupPlansListMember$CreationDate": "

The date and time a resource backup plan is created, in Unix format and Coordinated Universal Time (UTC). The value of CreationDate is accurate to milliseconds. For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087 AM.

", "BackupPlansListMember$DeletionDate": "

The date and time a backup plan is deleted, in Unix format and Coordinated Universal Time (UTC). The value of DeletionDate is accurate to milliseconds. For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087 AM.

", "BackupPlansListMember$LastExecutionDate": "

The last time a job to back up resources was run with this rule. A date and time, in Unix format and Coordinated Universal Time (UTC). The value of LastExecutionDate is accurate to milliseconds. For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087 AM.

", @@ -2113,6 +2262,8 @@ "CalculatedLifecycle$DeleteAt": "

A timestamp that specifies when to delete a recovery point.

", "CopyJob$CreationDate": "

The date and time a copy job is created, in Unix format and Coordinated Universal Time (UTC). The value of CreationDate is accurate to milliseconds. For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087 AM.

", "CopyJob$CompletionDate": "

The date and time a copy job is completed, in Unix format and Coordinated Universal Time (UTC). The value of CompletionDate is accurate to milliseconds. For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087 AM.

", + "CopyJobSummary$StartTime": "

The start time of a job, in number format.

This value is the time in Unix format, Coordinated Universal Time (UTC), and accurate to milliseconds. For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087 AM.

", + "CopyJobSummary$EndTime": "

The end time of a job, in number format.

This value is the time in Unix format, Coordinated Universal Time (UTC), and accurate to milliseconds. For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087 AM.

", "CreateBackupPlanOutput$CreationDate": "

The date and time that a backup plan is created, in Unix format and Coordinated Universal Time (UTC). The value of CreationDate is accurate to milliseconds. For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087 AM.

", "CreateBackupSelectionOutput$CreationDate": "

The date and time a backup selection is created, in Unix format and Coordinated Universal Time (UTC). The value of CreationDate is accurate to milliseconds. For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087 AM.

", "CreateBackupVaultOutput$CreationDate": "

The date and time a backup vault is created, in Unix format and Coordinated Universal Time (UTC). The value of CreationDate is accurate to milliseconds. For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087 AM.

", @@ -2172,6 +2323,8 @@ "ReportPlan$CreationTime": "

The date and time that a report plan is created, in Unix format and Coordinated Universal Time (UTC). The value of CreationTime is accurate to milliseconds. For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087 AM.

", "ReportPlan$LastAttemptedExecutionTime": "

The date and time that a report job associated with this report plan last attempted to run, in Unix format and Coordinated Universal Time (UTC). The value of LastAttemptedExecutionTime is accurate to milliseconds. For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087 AM.

", "ReportPlan$LastSuccessfulExecutionTime": "

The date and time that a report job associated with this report plan last successfully ran, in Unix format and Coordinated Universal Time (UTC). The value of LastSuccessfulExecutionTime is accurate to milliseconds. For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087 AM.

", + "RestoreJobSummary$StartTime": "

The start time of a job, in number format.

This value is the time in Unix format, Coordinated Universal Time (UTC), and accurate to milliseconds. For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087 AM.

", + "RestoreJobSummary$EndTime": "

The end time of a job, in number format.

This value is the time in Unix format, Coordinated Universal Time (UTC), and accurate to milliseconds. For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087 AM.

", "RestoreJobsListMember$CreationDate": "

The date and time a restore job is created, in Unix format and Coordinated Universal Time (UTC). The value of CreationDate is accurate to milliseconds. For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087 AM.

", "RestoreJobsListMember$CompletionDate": "

The date and time a job to restore a recovery point is completed, in Unix format and Coordinated Universal Time (UTC). The value of CompletionDate is accurate to milliseconds. For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087 AM.

", "StartBackupJobOutput$CreationDate": "

The date and time that a backup job is created, in Unix format and Coordinated Universal Time (UTC). The value of CreationDate is accurate to milliseconds. For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087 AM.

", diff --git a/models/apis/backup/2018-11-15/endpoint-rule-set-1.json b/models/apis/backup/2018-11-15/endpoint-rule-set-1.json index dc2e1fc92d9..532b0e96566 100644 --- a/models/apis/backup/2018-11-15/endpoint-rule-set-1.json +++ b/models/apis/backup/2018-11-15/endpoint-rule-set-1.json @@ -40,7 +40,6 @@ ] } ], - "type": "tree", "rules": [ { "conditions": [ @@ -83,7 +82,8 @@ }, "type": "endpoint" } - ] + ], + "type": "tree" }, { "conditions": [ @@ -96,7 +96,6 @@ ] } ], - "type": "tree", "rules": [ { "conditions": [ @@ -110,7 +109,6 @@ "assign": "PartitionResult" } ], - "type": "tree", "rules": [ { "conditions": [ @@ -133,7 +131,6 @@ ] } ], - "type": "tree", "rules": [ { "conditions": [ @@ -168,7 +165,6 @@ ] } ], - "type": "tree", "rules": [ { "conditions": [], @@ -179,14 +175,16 @@ }, "type": "endpoint" } - ] + ], + "type": "tree" }, { "conditions": [], "error": "FIPS and DualStack are enabled, but this partition does not support one or both", "type": "error" } - ] + ], + "type": "tree" }, { "conditions": [ @@ -200,14 +198,12 @@ ] } ], - "type": "tree", "rules": [ { "conditions": [ { "fn": "booleanEquals", "argv": [ - true, { "fn": "getAttr", "argv": [ @@ -216,11 +212,11 @@ }, "supportsFIPS" ] - } + }, + true ] } ], - "type": "tree", "rules": [ { "conditions": [], @@ -231,14 +227,16 @@ }, "type": "endpoint" } - ] + ], + "type": "tree" }, { "conditions": [], "error": "FIPS is enabled but this partition does not support FIPS", "type": "error" } - ] + ], + "type": "tree" }, { "conditions": [ @@ -252,7 +250,6 @@ ] } ], - "type": "tree", "rules": [ { "conditions": [ @@ -272,7 +269,6 @@ ] } ], - "type": "tree", "rules": [ { "conditions": [], @@ -283,14 +279,16 @@ }, "type": "endpoint" } - ] + ], + "type": "tree" }, { "conditions": [], "error": "DualStack is enabled but this partition does not support DualStack", "type": "error" } - ] + ], + "type": "tree" }, { "conditions": [], @@ -301,9 +299,11 @@ }, "type": "endpoint" } - ] + ], + "type": "tree" } - ] + ], + "type": "tree" }, { "conditions": [], diff --git a/models/apis/backup/2018-11-15/endpoint-tests-1.json b/models/apis/backup/2018-11-15/endpoint-tests-1.json index 4985c744eb0..a040669ac04 100644 --- a/models/apis/backup/2018-11-15/endpoint-tests-1.json +++ b/models/apis/backup/2018-11-15/endpoint-tests-1.json @@ -607,6 +607,17 @@ "expect": { "error": "Invalid Configuration: Missing Region" } + }, + { + "documentation": "Partition doesn't support DualStack", + "expect": { + "error": "DualStack is enabled but this partition does not support DualStack" + }, + "params": { + "Region": "us-isob-east-1", + "UseFIPS": false, + "UseDualStack": true + } } ], "version": "1.0" diff --git a/models/apis/backup/2018-11-15/paginators-1.json b/models/apis/backup/2018-11-15/paginators-1.json index 7ad393927f0..9fce4ec3072 100644 --- a/models/apis/backup/2018-11-15/paginators-1.json +++ b/models/apis/backup/2018-11-15/paginators-1.json @@ -1,5 +1,10 @@ { "pagination": { + "ListBackupJobSummaries": { + "input_token": "NextToken", + "output_token": "NextToken", + "limit_key": "MaxResults" + }, "ListBackupJobs": { "input_token": "NextToken", "output_token": "NextToken", @@ -36,6 +41,11 @@ "limit_key": "MaxResults", "result_key": "BackupVaultList" }, + "ListCopyJobSummaries": { + "input_token": "NextToken", + "output_token": "NextToken", + "limit_key": "MaxResults" + }, "ListCopyJobs": { "input_token": "NextToken", "output_token": "NextToken", @@ -93,6 +103,11 @@ "output_token": "NextToken", "limit_key": "MaxResults" }, + "ListRestoreJobSummaries": { + 
"input_token": "NextToken", + "output_token": "NextToken", + "limit_key": "MaxResults" + }, "ListRestoreJobs": { "input_token": "NextToken", "output_token": "NextToken", diff --git a/models/apis/cleanrooms/2022-02-17/api-2.json b/models/apis/cleanrooms/2022-02-17/api-2.json index 4c4f0aa371e..eb86fc8135e 100644 --- a/models/apis/cleanrooms/2022-02-17/api-2.json +++ b/models/apis/cleanrooms/2022-02-17/api-2.json @@ -1088,7 +1088,7 @@ "type":"string", "max":36, "min":36, - "pattern":".*[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}.*" + "pattern":"[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}" }, "AnalysisTemplateSummary":{ "type":"structure", @@ -1122,7 +1122,7 @@ }, "AnalysisTemplateText":{ "type":"string", - "max":15000, + "max":90000, "min":0 }, "BatchGetCollaborationAnalysisTemplateError":{ @@ -1335,7 +1335,7 @@ "type":"string", "max":36, "min":36, - "pattern":".*[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}.*" + "pattern":"[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}" }, "CollaborationName":{ "type":"string", @@ -1523,7 +1523,7 @@ "type":"string", "max":36, "min":36, - "pattern":".*[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}.*" + "pattern":"[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}" }, "ConfiguredTableAssociationSummary":{ "type":"structure", @@ -1556,7 +1556,7 @@ "type":"string", "max":36, "min":36, - "pattern":".*[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}.*" + "pattern":"[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}" }, "ConfiguredTableSummary":{ "type":"structure", @@ -1652,7 +1652,8 @@ "creatorDisplayName":{"shape":"DisplayName"}, "dataEncryptionMetadata":{"shape":"DataEncryptionMetadata"}, "queryLogStatus":{"shape":"CollaborationQueryLogStatus"}, - "tags":{"shape":"TagMap"} + "tags":{"shape":"TagMap"}, + "creatorPaymentConfiguration":{"shape":"PaymentConfiguration"} } }, "CreateCollaborationOutput":{ @@ -1748,7 +1749,8 @@ "collaborationIdentifier":{"shape":"CollaborationIdentifier"}, "queryLogStatus":{"shape":"MembershipQueryLogStatus"}, "tags":{"shape":"TagMap"}, - "defaultResultConfiguration":{"shape":"MembershipProtectedQueryResultConfiguration"} + "defaultResultConfiguration":{"shape":"MembershipProtectedQueryResultConfiguration"}, + "paymentConfiguration":{"shape":"MembershipPaymentConfiguration"} } }, "CreateMembershipOutput":{ @@ -2549,7 +2551,8 @@ "members":{ "accountId":{"shape":"AccountId"}, "memberAbilities":{"shape":"MemberAbilities"}, - "displayName":{"shape":"DisplayName"} + "displayName":{"shape":"DisplayName"}, + "paymentConfiguration":{"shape":"PaymentConfiguration"} } }, "MemberStatus":{ @@ -2569,7 +2572,8 @@ "displayName", "abilities", "createTime", - "updateTime" + "updateTime", + "paymentConfiguration" ], "members":{ "accountId":{"shape":"AccountId"}, @@ -2579,7 +2583,8 @@ "createTime":{"shape":"Timestamp"}, "updateTime":{"shape":"Timestamp"}, "membershipId":{"shape":"UUID"}, - "membershipArn":{"shape":"MembershipArn"} + "membershipArn":{"shape":"MembershipArn"}, + "paymentConfiguration":{"shape":"PaymentConfiguration"} } }, "MemberSummaryList":{ @@ -2600,7 +2605,8 @@ "updateTime", "status", "memberAbilities", - "queryLogStatus" + "queryLogStatus", + "paymentConfiguration" ], "members":{ "id":{"shape":"UUID"}, @@ -2615,7 +2621,8 @@ "status":{"shape":"MembershipStatus"}, "memberAbilities":{"shape":"MemberAbilities"}, "queryLogStatus":{"shape":"MembershipQueryLogStatus"}, - 
"defaultResultConfiguration":{"shape":"MembershipProtectedQueryResultConfiguration"} + "defaultResultConfiguration":{"shape":"MembershipProtectedQueryResultConfiguration"}, + "paymentConfiguration":{"shape":"MembershipPaymentConfiguration"} } }, "MembershipArn":{ @@ -2628,7 +2635,14 @@ "type":"string", "max":36, "min":36, - "pattern":".*[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}.*" + "pattern":"[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}" + }, + "MembershipPaymentConfiguration":{ + "type":"structure", + "required":["queryCompute"], + "members":{ + "queryCompute":{"shape":"MembershipQueryComputePaymentConfig"} + } }, "MembershipProtectedQueryOutputConfiguration":{ "type":"structure", @@ -2645,6 +2659,13 @@ "roleArn":{"shape":"RoleArn"} } }, + "MembershipQueryComputePaymentConfig":{ + "type":"structure", + "required":["isResponsible"], + "members":{ + "isResponsible":{"shape":"Boolean"} + } + }, "MembershipQueryLogStatus":{ "type":"string", "enum":[ @@ -2673,7 +2694,8 @@ "createTime", "updateTime", "status", - "memberAbilities" + "memberAbilities", + "paymentConfiguration" ], "members":{ "id":{"shape":"UUID"}, @@ -2686,7 +2708,8 @@ "createTime":{"shape":"Timestamp"}, "updateTime":{"shape":"Timestamp"}, "status":{"shape":"MembershipStatus"}, - "memberAbilities":{"shape":"MemberAbilities"} + "memberAbilities":{"shape":"MemberAbilities"}, + "paymentConfiguration":{"shape":"MembershipPaymentConfiguration"} } }, "MembershipSummaryList":{ @@ -2734,6 +2757,13 @@ "max":250, "min":0 }, + "PaymentConfiguration":{ + "type":"structure", + "required":["queryCompute"], + "members":{ + "queryCompute":{"shape":"QueryComputePaymentConfig"} + } + }, "ProtectedQuery":{ "type":"structure", "required":[ @@ -2770,8 +2800,8 @@ "ProtectedQueryIdentifier":{ "type":"string", "max":36, - "min":1, - "pattern":".*[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}.*" + "min":36, + "pattern":"[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}" }, "ProtectedQueryMemberOutputList":{ "type":"list", @@ -2842,7 +2872,7 @@ }, "ProtectedQuerySQLParametersQueryStringString":{ "type":"string", - "max":15000, + "max":90000, "min":0 }, "ProtectedQuerySingleMemberOutput":{ @@ -2895,6 +2925,13 @@ "type":"string", "enum":["SQL"] }, + "QueryComputePaymentConfig":{ + "type":"structure", + "required":["isResponsible"], + "members":{ + "isResponsible":{"shape":"Boolean"} + } + }, "QueryTables":{ "type":"list", "member":{"shape":"TableAlias"} @@ -2949,7 +2986,7 @@ "type":"string", "max":512, "min":32, - "pattern":"arn:aws:iam::[\\w]+:role/[\\w+=,./@-]+" + "pattern":"arn:aws:iam::[\\w]+:role/[\\w+=./@-]+" }, "ScalarFunctions":{ "type":"string", @@ -3171,7 +3208,7 @@ "type":"string", "max":36, "min":36, - "pattern":".*[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}.*" + "pattern":"[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}" }, "UntagResourceInput":{ "type":"structure", diff --git a/models/apis/cleanrooms/2022-02-17/docs-2.json b/models/apis/cleanrooms/2022-02-17/docs-2.json index 8c44b43a4ce..5e02fbf574e 100644 --- a/models/apis/cleanrooms/2022-02-17/docs-2.json +++ b/models/apis/cleanrooms/2022-02-17/docs-2.json @@ -379,10 +379,12 @@ "Boolean": { "base": null, "refs": { - "DataEncryptionMetadata$allowCleartext": "

Indicates whether encrypted tables can contain cleartext data (true) or are to cryptographically process every column (false).

", - "DataEncryptionMetadata$allowDuplicates": "

Indicates whether Fingerprint columns can contain duplicate entries (true) or are to contain only non-repeated values (false).

", - "DataEncryptionMetadata$allowJoinsOnColumnsWithDifferentNames": "

Indicates whether Fingerprint columns can be joined on any other Fingerprint column with a different name (true) or can only be joined on Fingerprint columns of the same name (false).

", - "DataEncryptionMetadata$preserveNulls": "

Indicates whether NULL values are to be copied as NULL to encrypted tables (true) or cryptographically processed (false).

" + "DataEncryptionMetadata$allowCleartext": "

Indicates whether encrypted tables can contain cleartext data (TRUE) or are to cryptographically process every column (FALSE).

", + "DataEncryptionMetadata$allowDuplicates": "

Indicates whether Fingerprint columns can contain duplicate entries (TRUE) or are to contain only non-repeated values (FALSE).

", + "DataEncryptionMetadata$allowJoinsOnColumnsWithDifferentNames": "

Indicates whether Fingerprint columns can be joined on any other Fingerprint column with a different name (TRUE) or can only be joined on Fingerprint columns of the same name (FALSE).

", + "DataEncryptionMetadata$preserveNulls": "

Indicates whether NULL values are to be copied as NULL to encrypted tables (TRUE) or cryptographically processed (FALSE).

", + "MembershipQueryComputePaymentConfig$isResponsible": "

Indicates whether the collaboration member has accepted to pay for query compute costs (TRUE) or has not accepted to pay for query compute costs (FALSE).

If the collaboration creator has not specified anyone to pay for query compute costs, then the member who can query is the default payer.

An error message is returned for the following reasons:

", + "QueryComputePaymentConfig$isResponsible": "

Indicates whether the collaboration creator has configured the collaboration member to pay for query compute costs (TRUE) or has not configured the collaboration member to pay for query compute costs (FALSE).

Exactly one member can be configured to pay for query compute costs. An error is returned if the collaboration creator sets a TRUE value for more than one member in the collaboration.

If the collaboration creator hasn't specified anyone as the member paying for query compute costs, then the member who can query is the default payer. An error is returned if the collaboration creator sets a FALSE value for the member who can query.
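As a rough sketch of how the new payment shapes fit together, the following creates a collaboration in which the creator, rather than the querying member, accepts query compute costs. It assumes the generated Go field names follow the SDK's usual capitalization of this model; the collaboration name, display names, account ID, and member abilities are placeholders rather than values taken from this release.

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cleanrooms"
)

func main() {
	svc := cleanrooms.New(session.Must(session.NewSession()))

	payer := &cleanrooms.PaymentConfiguration{
		QueryCompute: &cleanrooms.QueryComputePaymentConfig{
			IsResponsible: aws.Bool(true), // exactly one member may be responsible
		},
	}
	nonPayer := &cleanrooms.PaymentConfiguration{
		QueryCompute: &cleanrooms.QueryComputePaymentConfig{
			IsResponsible: aws.Bool(false),
		},
	}

	out, err := svc.CreateCollaboration(&cleanrooms.CreateCollaborationInput{
		Name:                        aws.String("example-collaboration"), // placeholder
		Description:                 aws.String("creator pays for query compute"),
		CreatorDisplayName:          aws.String("creator"),
		CreatorMemberAbilities:      aws.StringSlice([]string{"CAN_RECEIVE_RESULTS"}),
		QueryLogStatus:              aws.String("DISABLED"),
		CreatorPaymentConfiguration: payer,
		Members: []*cleanrooms.MemberSpecification{{
			AccountId:            aws.String("111122223333"), // placeholder account
			DisplayName:          aws.String("analyst"),
			MemberAbilities:      aws.StringSlice([]string{"CAN_QUERY"}),
			PaymentConfiguration: nonPayer,
		}},
	})
	if err != nil {
		fmt.Println("CreateCollaboration:", err)
		return
	}
	fmt.Println("collaboration ARN:", aws.StringValue(out.Collaboration.Arn))
}
```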

" } }, "CleanroomsArn": { @@ -1120,7 +1122,7 @@ "refs": { "Collaboration$memberStatus": "

The status of a member in a collaboration.

", "CollaborationSummary$memberStatus": "

The status of a member in a collaboration.

", - "MemberSummary$status": "

The status of the member. Valid values are `INVITED`, `ACTIVE`, `LEFT`, and `REMOVED`.

" + "MemberSummary$status": "

The status of the member.

" } }, "MemberSummary": { @@ -1182,6 +1184,14 @@ "UpdateProtectedQueryInput$membershipIdentifier": "

The identifier for a member of a protected query instance.

" } }, + "MembershipPaymentConfiguration": { + "base": "

An object representing the payment responsibilities accepted by the collaboration member.

", + "refs": { + "CreateMembershipInput$paymentConfiguration": "

The payment responsibilities accepted by the collaboration member.

Not required if the collaboration member has the member ability to run queries.

Required if the collaboration member doesn't have the member ability to run queries but is configured as a payer by the collaboration creator.
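On the invited member's side, a similarly hedged sketch of accepting that payment responsibility when creating the membership; the collaboration identifier and query log status are placeholders, and the field casing again assumes the usual codegen.

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cleanrooms"
)

func main() {
	svc := cleanrooms.New(session.Must(session.NewSession()))

	out, err := svc.CreateMembership(&cleanrooms.CreateMembershipInput{
		CollaborationIdentifier: aws.String("00000000-0000-0000-0000-000000000000"), // placeholder
		QueryLogStatus:          aws.String("DISABLED"),
		// Accept responsibility for query compute costs, as configured by the creator.
		PaymentConfiguration: &cleanrooms.MembershipPaymentConfiguration{
			QueryCompute: &cleanrooms.MembershipQueryComputePaymentConfig{
				IsResponsible: aws.Bool(true),
			},
		},
	})
	if err != nil {
		fmt.Println("CreateMembership:", err)
		return
	}
	fmt.Println("membership ARN:", aws.StringValue(out.Membership.Arn))
}
```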

", + "Membership$paymentConfiguration": "

The payment responsibilities accepted by the collaboration member.

", + "MembershipSummary$paymentConfiguration": "

The payment responsibilities accepted by the collaboration member.

" + } + }, "MembershipProtectedQueryOutputConfiguration": { "base": "

Contains configurations for protected query results.

", "refs": { @@ -1196,20 +1206,26 @@ "UpdateMembershipInput$defaultResultConfiguration": "

The default protected query result configuration as specified by the member who can receive results.

" } }, + "MembershipQueryComputePaymentConfig": { + "base": "

An object representing the payment responsibilities accepted by the collaboration member for query compute costs.

", + "refs": { + "MembershipPaymentConfiguration$queryCompute": "

The payment responsibilities accepted by the collaboration member for query compute costs.

" + } + }, "MembershipQueryLogStatus": { "base": null, "refs": { - "CreateMembershipInput$queryLogStatus": "

An indicator as to whether query logging has been enabled or disabled for the collaboration.

", - "Membership$queryLogStatus": "

An indicator as to whether query logging has been enabled or disabled for the collaboration.

", - "UpdateMembershipInput$queryLogStatus": "

An indicator as to whether query logging has been enabled or disabled for the collaboration.

" + "CreateMembershipInput$queryLogStatus": "

An indicator as to whether query logging has been enabled or disabled for the membership.

", + "Membership$queryLogStatus": "

An indicator as to whether query logging has been enabled or disabled for the membership.

", + "UpdateMembershipInput$queryLogStatus": "

An indicator as to whether query logging has been enabled or disabled for the membership.

" } }, "MembershipStatus": { "base": null, "refs": { "ListMembershipsInput$status": "

A filter which will return only memberships in the specified status.

", - "Membership$status": "

The status of the membership. Valid values are `ACTIVE`, `REMOVED`, and `COLLABORATION_DELETED`.

", - "MembershipSummary$status": "

The status of the membership. Valid values are `ACTIVE`, `REMOVED`, and `COLLABORATION_DELETED`.

" + "Membership$status": "

The status of the membership.

", + "MembershipSummary$status": "

The status of the membership.

" } }, "MembershipSummary": { @@ -1273,6 +1289,14 @@ "ParameterMap$value": null } }, + "PaymentConfiguration": { + "base": "

An object representing the collaboration member's payment responsibilities set by the collaboration creator.

", + "refs": { + "CreateCollaborationInput$creatorPaymentConfiguration": "

The collaboration creator's payment responsibilities set by the collaboration creator.

If the collaboration creator hasn't specified anyone as the member paying for query compute costs, then the member who can query is the default payer.

", + "MemberSpecification$paymentConfiguration": "

The collaboration member's payment responsibilities set by the collaboration creator.

If the collaboration creator hasn't specified anyone as the member paying for query compute costs, then the member who can query is the default payer.

", + "MemberSummary$paymentConfiguration": "

The collaboration member's payment responsibilities set by the collaboration creator.

" + } + }, "ProtectedQuery": { "base": "

The parameters for a Clean Rooms protected query.

", "refs": { @@ -1395,6 +1419,12 @@ "StartProtectedQueryInput$type": "

The type of the protected query to be started.

" } }, + "QueryComputePaymentConfig": { + "base": "

An object representing the collaboration member's payment responsibilities set by the collaboration creator for query compute costs.

", + "refs": { + "PaymentConfiguration$queryCompute": "

The collaboration member's payment responsibilities set by the collaboration creator for query compute costs.

" + } + }, "QueryTables": { "base": null, "refs": { diff --git a/models/apis/cleanrooms/2022-02-17/endpoint-rule-set-1.json b/models/apis/cleanrooms/2022-02-17/endpoint-rule-set-1.json index 5ff0ca71b0f..3a514d8a54d 100644 --- a/models/apis/cleanrooms/2022-02-17/endpoint-rule-set-1.json +++ b/models/apis/cleanrooms/2022-02-17/endpoint-rule-set-1.json @@ -40,7 +40,6 @@ ] } ], - "type": "tree", "rules": [ { "conditions": [ @@ -59,7 +58,6 @@ }, { "conditions": [], - "type": "tree", "rules": [ { "conditions": [ @@ -87,13 +85,14 @@ }, "type": "endpoint" } - ] + ], + "type": "tree" } - ] + ], + "type": "tree" }, { "conditions": [], - "type": "tree", "rules": [ { "conditions": [ @@ -106,7 +105,6 @@ ] } ], - "type": "tree", "rules": [ { "conditions": [ @@ -120,7 +118,6 @@ "assign": "PartitionResult" } ], - "type": "tree", "rules": [ { "conditions": [ @@ -143,7 +140,6 @@ ] } ], - "type": "tree", "rules": [ { "conditions": [ @@ -178,11 +174,9 @@ ] } ], - "type": "tree", "rules": [ { "conditions": [], - "type": "tree", "rules": [ { "conditions": [], @@ -193,16 +187,19 @@ }, "type": "endpoint" } - ] + ], + "type": "tree" } - ] + ], + "type": "tree" }, { "conditions": [], "error": "FIPS and DualStack are enabled, but this partition does not support one or both", "type": "error" } - ] + ], + "type": "tree" }, { "conditions": [ @@ -216,14 +213,12 @@ ] } ], - "type": "tree", "rules": [ { "conditions": [ { "fn": "booleanEquals", "argv": [ - true, { "fn": "getAttr", "argv": [ @@ -232,15 +227,14 @@ }, "supportsFIPS" ] - } + }, + true ] } ], - "type": "tree", "rules": [ { "conditions": [], - "type": "tree", "rules": [ { "conditions": [], @@ -251,16 +245,19 @@ }, "type": "endpoint" } - ] + ], + "type": "tree" } - ] + ], + "type": "tree" }, { "conditions": [], "error": "FIPS is enabled but this partition does not support FIPS", "type": "error" } - ] + ], + "type": "tree" }, { "conditions": [ @@ -274,7 +271,6 @@ ] } ], - "type": "tree", "rules": [ { "conditions": [ @@ -294,11 +290,9 @@ ] } ], - "type": "tree", "rules": [ { "conditions": [], - "type": "tree", "rules": [ { "conditions": [], @@ -309,20 +303,22 @@ }, "type": "endpoint" } - ] + ], + "type": "tree" } - ] + ], + "type": "tree" }, { "conditions": [], "error": "DualStack is enabled but this partition does not support DualStack", "type": "error" } - ] + ], + "type": "tree" }, { "conditions": [], - "type": "tree", "rules": [ { "conditions": [], @@ -333,18 +329,22 @@ }, "type": "endpoint" } - ] + ], + "type": "tree" } - ] + ], + "type": "tree" } - ] + ], + "type": "tree" }, { "conditions": [], "error": "Invalid Configuration: Missing Region", "type": "error" } - ] + ], + "type": "tree" } ] } \ No newline at end of file diff --git a/models/apis/connect/2017-08-08/api-2.json b/models/apis/connect/2017-08-08/api-2.json index c0cdbc9b398..6e3ad19ed19 100644 --- a/models/apis/connect/2017-08-08/api-2.json +++ b/models/apis/connect/2017-08-08/api-2.json @@ -11305,6 +11305,27 @@ "type":"string", "sensitive":true }, + "SegmentAttributeName":{ + "type":"string", + "max":128, + "min":1 + }, + "SegmentAttributeValue":{ + "type":"structure", + "members":{ + "ValueString":{"shape":"SegmentAttributeValueString"} + } + }, + "SegmentAttributeValueString":{ + "type":"string", + "max":1024, + "min":1 + }, + "SegmentAttributes":{ + "type":"map", + "key":{"shape":"SegmentAttributeName"}, + "value":{"shape":"SegmentAttributeValue"} + }, "SendNotificationActionDefinition":{ "type":"structure", "required":[ @@ -11421,7 +11442,8 @@ 
"ChatDurationInMinutes":{"shape":"ChatDurationInMinutes"}, "SupportedMessagingContentTypes":{"shape":"SupportedMessagingContentTypes"}, "PersistentChat":{"shape":"PersistentChat"}, - "RelatedContactId":{"shape":"ContactId"} + "RelatedContactId":{"shape":"ContactId"}, + "SegmentAttributes":{"shape":"SegmentAttributes"} } }, "StartChatContactResponse":{ diff --git a/models/apis/connect/2017-08-08/docs-2.json b/models/apis/connect/2017-08-08/docs-2.json index 9390148e982..9822da3f8a4 100644 --- a/models/apis/connect/2017-08-08/docs-2.json +++ b/models/apis/connect/2017-08-08/docs-2.json @@ -5712,6 +5712,30 @@ "Credentials$RefreshToken": "

Renews a token generated for a user to access the Amazon Connect instance.

" } }, + "SegmentAttributeName": { + "base": null, + "refs": { + "SegmentAttributes$key": null + } + }, + "SegmentAttributeValue": { + "base": "

A value for a segment attribute. This is structured as a map where the key is valueString and the value is a string.

", + "refs": { + "SegmentAttributes$value": null + } + }, + "SegmentAttributeValueString": { + "base": null, + "refs": { + "SegmentAttributeValue$ValueString": "

The value of a segment attribute.

" + } + }, + "SegmentAttributes": { + "base": null, + "refs": { + "StartChatContactRequest$SegmentAttributes": "

A set of system-defined key-value pairs stored on individual contact segments using an attribute map. The attributes are standard Amazon Connect attributes. They can be accessed in flows.

Attribute keys can include only alphanumeric characters, -, and _.

This field can be used to show channel subtype, such as connect:Guide.

The types application/vnd.amazonaws.connect.message.interactive and application/vnd.amazonaws.connect.message.interactive.response must be present in the SupportedMessagingContentTypes field of this API in order to set SegmentAttributes as { \"connect:Subtype\": {\"valueString\" : \"connect:Guide\" }}.
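A hedged sketch of starting a chat with the new SegmentAttributes field, following the connect:Guide example above. The instance ID, contact flow ID, and display name are placeholders, and the Go field names assume the SDK's usual capitalization of this model.

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/connect"
)

func main() {
	svc := connect.New(session.Must(session.NewSession()))

	out, err := svc.StartChatContact(&connect.StartChatContactInput{
		InstanceId:    aws.String("instance-id"),     // placeholder
		ContactFlowId: aws.String("contact-flow-id"), // placeholder
		ParticipantDetails: &connect.ParticipantDetails{
			DisplayName: aws.String("Customer"), // placeholder
		},
		// Both interactive content types must be listed to use connect:Guide.
		SupportedMessagingContentTypes: aws.StringSlice([]string{
			"text/plain",
			"application/vnd.amazonaws.connect.message.interactive",
			"application/vnd.amazonaws.connect.message.interactive.response",
		}),
		SegmentAttributes: map[string]*connect.SegmentAttributeValue{
			"connect:Subtype": {ValueString: aws.String("connect:Guide")},
		},
	})
	if err != nil {
		fmt.Println("StartChatContact:", err)
		return
	}
	fmt.Println("contact ID:", aws.StringValue(out.ContactId))
}
```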

" + } + }, "SendNotificationActionDefinition": { "base": "

Information about the send notification action.

", "refs": { diff --git a/models/apis/glue/2017-03-31/api-2.json b/models/apis/glue/2017-03-31/api-2.json index 8e6d36da4ef..92189140d9e 100644 --- a/models/apis/glue/2017-03-31/api-2.json +++ b/models/apis/glue/2017-03-31/api-2.json @@ -193,6 +193,18 @@ {"shape":"FederationSourceRetryableException"} ] }, + "BatchGetTableOptimizer":{ + "name":"BatchGetTableOptimizer", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"BatchGetTableOptimizerRequest"}, + "output":{"shape":"BatchGetTableOptimizerResponse"}, + "errors":[ + {"shape":"InternalServiceException"} + ] + }, "BatchGetTriggers":{ "name":"BatchGetTriggers", "http":{ @@ -637,6 +649,22 @@ {"shape":"ResourceNotReadyException"} ] }, + "CreateTableOptimizer":{ + "name":"CreateTableOptimizer", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"CreateTableOptimizerRequest"}, + "output":{"shape":"CreateTableOptimizerResponse"}, + "errors":[ + {"shape":"EntityNotFoundException"}, + {"shape":"InvalidInputException"}, + {"shape":"AccessDeniedException"}, + {"shape":"AlreadyExistsException"}, + {"shape":"InternalServiceException"} + ] + }, "CreateTrigger":{ "name":"CreateTrigger", "http":{ @@ -1011,6 +1039,21 @@ {"shape":"ResourceNotReadyException"} ] }, + "DeleteTableOptimizer":{ + "name":"DeleteTableOptimizer", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DeleteTableOptimizerRequest"}, + "output":{"shape":"DeleteTableOptimizerResponse"}, + "errors":[ + {"shape":"EntityNotFoundException"}, + {"shape":"InvalidInputException"}, + {"shape":"AccessDeniedException"}, + {"shape":"InternalServiceException"} + ] + }, "DeleteTableVersion":{ "name":"DeleteTableVersion", "http":{ @@ -1824,6 +1867,21 @@ {"shape":"FederationSourceRetryableException"} ] }, + "GetTableOptimizer":{ + "name":"GetTableOptimizer", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"GetTableOptimizerRequest"}, + "output":{"shape":"GetTableOptimizerResponse"}, + "errors":[ + {"shape":"EntityNotFoundException"}, + {"shape":"InvalidInputException"}, + {"shape":"AccessDeniedException"}, + {"shape":"InternalServiceException"} + ] + }, "GetTableVersion":{ "name":"GetTableVersion", "http":{ @@ -2313,6 +2371,21 @@ {"shape":"IllegalSessionStateException"} ] }, + "ListTableOptimizerRuns":{ + "name":"ListTableOptimizerRuns", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"ListTableOptimizerRunsRequest"}, + "output":{"shape":"ListTableOptimizerRunsResponse"}, + "errors":[ + {"shape":"EntityNotFoundException"}, + {"shape":"AccessDeniedException"}, + {"shape":"InvalidInputException"}, + {"shape":"InternalServiceException"} + ] + }, "ListTriggers":{ "name":"ListTriggers", "http":{ @@ -3118,6 +3191,21 @@ {"shape":"ResourceNotReadyException"} ] }, + "UpdateTableOptimizer":{ + "name":"UpdateTableOptimizer", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"UpdateTableOptimizerRequest"}, + "output":{"shape":"UpdateTableOptimizerResponse"}, + "errors":[ + {"shape":"EntityNotFoundException"}, + {"shape":"InvalidInputException"}, + {"shape":"AccessDeniedException"}, + {"shape":"InternalServiceException"} + ] + }, "UpdateTrigger":{ "name":"UpdateTrigger", "http":{ @@ -3338,6 +3426,11 @@ "Mapping":{"shape":"Mappings"} } }, + "ArnString":{ + "type":"string", + "max":2048, + "min":20 + }, "AthenaConnectorSource":{ "type":"structure", "required":[ @@ -3653,6 +3746,47 @@ "max":1000, "min":0 }, + "BatchGetTableOptimizerEntries":{ + "type":"list", + 
"member":{"shape":"BatchGetTableOptimizerEntry"} + }, + "BatchGetTableOptimizerEntry":{ + "type":"structure", + "members":{ + "catalogId":{"shape":"CatalogIdString"}, + "databaseName":{"shape":"databaseNameString"}, + "tableName":{"shape":"tableNameString"}, + "type":{"shape":"TableOptimizerType"} + } + }, + "BatchGetTableOptimizerError":{ + "type":"structure", + "members":{ + "error":{"shape":"ErrorDetail"}, + "catalogId":{"shape":"CatalogIdString"}, + "databaseName":{"shape":"databaseNameString"}, + "tableName":{"shape":"tableNameString"}, + "type":{"shape":"TableOptimizerType"} + } + }, + "BatchGetTableOptimizerErrors":{ + "type":"list", + "member":{"shape":"BatchGetTableOptimizerError"} + }, + "BatchGetTableOptimizerRequest":{ + "type":"structure", + "required":["Entries"], + "members":{ + "Entries":{"shape":"BatchGetTableOptimizerEntries"} + } + }, + "BatchGetTableOptimizerResponse":{ + "type":"structure", + "members":{ + "TableOptimizers":{"shape":"BatchTableOptimizers"}, + "Failures":{"shape":"BatchGetTableOptimizerErrors"} + } + }, "BatchGetTriggersRequest":{ "type":"structure", "required":["TriggerNames"], @@ -3734,6 +3868,19 @@ "type":"list", "member":{"shape":"BatchStopJobRunSuccessfulSubmission"} }, + "BatchTableOptimizer":{ + "type":"structure", + "members":{ + "catalogId":{"shape":"CatalogIdString"}, + "databaseName":{"shape":"databaseNameString"}, + "tableName":{"shape":"tableNameString"}, + "tableOptimizer":{"shape":"TableOptimizer"} + } + }, + "BatchTableOptimizers":{ + "type":"list", + "member":{"shape":"BatchTableOptimizer"} + }, "BatchUpdatePartitionFailureEntry":{ "type":"structure", "members":{ @@ -5326,6 +5473,28 @@ "Session":{"shape":"Session"} } }, + "CreateTableOptimizerRequest":{ + "type":"structure", + "required":[ + "CatalogId", + "DatabaseName", + "TableName", + "Type", + "TableOptimizerConfiguration" + ], + "members":{ + "CatalogId":{"shape":"CatalogIdString"}, + "DatabaseName":{"shape":"NameString"}, + "TableName":{"shape":"NameString"}, + "Type":{"shape":"TableOptimizerType"}, + "TableOptimizerConfiguration":{"shape":"TableOptimizerConfiguration"} + } + }, + "CreateTableOptimizerResponse":{ + "type":"structure", + "members":{ + } + }, "CreateTableRequest":{ "type":"structure", "required":[ @@ -6178,6 +6347,26 @@ "Id":{"shape":"NameString"} } }, + "DeleteTableOptimizerRequest":{ + "type":"structure", + "required":[ + "CatalogId", + "DatabaseName", + "TableName", + "Type" + ], + "members":{ + "CatalogId":{"shape":"CatalogIdString"}, + "DatabaseName":{"shape":"NameString"}, + "TableName":{"shape":"NameString"}, + "Type":{"shape":"TableOptimizerType"} + } + }, + "DeleteTableOptimizerResponse":{ + "type":"structure", + "members":{ + } + }, "DeleteTableRequest":{ "type":"structure", "required":[ @@ -7772,6 +7961,30 @@ "Statement":{"shape":"Statement"} } }, + "GetTableOptimizerRequest":{ + "type":"structure", + "required":[ + "CatalogId", + "DatabaseName", + "TableName", + "Type" + ], + "members":{ + "CatalogId":{"shape":"CatalogIdString"}, + "DatabaseName":{"shape":"NameString"}, + "TableName":{"shape":"NameString"}, + "Type":{"shape":"TableOptimizerType"} + } + }, + "GetTableOptimizerResponse":{ + "type":"structure", + "members":{ + "CatalogId":{"shape":"CatalogIdString"}, + "DatabaseName":{"shape":"NameString"}, + "TableName":{"shape":"NameString"}, + "TableOptimizer":{"shape":"TableOptimizer"} + } + }, "GetTableRequest":{ "type":"structure", "required":[ @@ -9123,6 +9336,34 @@ "NextToken":{"shape":"OrchestrationToken"} } }, + "ListTableOptimizerRunsRequest":{ 
+ "type":"structure", + "required":[ + "CatalogId", + "DatabaseName", + "TableName", + "Type" + ], + "members":{ + "CatalogId":{"shape":"CatalogIdString"}, + "DatabaseName":{"shape":"NameString"}, + "TableName":{"shape":"NameString"}, + "Type":{"shape":"TableOptimizerType"}, + "MaxResults":{"shape":"MaxListTableOptimizerRunsTokenResults"}, + "NextToken":{"shape":"ListTableOptimizerRunsToken"} + } + }, + "ListTableOptimizerRunsResponse":{ + "type":"structure", + "members":{ + "CatalogId":{"shape":"CatalogIdString"}, + "DatabaseName":{"shape":"NameString"}, + "TableName":{"shape":"NameString"}, + "NextToken":{"shape":"ListTableOptimizerRunsToken"}, + "TableOptimizerRuns":{"shape":"TableOptimizerRuns"} + } + }, + "ListTableOptimizerRunsToken":{"type":"string"}, "ListTriggersRequest":{ "type":"structure", "members":{ @@ -9314,6 +9555,7 @@ "min":0 }, "MaxConcurrentRuns":{"type":"integer"}, + "MaxListTableOptimizerRunsTokenResults":{"type":"integer"}, "MaxResultsNumber":{ "type":"integer", "box":true, @@ -10400,6 +10642,15 @@ "min":1 }, "RunId":{"type":"string"}, + "RunMetrics":{ + "type":"structure", + "members":{ + "NumberOfBytesCompacted":{"shape":"MessageString"}, + "NumberOfFilesCompacted":{"shape":"MessageString"}, + "NumberOfDpus":{"shape":"MessageString"}, + "JobDurationInHour":{"shape":"MessageString"} + } + }, "RunStatementRequest":{ "type":"structure", "required":[ @@ -11734,6 +11985,49 @@ "member":{"shape":"Table"} }, "TableName":{"type":"string"}, + "TableOptimizer":{ + "type":"structure", + "members":{ + "type":{"shape":"TableOptimizerType"}, + "configuration":{"shape":"TableOptimizerConfiguration"}, + "lastRun":{"shape":"TableOptimizerRun"} + } + }, + "TableOptimizerConfiguration":{ + "type":"structure", + "members":{ + "roleArn":{"shape":"ArnString"}, + "enabled":{"shape":"NullableBoolean"} + } + }, + "TableOptimizerEventType":{ + "type":"string", + "enum":[ + "starting", + "completed", + "failed", + "in_progress" + ] + }, + "TableOptimizerRun":{ + "type":"structure", + "members":{ + "eventType":{"shape":"TableOptimizerEventType"}, + "startTimestamp":{"shape":"TableOptimizerRunTimestamp"}, + "endTimestamp":{"shape":"TableOptimizerRunTimestamp"}, + "metrics":{"shape":"RunMetrics"}, + "error":{"shape":"MessageString"} + } + }, + "TableOptimizerRunTimestamp":{"type":"timestamp"}, + "TableOptimizerRuns":{ + "type":"list", + "member":{"shape":"TableOptimizerRun"} + }, + "TableOptimizerType":{ + "type":"string", + "enum":["compaction"] + }, "TablePrefix":{ "type":"string", "max":128, @@ -12513,6 +12807,28 @@ "JobName":{"shape":"NameString"} } }, + "UpdateTableOptimizerRequest":{ + "type":"structure", + "required":[ + "CatalogId", + "DatabaseName", + "TableName", + "Type", + "TableOptimizerConfiguration" + ], + "members":{ + "CatalogId":{"shape":"CatalogIdString"}, + "DatabaseName":{"shape":"NameString"}, + "TableName":{"shape":"NameString"}, + "Type":{"shape":"TableOptimizerType"}, + "TableOptimizerConfiguration":{"shape":"TableOptimizerConfiguration"} + } + }, + "UpdateTableOptimizerResponse":{ + "type":"structure", + "members":{ + } + }, "UpdateTableRequest":{ "type":"structure", "required":[ @@ -12784,6 +13100,14 @@ "Version":{"shape":"VersionId"}, "RowTag":{"shape":"RowTag"} } + }, + "databaseNameString":{ + "type":"string", + "min":1 + }, + "tableNameString":{ + "type":"string", + "min":1 } } } diff --git a/models/apis/glue/2017-03-31/docs-2.json b/models/apis/glue/2017-03-31/docs-2.json index d90252e1bb6..2706c6a4de7 100644 --- a/models/apis/glue/2017-03-31/docs-2.json +++ 
b/models/apis/glue/2017-03-31/docs-2.json @@ -14,6 +14,7 @@ "BatchGetDevEndpoints": "

Returns a list of resource metadata for a given list of development endpoint names. After calling the ListDevEndpoints operation, you can call this operation to access the data to which you have been granted permissions. This operation supports all IAM permissions, including permission conditions that use tags.

", "BatchGetJobs": "

Returns a list of resource metadata for a given list of job names. After calling the ListJobs operation, you can call this operation to access the data to which you have been granted permissions. This operation supports all IAM permissions, including permission conditions that use tags.

", "BatchGetPartition": "

Retrieves partitions in a batch request.

", + "BatchGetTableOptimizer": "

Returns the configuration for the specified table optimizers.

", "BatchGetTriggers": "

Returns a list of resource metadata for a given list of trigger names. After calling the ListTriggers operation, you can call this operation to access the data to which you have been granted permissions. This operation supports all IAM permissions, including permission conditions that use tags.

", "BatchGetWorkflows": "

Returns a list of resource metadata for a given list of workflow names. After calling the ListWorkflows operation, you can call this operation to access the data to which you have been granted permissions. This operation supports all IAM permissions, including permission conditions that use tags.

", "BatchStopJobRun": "

Stops one or more job runs for a specified job definition.

", @@ -41,6 +42,7 @@ "CreateSecurityConfiguration": "

Creates a new security configuration. A security configuration is a set of security properties that can be used by Glue. You can use a security configuration to encrypt data at rest. For information about using security configurations in Glue, see Encrypting Data Written by Crawlers, Jobs, and Development Endpoints.

", "CreateSession": "

Creates a new session.

", "CreateTable": "

Creates a new table definition in the Data Catalog.

", + "CreateTableOptimizer": "

Creates a new table optimizer for a specific function. compaction is the only currently supported optimizer type.
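A minimal sketch of enabling automatic compaction on an Apache Iceberg table through the new CreateTableOptimizer operation, assuming the generated Go input type follows the SDK's usual naming for this model; the catalog ID, database and table names, and role ARN are placeholders.

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/glue"
)

func main() {
	svc := glue.New(session.Must(session.NewSession()))

	_, err := svc.CreateTableOptimizer(&glue.CreateTableOptimizerInput{
		CatalogId:    aws.String("123456789012"),  // placeholder account/catalog ID
		DatabaseName: aws.String("iceberg_db"),    // placeholder
		TableName:    aws.String("iceberg_table"), // placeholder
		Type:         aws.String("compaction"),    // only supported optimizer type
		TableOptimizerConfiguration: &glue.TableOptimizerConfiguration{
			// Role the service uses to compact the table on the caller's behalf.
			RoleArn: aws.String("arn:aws:iam::123456789012:role/GlueCompactionRole"), // placeholder
			Enabled: aws.Bool(true),
		},
	})
	if err != nil {
		fmt.Println("CreateTableOptimizer:", err)
		return
	}
	fmt.Println("compaction optimizer enabled")
}
```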

", "CreateTrigger": "

Creates a new trigger.

", "CreateUserDefinedFunction": "

Creates a new function definition in the Data Catalog.

", "CreateWorkflow": "

Creates a new workflow.

", @@ -65,6 +67,7 @@ "DeleteSecurityConfiguration": "

Deletes a specified security configuration.

", "DeleteSession": "

Deletes the session.

", "DeleteTable": "

Removes a table definition from the Data Catalog.

After completing this operation, you no longer have access to the table versions and partitions that belong to the deleted table. Glue deletes these \"orphaned\" resources asynchronously in a timely manner, at the discretion of the service.

To ensure the immediate deletion of all related resources, before calling DeleteTable, use DeleteTableVersion or BatchDeleteTableVersion, and DeletePartition or BatchDeletePartition, to delete any resources that belong to the table.

", + "DeleteTableOptimizer": "

Deletes an optimizer and all associated metadata for a table. The optimization will no longer be performed on the table.

", "DeleteTableVersion": "

Deletes a specified version of a table.

", "DeleteTrigger": "

Deletes a specified trigger. If the trigger is not found, no exception is thrown.

", "DeleteUserDefinedFunction": "

Deletes an existing function definition from the Data Catalog.

", @@ -119,6 +122,7 @@ "GetSession": "

Retrieves the session.

", "GetStatement": "

Retrieves the statement.

", "GetTable": "

Retrieves the Table definition in a Data Catalog for a specified table.

", + "GetTableOptimizer": "

Returns the configuration of all optimizers associated with a specified table.

", "GetTableVersion": "

Retrieves a specified version of a table.

", "GetTableVersions": "

Retrieves a list of strings that identify available versions of a specified table.

", "GetTables": "

Retrieves the definitions of some or all of the tables in a given Database.

", @@ -151,6 +155,7 @@ "ListSchemas": "

Returns a list of schemas with minimal details. Schemas in Deleting status will not be included in the results. Empty results will be returned if there are no schemas available.

When the RegistryId is not provided, all the schemas across registries will be part of the API response.

", "ListSessions": "

Retrieve a list of sessions.

", "ListStatements": "

Lists statements for the session.

", + "ListTableOptimizerRuns": "

Lists the history of previous optimizer runs for a specific table.
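A hedged sketch of paging through a table's compaction history with ListTableOptimizerRuns and its NextToken, again assuming the SDK's usual naming for the generated shapes; identifiers are placeholders.

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/glue"
)

func main() {
	svc := glue.New(session.Must(session.NewSession()))

	input := &glue.ListTableOptimizerRunsInput{
		CatalogId:    aws.String("123456789012"),  // placeholder
		DatabaseName: aws.String("iceberg_db"),    // placeholder
		TableName:    aws.String("iceberg_table"), // placeholder
		Type:         aws.String("compaction"),
	}
	for {
		out, err := svc.ListTableOptimizerRuns(input)
		if err != nil {
			fmt.Println("ListTableOptimizerRuns:", err)
			return
		}
		for _, run := range out.TableOptimizerRuns {
			fmt.Println(aws.StringValue(run.EventType),
				aws.TimeValue(run.StartTimestamp), aws.TimeValue(run.EndTimestamp))
		}
		if out.NextToken == nil {
			break // no more pages
		}
		input.NextToken = out.NextToken
	}
}
```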

", "ListTriggers": "

Retrieves the names of all trigger resources in this Amazon Web Services account, or the resources with the specified tag. This operation allows you to see which resources are available in your account, and their names.

This operation takes the optional Tags field, which you can use as a filter on the response so that tagged resources can be retrieved as a group. If you choose to use tags filtering, only resources with the tag are retrieved.

", "ListWorkflows": "

Lists names of workflows created in the account.

", "PutDataCatalogEncryptionSettings": "

Sets the security configuration for a specified catalog. After the configuration has been set, the specified encryption is applied to every catalog write thereafter.

", @@ -167,7 +172,7 @@ "StartBlueprintRun": "

Starts a new run of the specified blueprint.

", "StartCrawler": "

Starts a crawl using the specified crawler, regardless of what is scheduled. If the crawler is already running, returns a CrawlerRunningException.

", "StartCrawlerSchedule": "

Changes the schedule state of the specified crawler to SCHEDULED, unless the crawler is already running or the schedule state is already SCHEDULED.

", - "StartDataQualityRuleRecommendationRun": "

Starts a recommendation run that is used to generate rules when you don't know what rules to write. Glue Data Quality analyzes the data and comes up with recommendations for a potential ruleset. You can then triage the ruleset and modify the generated ruleset to your liking.

", + "StartDataQualityRuleRecommendationRun": "

Starts a recommendation run that is used to generate rules when you don't know what rules to write. Glue Data Quality analyzes the data and comes up with recommendations for a potential ruleset. You can then triage the ruleset and modify the generated ruleset to your liking.

Recommendation runs are automatically deleted after 90 days.

", "StartDataQualityRulesetEvaluationRun": "

Once you have a ruleset definition (either recommended or your own), you call this operation to evaluate the ruleset against a data source (Glue table). The evaluation computes results which you can retrieve with the GetDataQualityResult API.

", "StartExportLabelsTaskRun": "

Begins an asynchronous task to export all labeled data for a particular transform. This task is the only label-related API call that is not part of the typical active learning workflow. You typically use StartExportLabelsTaskRun when you want to work with all of your existing labels at the same time, such as when you want to remove or change labels that were previously submitted as truth. This API operation accepts the TransformId whose labels you want to export and an Amazon Simple Storage Service (Amazon S3) path to export the labels to. The operation returns a TaskRunId. You can check on the status of your task run by calling the GetMLTaskRun API.

", "StartImportLabelsTaskRun": "

Enables you to provide additional labels (examples of truth) to be used to teach the machine learning transform and improve its quality. This API operation is generally used as part of the active learning workflow that starts with the StartMLLabelingSetGenerationTaskRun call and that ultimately results in improving the quality of your machine learning transform.

After the StartMLLabelingSetGenerationTaskRun finishes, Glue machine learning will have generated a series of questions for humans to answer. (Answering these questions is often called 'labeling' in the machine learning workflows). In the case of the FindMatches transform, these questions are of the form, “What is the correct way to group these rows together into groups composed entirely of matching records?” After the labeling process is finished, users upload their answers/labels with a call to StartImportLabelsTaskRun. After StartImportLabelsTaskRun finishes, all future runs of the machine learning transform use the new and improved labels and perform a higher-quality transformation.

By default, StartMLLabelingSetGenerationTaskRun continually learns from and combines all labels that you upload unless you set Replace to true. If you set Replace to true, StartImportLabelsTaskRun deletes and forgets all previously uploaded labels and learns only from the exact set that you upload. Replacing labels can be helpful if you realize that you previously uploaded incorrect labels, and you believe that they are having a negative effect on your transform quality.

You can check on the status of your task run by calling the GetMLTaskRun operation.

", @@ -201,6 +206,7 @@ "UpdateSchema": "

Updates the description, compatibility setting, or version checkpoint for a schema set.

For updating the compatibility setting, the call will not validate compatibility for the entire set of schema versions with the new compatibility setting. If the value for Compatibility is provided, the VersionNumber (a checkpoint) is also required. The API will validate the checkpoint version number for consistency.

If the value for the VersionNumber (checkpoint) is provided, Compatibility is optional and this can be used to set/reset a checkpoint for the schema.

This update will happen only if the schema is in the AVAILABLE state.

", "UpdateSourceControlFromJob": "

Synchronizes a job to the source control repository. This operation takes the job artifacts from the Glue internal stores and makes a commit to the remote repository that is configured on the job.

This API supports optional parameters which take in the repository information.

", "UpdateTable": "

Updates a metadata table in the Data Catalog.

", + "UpdateTableOptimizer": "

Updates the configuration for an existing table optimizer.

", "UpdateTrigger": "

Updates a trigger definition.

", "UpdateUserDefinedFunction": "

Updates an existing function definition in the Data Catalog.

", "UpdateWorkflow": "

Updates an existing workflow.

" @@ -328,6 +334,12 @@ "CodeGenConfigurationNode$ApplyMapping": "

Specifies a transform that maps data property keys in the data source to data property keys in the data target. You can rename keys, modify the data types for keys, and choose which keys to drop from the dataset.

" } }, + "ArnString": { + "base": null, + "refs": { + "TableOptimizerConfiguration$roleArn": "

A role passed by the caller which gives the service permission to update the resources associated with the optimizer on the caller's behalf.

" + } + }, "AthenaConnectorSource": { "base": "

Specifies a connector to an Amazon Athena data source.

", "refs": { @@ -548,6 +560,40 @@ "BatchGetPartitionResponse$UnprocessedKeys": "

A list of the partition values in the request for which partitions were not returned.

" } }, + "BatchGetTableOptimizerEntries": { + "base": null, + "refs": { + "BatchGetTableOptimizerRequest$Entries": "

A list of BatchGetTableOptimizerEntry objects specifying the table optimizers to retrieve.

" + } + }, + "BatchGetTableOptimizerEntry": { + "base": "

Represents a table optimizer to retrieve in the BatchGetTableOptimizer operation.

", + "refs": { + "BatchGetTableOptimizerEntries$member": null + } + }, + "BatchGetTableOptimizerError": { + "base": "

Contains details on one of the errors in the error list returned by the BatchGetTableOptimizer operation.

", + "refs": { + "BatchGetTableOptimizerErrors$member": null + } + }, + "BatchGetTableOptimizerErrors": { + "base": null, + "refs": { + "BatchGetTableOptimizerResponse$Failures": "

A list of errors from the operation.

" + } + }, + "BatchGetTableOptimizerRequest": { + "base": null, + "refs": { + } + }, + "BatchGetTableOptimizerResponse": { + "base": null, + "refs": { + } + }, "BatchGetTriggersRequest": { "base": null, "refs": { @@ -614,6 +660,18 @@ "BatchStopJobRunResponse$SuccessfulSubmissions": "

A list of the JobRuns that were successfully submitted for stopping.

" } }, + "BatchTableOptimizer": { + "base": "

Contains details for one of the table optimizers returned by the BatchGetTableOptimizer operation.

", + "refs": { + "BatchTableOptimizers$member": null + } + }, + "BatchTableOptimizers": { + "base": null, + "refs": { + "BatchGetTableOptimizerResponse$TableOptimizers": "

A list of BatchTableOptimizer objects.

" + } + }, "BatchUpdatePartitionFailureEntry": { "base": "

Contains information about a batch update partition error.

", "refs": { @@ -960,11 +1018,15 @@ "BatchDeleteTableRequest$CatalogId": "

The ID of the Data Catalog where the table resides. If none is provided, the Amazon Web Services account ID is used by default.

", "BatchDeleteTableVersionRequest$CatalogId": "

The ID of the Data Catalog where the tables reside. If none is provided, the Amazon Web Services account ID is used by default.

", "BatchGetPartitionRequest$CatalogId": "

The ID of the Data Catalog where the partitions in question reside. If none is supplied, the Amazon Web Services account ID is used by default.

", + "BatchGetTableOptimizerEntry$catalogId": "

The Catalog ID of the table.

", + "BatchGetTableOptimizerError$catalogId": "

The Catalog ID of the table.

", + "BatchTableOptimizer$catalogId": "

The Catalog ID of the table.

", "BatchUpdatePartitionRequest$CatalogId": "

The ID of the catalog in which the partition is to be updated. Currently, this should be the Amazon Web Services account ID.

", "CreateConnectionRequest$CatalogId": "

The ID of the Data Catalog in which to create the connection. If none is provided, the Amazon Web Services account ID is used by default.

", "CreateDatabaseRequest$CatalogId": "

The ID of the Data Catalog in which to create the database. If none is provided, the Amazon Web Services account ID is used by default.

", "CreatePartitionIndexRequest$CatalogId": "

The catalog ID where the table resides.

", "CreatePartitionRequest$CatalogId": "

The Amazon Web Services account ID of the catalog in which the partition is to be created.

", + "CreateTableOptimizerRequest$CatalogId": "

The Catalog ID of the table.

", "CreateTableRequest$CatalogId": "

The ID of the Data Catalog in which to create the Table. If none is supplied, the Amazon Web Services account ID is used by default.

", "CreateUserDefinedFunctionRequest$CatalogId": "

The ID of the Data Catalog in which to create the function. If none is provided, the Amazon Web Services account ID is used by default.

", "Database$CatalogId": "

The ID of the Data Catalog in which the database resides.

", @@ -975,6 +1037,7 @@ "DeleteDatabaseRequest$CatalogId": "

The ID of the Data Catalog in which the database resides. If none is provided, the Amazon Web Services account ID is used by default.

", "DeletePartitionIndexRequest$CatalogId": "

The catalog ID where the table resides.

", "DeletePartitionRequest$CatalogId": "

The ID of the Data Catalog where the partition to be deleted resides. If none is provided, the Amazon Web Services account ID is used by default.

", + "DeleteTableOptimizerRequest$CatalogId": "

The Catalog ID of the table.

", "DeleteTableRequest$CatalogId": "

The ID of the Data Catalog where the table resides. If none is provided, the Amazon Web Services account ID is used by default.

", "DeleteTableVersionRequest$CatalogId": "

The ID of the Data Catalog where the tables reside. If none is provided, the Amazon Web Services account ID is used by default.

", "DeleteUserDefinedFunctionRequest$CatalogId": "

The ID of the Data Catalog where the function to be deleted is located. If none is supplied, the Amazon Web Services account ID is used by default.

", @@ -989,6 +1052,8 @@ "GetPartitionIndexesRequest$CatalogId": "

The catalog ID where the table resides.

", "GetPartitionRequest$CatalogId": "

The ID of the Data Catalog where the partition in question resides. If none is provided, the Amazon Web Services account ID is used by default.

", "GetPartitionsRequest$CatalogId": "

The ID of the Data Catalog where the partitions in question reside. If none is provided, the Amazon Web Services account ID is used by default.

", + "GetTableOptimizerRequest$CatalogId": "

The Catalog ID of the table.

", + "GetTableOptimizerResponse$CatalogId": "

The Catalog ID of the table.

", "GetTableRequest$CatalogId": "

The ID of the Data Catalog where the table resides. If none is provided, the Amazon Web Services account ID is used by default.

", "GetTableVersionRequest$CatalogId": "

The ID of the Data Catalog where the tables reside. If none is provided, the Amazon Web Services account ID is used by default.

", "GetTableVersionsRequest$CatalogId": "

The ID of the Data Catalog where the tables reside. If none is provided, the Amazon Web Services account ID is used by default.

", @@ -999,6 +1064,8 @@ "GetUserDefinedFunctionRequest$CatalogId": "

The ID of the Data Catalog where the function to be retrieved is located. If none is provided, the Amazon Web Services account ID is used by default.

", "GetUserDefinedFunctionsRequest$CatalogId": "

The ID of the Data Catalog where the functions to be retrieved are located. If none is provided, the Amazon Web Services account ID is used by default.

", "ImportCatalogToGlueRequest$CatalogId": "

The ID of the catalog to import. Currently, this should be the Amazon Web Services account ID.

", + "ListTableOptimizerRunsRequest$CatalogId": "

The Catalog ID of the table.

", + "ListTableOptimizerRunsResponse$CatalogId": "

The Catalog ID of the table.

", "Partition$CatalogId": "

The ID of the Data Catalog in which the partition resides.

", "PutDataCatalogEncryptionSettingsRequest$CatalogId": "

The ID of the Data Catalog to set the security configuration for. If none is provided, the Amazon Web Services account ID is used by default.

", "SearchTablesRequest$CatalogId": "

A unique identifier, consisting of account_id.

", @@ -1009,6 +1076,7 @@ "UpdateConnectionRequest$CatalogId": "

The ID of the Data Catalog in which the connection resides. If none is provided, the Amazon Web Services account ID is used by default.

", "UpdateDatabaseRequest$CatalogId": "

The ID of the Data Catalog in which the metadata database resides. If none is provided, the Amazon Web Services account ID is used by default.

", "UpdatePartitionRequest$CatalogId": "

The ID of the Data Catalog where the partition to be updated resides. If none is provided, the Amazon Web Services account ID is used by default.

", + "UpdateTableOptimizerRequest$CatalogId": "

The Catalog ID of the table.

", "UpdateTableRequest$CatalogId": "

The ID of the Data Catalog where the table resides. If none is provided, the Amazon Web Services account ID is used by default.

", "UpdateUserDefinedFunctionRequest$CatalogId": "

The ID of the Data Catalog where the function to be updated is located. If none is provided, the Amazon Web Services account ID is used by default.

", "UserDefinedFunction$CatalogId": "

The ID of the Data Catalog in which the function resides.

" @@ -1813,6 +1881,16 @@ "refs": { } }, + "CreateTableOptimizerRequest": { + "base": null, + "refs": { + } + }, + "CreateTableOptimizerResponse": { + "base": null, + "refs": { + } + }, "CreateTableRequest": { "base": null, "refs": { @@ -2498,6 +2576,16 @@ "refs": { } }, + "DeleteTableOptimizerRequest": { + "base": null, + "refs": { + } + }, + "DeleteTableOptimizerResponse": { + "base": null, + "refs": { + } + }, "DeleteTableRequest": { "base": null, "refs": { @@ -2994,6 +3082,7 @@ "ErrorDetail": { "base": "

Contains details about an error.

", "refs": { + "BatchGetTableOptimizerError$error": "

An ErrorDetail object containing code and message details about the error.

", "BatchStopJobRunError$ErrorDetail": "

Specifies details about the error that was encountered.

", "BatchUpdatePartitionFailureEntry$ErrorDetail": "

The details about the batch update partition error.

", "ColumnError$Error": "

An error message with the reason for the failure of an operation.

", @@ -3950,6 +4039,16 @@ "refs": { } }, + "GetTableOptimizerRequest": { + "base": null, + "refs": { + } + }, + "GetTableOptimizerResponse": { + "base": null, + "refs": { + } + }, "GetTableRequest": { "base": null, "refs": { @@ -5012,6 +5111,23 @@ "refs": { } }, + "ListTableOptimizerRunsRequest": { + "base": null, + "refs": { + } + }, + "ListTableOptimizerRunsResponse": { + "base": null, + "refs": { + } + }, + "ListTableOptimizerRunsToken": { + "base": null, + "refs": { + "ListTableOptimizerRunsRequest$NextToken": "

A continuation token, if this is a continuation call.

", + "ListTableOptimizerRunsResponse$NextToken": "

A continuation token for paginating the returned list of optimizer runs, returned if the current segment of the list is not the last.

" + } + }, "ListTriggersRequest": { "base": null, "refs": { @@ -5192,6 +5308,12 @@ "ExecutionProperty$MaxConcurrentRuns": "

The maximum number of concurrent runs allowed for the job. The default is 1. An error is returned when this threshold is reached. The maximum value you can specify is controlled by a service limit.

" } }, + "MaxListTableOptimizerRunsTokenResults": { + "base": null, + "refs": { + "ListTableOptimizerRunsRequest$MaxResults": "

The maximum number of optimizer runs to return on each call.

" + } + }, "MaxResultsNumber": { "base": null, "refs": { @@ -5253,9 +5375,14 @@ "PermissionTypeMismatchException$Message": "

There is a mismatch between the SupportedPermissionType used in the query request and the permissions defined on the target table.

", "ResourceNotReadyException$Message": "

A message describing the problem.

", "ResourceNumberLimitExceededException$Message": "

A message describing the problem.

", + "RunMetrics$NumberOfBytesCompacted": "

The number of bytes removed by the compaction job run.

", + "RunMetrics$NumberOfFilesCompacted": "

The number of files removed by the compaction job run.

", + "RunMetrics$NumberOfDpus": "

The number of DPU hours consumed by the job.

", + "RunMetrics$JobDurationInHour": "

The duration of the job in hours.

", "SchedulerNotRunningException$Message": "

A message describing the problem.

", "SchedulerRunningException$Message": "

A message describing the problem.

", "SchedulerTransitioningException$Message": "

A message describing the problem.

", + "TableOptimizerRun$error": "

An error that occurred during the optimizer run.

", "ValidationException$Message": "

A message describing the problem.

", "VersionMismatchException$Message": "

A message describing the problem.

" } @@ -5422,6 +5549,8 @@ "CreateSecurityConfigurationResponse$Name": "

The name assigned to the new security configuration.

", "CreateSessionRequest$Id": "

The ID of the session request.

", "CreateSessionRequest$SecurityConfiguration": "

The name of the SecurityConfiguration structure to be used with the session

", + "CreateTableOptimizerRequest$DatabaseName": "

The name of the database in the catalog in which the table resides.

", + "CreateTableOptimizerRequest$TableName": "

The name of the table.

", "CreateTableRequest$DatabaseName": "

The catalog database in which to create the new table. For Hive compatibility, this name is entirely lowercase.

", "CreateTriggerRequest$Name": "

The name of the trigger.

", "CreateTriggerRequest$WorkflowName": "

The name of the workflow associated with the trigger.

", @@ -5477,6 +5606,8 @@ "DeleteSecurityConfigurationRequest$Name": "

The name of the security configuration to delete.

", "DeleteSessionRequest$Id": "

The ID of the session to be deleted.

", "DeleteSessionResponse$Id": "

Returns the ID of the deleted session.

", + "DeleteTableOptimizerRequest$DatabaseName": "

The name of the database in the catalog in which the table resides.

", + "DeleteTableOptimizerRequest$TableName": "

The name of the table.

", "DeleteTableRequest$DatabaseName": "

The name of the catalog database in which the table resides. For Hive compatibility, this name is entirely lowercase.

", "DeleteTableRequest$Name": "

The name of the table to be deleted. For Hive compatibility, this name is entirely lowercase.

", "DeleteTableVersionRequest$DatabaseName": "

The database in the catalog in which the table resides. For Hive compatibility, this name is entirely lowercase.

", @@ -5530,6 +5661,10 @@ "GetSecurityConfigurationRequest$Name": "

The name of the security configuration to retrieve.

", "GetSessionRequest$Id": "

The ID of the session.

", "GetStatementRequest$SessionId": "

The Session ID of the statement.

", + "GetTableOptimizerRequest$DatabaseName": "

The name of the database in the catalog in which the table resides.

", + "GetTableOptimizerRequest$TableName": "

The name of the table.

", + "GetTableOptimizerResponse$DatabaseName": "

The name of the database in the catalog in which the table resides.

", + "GetTableOptimizerResponse$TableName": "

The name of the table.

", "GetTableRequest$DatabaseName": "

The name of the database in the catalog in which the table resides. For Hive compatibility, this name is entirely lowercase.

", "GetTableRequest$Name": "

The name of the table for which to retrieve the definition. For Hive compatibility, this name is entirely lowercase.

", "GetTableVersionRequest$DatabaseName": "

The database in the catalog in which the table resides. For Hive compatibility, this name is entirely lowercase.

", @@ -5571,6 +5706,10 @@ "KeySchemaElement$Name": "

The name of a partition key.

", "ListCrawlsRequest$CrawlerName": "

The name of the crawler whose runs you want to retrieve.

", "ListStatementsRequest$SessionId": "

The Session ID of the statements.

", + "ListTableOptimizerRunsRequest$DatabaseName": "

The name of the database in the catalog in which the table resides.

", + "ListTableOptimizerRunsRequest$TableName": "

The name of the table.

", + "ListTableOptimizerRunsResponse$DatabaseName": "

The name of the database in the catalog in which the table resides.

", + "ListTableOptimizerRunsResponse$TableName": "

The name of the table.

", "ListTriggersRequest$DependentJobName": "

The name of the job for which to retrieve triggers. The trigger that can start this job is returned. If there is no such trigger, all triggers are returned.

", "MLTransform$Name": "

A user-defined name for the machine learning transform. Names are not guaranteed unique and can be changed at any time.

", "MLUserDataEncryption$KmsKeyId": "

The ID for the customer-provided KMS key.

", @@ -5662,6 +5801,8 @@ "UpdateSourceControlFromJobRequest$BranchName": "

An optional branch in the remote repository.

", "UpdateSourceControlFromJobRequest$Folder": "

An optional folder in the remote repository.

", "UpdateSourceControlFromJobResponse$JobName": "

The name of the Glue job.

", + "UpdateTableOptimizerRequest$DatabaseName": "

The name of the database in the catalog in which the table resides.

", + "UpdateTableOptimizerRequest$TableName": "

The name of the table.

", "UpdateTableRequest$DatabaseName": "

The name of the catalog database in which the table resides. For Hive compatibility, this name is entirely lowercase.

", "UpdateTriggerRequest$Name": "

The name of the trigger to update.

", "UpdateUserDefinedFunctionRequest$DatabaseName": "

The name of the catalog database where the function to be updated is located.

", @@ -5921,6 +6062,7 @@ "InvalidInputException$FromFederationSource": "

Indicates whether or not the exception relates to a federated source.

", "LakeFormationConfiguration$UseLakeFormationCredentials": "

Specifies whether to use Lake Formation credentials for the crawler instead of the IAM role credentials.

", "MongoDBTarget$ScanAll": "

Indicates whether to scan all the records, or to sample rows from the table. Scanning all the records can take a long time when the table is not a high throughput table.

A value of true means to scan all records, while a value of false means to sample the records. If no value is specified, the value defaults to true.

", + "TableOptimizerConfiguration$enabled": "

Whether table optimization is enabled.

", "UpdateCsvClassifierRequest$DisableValueTrimming": "

Specifies not to trim values before identifying the type of column values. The default value is true.

", "UpdateCsvClassifierRequest$AllowSingleColumn": "

Enables the processing of files that contain only one column.

", "UpdateCsvClassifierRequest$CustomDatatypeConfigured": "

Specifies the configuration of custom datatypes.

" @@ -6831,6 +6973,12 @@ "ResetJobBookmarkRequest$RunId": "

The unique run identifier associated with this job run.

" } }, + "RunMetrics": { + "base": "

Metrics for the optimizer run.

", + "refs": { + "TableOptimizerRun$metrics": "

A RunMetrics object containing metrics for the optimizer run.

" + } + }, "RunStatementRequest": { "base": null, "refs": { @@ -7748,6 +7896,60 @@ "MappingEntry$TargetTable": "

The target table.

" } }, + "TableOptimizer": { + "base": "

Contains details about an optimizer associated with a table.

", + "refs": { + "BatchTableOptimizer$tableOptimizer": "

A TableOptimizer object that contains details on the configuration and last run of a table optimizer.

", + "GetTableOptimizerResponse$TableOptimizer": "

The optimizer associated with the specified table.

" + } + }, + "TableOptimizerConfiguration": { + "base": "

Contains details on the configuration of a table optimizer. You pass this configuration when creating or updating a table optimizer.

", + "refs": { + "CreateTableOptimizerRequest$TableOptimizerConfiguration": "

A TableOptimizerConfiguration object representing the configuration of a table optimizer.

", + "TableOptimizer$configuration": "

A TableOptimizerConfiguration object that was specified when creating or updating a table optimizer.

", + "UpdateTableOptimizerRequest$TableOptimizerConfiguration": "

A TableOptimizerConfiguration object representing the configuration of a table optimizer.

" + } + }, + "TableOptimizerEventType": { + "base": null, + "refs": { + "TableOptimizerRun$eventType": "

An event type representing the status of the table optimizer run.

" + } + }, + "TableOptimizerRun": { + "base": "

Contains details for a table optimizer run.

", + "refs": { + "TableOptimizer$lastRun": "

A TableOptimizerRun object representing the last run of the table optimizer.

", + "TableOptimizerRuns$member": null + } + }, + "TableOptimizerRunTimestamp": { + "base": null, + "refs": { + "TableOptimizerRun$startTimestamp": "

Represents the epoch timestamp at which the compaction job was started within Lake Formation.

", + "TableOptimizerRun$endTimestamp": "

Represents the epoch timestamp at which the compaction job ended.

" + } + }, + "TableOptimizerRuns": { + "base": null, + "refs": { + "ListTableOptimizerRunsResponse$TableOptimizerRuns": "

A list of the optimizer runs associated with a table.

" + } + }, + "TableOptimizerType": { + "base": null, + "refs": { + "BatchGetTableOptimizerEntry$type": "

The type of table optimizer.

", + "BatchGetTableOptimizerError$type": "

The type of table optimizer.

", + "CreateTableOptimizerRequest$Type": "

The type of table optimizer. Currently, the only valid value is compaction.

", + "DeleteTableOptimizerRequest$Type": "

The type of table optimizer.

", + "GetTableOptimizerRequest$Type": "

The type of table optimizer.

", + "ListTableOptimizerRunsRequest$Type": "

The type of table optimizer. Currently, the only valid value is compaction.

", + "TableOptimizer$type": "

The type of table optimizer. Currently, the only valid value is compaction.

", + "UpdateTableOptimizerRequest$Type": "

The type of table optimizer. Currently, the only valid value is compaction.

" + } + }, "TablePrefix": { "base": null, "refs": { @@ -8501,6 +8703,16 @@ "refs": { } }, + "UpdateTableOptimizerRequest": { + "base": null, + "refs": { + } + }, + "UpdateTableOptimizerResponse": { + "base": null, + "refs": { + } + }, "UpdateTableRequest": { "base": null, "refs": { @@ -8779,6 +8991,22 @@ "refs": { "Classifier$XMLClassifier": "

A classifier for XML content.

" } + }, + "databaseNameString": { + "base": null, + "refs": { + "BatchGetTableOptimizerEntry$databaseName": "

The name of the database in the catalog in which the table resides.

", + "BatchGetTableOptimizerError$databaseName": "

The name of the database in the catalog in which the table resides.

", + "BatchTableOptimizer$databaseName": "

The name of the database in the catalog in which the table resides.

" + } + }, + "tableNameString": { + "base": null, + "refs": { + "BatchGetTableOptimizerEntry$tableName": "

The name of the table.

", + "BatchGetTableOptimizerError$tableName": "

The name of the table.

", + "BatchTableOptimizer$tableName": "

The name of the table.

" + } } } } diff --git a/models/apis/glue/2017-03-31/paginators-1.json b/models/apis/glue/2017-03-31/paginators-1.json index f3d73d0ab62..a1dfea9c200 100644 --- a/models/apis/glue/2017-03-31/paginators-1.json +++ b/models/apis/glue/2017-03-31/paginators-1.json @@ -180,6 +180,11 @@ "limit_key": "MaxResults", "output_token": "NextToken" }, + "ListTableOptimizerRuns": { + "input_token": "NextToken", + "limit_key": "MaxResults", + "output_token": "NextToken" + }, "ListTriggers": { "input_token": "NextToken", "limit_key": "MaxResults", diff --git a/models/apis/iot/2015-05-28/api-2.json b/models/apis/iot/2015-05-28/api-2.json index 407a4606d8f..a6a71431a6f 100644 --- a/models/apis/iot/2015-05-28/api-2.json +++ b/models/apis/iot/2015-05-28/api-2.json @@ -4893,7 +4893,8 @@ "metric":{"shape":"BehaviorMetric"}, "metricDimension":{"shape":"MetricDimension"}, "criteria":{"shape":"BehaviorCriteria"}, - "suppressAlerts":{"shape":"SuppressAlerts"} + "suppressAlerts":{"shape":"SuppressAlerts"}, + "exportMetric":{"shape":"ExportMetric"} } }, "BehaviorCriteria":{ @@ -6130,7 +6131,8 @@ "deprecatedMessage":"Use additionalMetricsToRetainV2." }, "additionalMetricsToRetainV2":{"shape":"AdditionalMetricsToRetainV2List"}, - "tags":{"shape":"TagList"} + "tags":{"shape":"TagList"}, + "metricsExportConfig":{"shape":"MetricsExportConfig"} } }, "CreateSecurityProfileResponse":{ @@ -6592,6 +6594,7 @@ } } }, + "DeleteMetricsExportConfig":{"type":"boolean"}, "DeleteMitigationActionRequest":{ "type":"structure", "required":["actionName"], @@ -7562,7 +7565,8 @@ "additionalMetricsToRetainV2":{"shape":"AdditionalMetricsToRetainV2List"}, "version":{"shape":"Version"}, "creationDate":{"shape":"Timestamp"}, - "lastModifiedDate":{"shape":"Timestamp"} + "lastModifiedDate":{"shape":"Timestamp"}, + "metricsExportConfig":{"shape":"MetricsExportConfig"} } }, "DescribeStreamRequest":{ @@ -8185,6 +8189,7 @@ "rateIncreaseCriteria":{"shape":"RateIncreaseCriteria"} } }, + "ExportMetric":{"type":"boolean"}, "FailedChecksCount":{"type":"integer"}, "FailedFindingsCount":{"type":"long"}, "FailedThings":{"type":"integer"}, @@ -11321,7 +11326,8 @@ "required":["metric"], "members":{ "metric":{"shape":"BehaviorMetric"}, - "metricDimension":{"shape":"MetricDimension"} + "metricDimension":{"shape":"MetricDimension"}, + "exportMetric":{"shape":"ExportMetric"} } }, "MetricValue":{ @@ -11335,6 +11341,17 @@ "strings":{"shape":"StringList"} } }, + "MetricsExportConfig":{ + "type":"structure", + "required":[ + "mqttTopic", + "roleArn" + ], + "members":{ + "mqttTopic":{"shape":"MqttTopic"}, + "roleArn":{"shape":"RoleArn"} + } + }, "Minimum":{"type":"double"}, "MinimumNumberOfExecutedThings":{ "type":"integer", @@ -11449,6 +11466,11 @@ "max":65535, "min":1 }, + "MqttTopic":{ + "type":"string", + "max":512, + "min":1 + }, "MqttUsername":{ "type":"string", "max":65535, @@ -14227,7 +14249,9 @@ "shape":"OptionalVersion", "location":"querystring", "locationName":"expectedVersion" - } + }, + "metricsExportConfig":{"shape":"MetricsExportConfig"}, + "deleteMetricsExportConfig":{"shape":"DeleteMetricsExportConfig"} } }, "UpdateSecurityProfileResponse":{ @@ -14246,7 +14270,8 @@ "additionalMetricsToRetainV2":{"shape":"AdditionalMetricsToRetainV2List"}, "version":{"shape":"Version"}, "creationDate":{"shape":"Timestamp"}, - "lastModifiedDate":{"shape":"Timestamp"} + "lastModifiedDate":{"shape":"Timestamp"}, + "metricsExportConfig":{"shape":"MetricsExportConfig"} } }, "UpdateStreamRequest":{ diff --git a/models/apis/iot/2015-05-28/docs-2.json 
b/models/apis/iot/2015-05-28/docs-2.json index c984b325ddc..d41f88a3ac4 100644 --- a/models/apis/iot/2015-05-28/docs-2.json +++ b/models/apis/iot/2015-05-28/docs-2.json @@ -44,7 +44,7 @@ "CreateSecurityProfile": "

Creates a Device Defender security profile.

Requires permission to access the CreateSecurityProfile action.

", "CreateStream": "

Creates a stream for delivering one or more large files in chunks over MQTT. A stream transports data bytes in chunks or blocks packaged as MQTT messages from a source like S3. You can have one or more files associated with a stream.

Requires permission to access the CreateStream action.

", "CreateThing": "

Creates a thing record in the registry. If this call is made multiple times using the same thing name and configuration, the call will succeed. If this call is made with the same thing name but different configuration a ResourceAlreadyExistsException is thrown.

This is a control plane operation. See Authorization for information about authorizing control plane actions.

Requires permission to access the CreateThing action.

", - "CreateThingGroup": "

Create a thing group.

This is a control plane operation. See Authorization for information about authorizing control plane actions.

Requires permission to access the CreateThingGroup action.

", + "CreateThingGroup": "

Create a thing group.

This is a control plane operation. See Authorization for information about authorizing control plane actions.

If the ThingGroup that you create has the exact same attributes as an existing ThingGroup, you will get a 200 success response.

Requires permission to access the CreateThingGroup action.

", "CreateThingType": "

Creates a new thing type.

Requires permission to access the CreateThingType action.

", "CreateTopicRule": "

Creates a rule. Creating rules is an administrator-level action. Any user who has permission to create rules will be able to access data processed by the rule.

Requires permission to access the CreateTopicRule action.

", "CreateTopicRuleDestination": "

Creates a topic rule destination. The destination must be confirmed prior to use.

Requires permission to access the CreateTopicRuleDestination action.

", @@ -2418,6 +2418,12 @@ "refs": { } }, + "DeleteMetricsExportConfig": { + "base": null, + "refs": { + "UpdateSecurityProfileRequest$deleteMetricsExportConfig": "

Set the value to true to delete metrics export related configurations.

" + } + }, "DeleteMitigationActionRequest": { "base": null, "refs": { @@ -3498,6 +3504,13 @@ "JobExecutionsRolloutConfig$exponentialRate": "

The rate of increase for a job rollout. This parameter allows you to define an exponential rate for a job rollout.

" } }, + "ExportMetric": { + "base": null, + "refs": { + "Behavior$exportMetric": "

If set to true, metrics related to the behavior are exported.

", + "MetricToRetain$exportMetric": "

A value added to both Behavior and AdditionalMetricsToRetainV2 to indicate whether Device Defender Detect should export the corresponding metrics.

" + } + }, "FailedChecksCount": { "base": null, "refs": { @@ -3537,9 +3550,9 @@ "Fields": { "base": null, "refs": { - "ThingGroupIndexingConfiguration$managedFields": "

Contains fields that are indexed and whose types are already known by the Fleet Indexing service. This is an optional field. For more information, see Managed fields in the Amazon Web Services IoT Core Developer Guide.

", + "ThingGroupIndexingConfiguration$managedFields": "

Contains fields that are indexed and whose types are already known by the Fleet Indexing service. This is an optional field. For more information, see Managed fields in the Amazon Web Services IoT Core Developer Guide.

You can't modify managed fields by updating fleet indexing configuration.

", "ThingGroupIndexingConfiguration$customFields": "

A list of thing group fields to index. This list cannot contain any managed fields. Use the GetIndexingConfiguration API to get a list of managed fields.

Contains custom field names and their data type.

", - "ThingIndexingConfiguration$managedFields": "

Contains fields that are indexed and whose types are already known by the Fleet Indexing service.

", + "ThingIndexingConfiguration$managedFields": "

Contains fields that are indexed and whose types are already known by the Fleet Indexing service. This is an optional field. For more information, see Managed fields in the Amazon Web Services IoT Core Developer Guide.

You can't modify managed fields by updating fleet indexing configuration.

", "ThingIndexingConfiguration$customFields": "

Contains custom field names and their data type.

" } }, @@ -5443,6 +5456,15 @@ "ViolationEvent$metricValue": "

The value of the metric (the measurement).

" } }, + "MetricsExportConfig": { + "base": "

Specifies the configuration for metrics export.

", + "refs": { + "CreateSecurityProfileRequest$metricsExportConfig": "

Specifies the MQTT topic and role ARN required for metric export.

", + "DescribeSecurityProfileResponse$metricsExportConfig": "

Specifies the MQTT topic and role ARN required for metric export.

", + "UpdateSecurityProfileRequest$metricsExportConfig": "

Specifies the MQTT topic and role ARN required for metric export.

", + "UpdateSecurityProfileResponse$metricsExportConfig": "

Specifies the MQTT topic and role ARN required for metric export.

" + } + }, "Minimum": { "base": null, "refs": { @@ -5598,6 +5620,12 @@ "MqttContext$password": "

The value of the password key in an MQTT authorization request.

" } }, + "MqttTopic": { + "base": null, + "refs": { + "MetricsExportConfig$mqttTopic": "

The MQTT topic that Device Defender Detect should publish messages to for metrics export.

" + } + }, "MqttUsername": { "base": null, "refs": { @@ -6437,7 +6465,7 @@ "base": null, "refs": { "ListIndicesRequest$maxResults": "

The maximum number of results to return at one time.

", - "SearchIndexRequest$maxResults": "

The maximum number of results to return at one time.

" + "SearchIndexRequest$maxResults": "

The maximum number of results to return at one time. The response might contain fewer results but will never contain more.

" } }, "QueryString": { @@ -6930,6 +6958,7 @@ "DescribeProvisioningTemplateResponse$provisioningRoleArn": "

The ARN of the role associated with the provisioning template. This IoT role grants permission to provision a device.

", "DescribeThingRegistrationTaskResponse$roleArn": "

The role ARN that grants access to the input file bucket.

", "EnableIoTLoggingParams$roleArnForLogging": "

The Amazon Resource Name (ARN) of the IAM role used for logging.

", + "MetricsExportConfig$roleArn": "

The ARN of a role that has permission to publish MQTT messages. Device Defender Detect can assume this role and publish messages on your behalf.

", "MitigationAction$roleArn": "

The IAM role ARN used to apply this mitigation action.

", "PresignedUrlConfig$roleArn": "

The ARN of an IAM role that grants permission to download files from the S3 bucket where the job data/updates are stored. The role must also grant permission for IoT to download the files.

For information about addressing the confused deputy problem, see cross-service confused deputy prevention in the Amazon Web Services IoT Core developer guide.

", "RegistrationConfig$roleArn": "

The ARN of the role.

", diff --git a/models/apis/iot/2015-05-28/endpoint-rule-set-1.json b/models/apis/iot/2015-05-28/endpoint-rule-set-1.json index 64486d98236..abd0635a617 100644 --- a/models/apis/iot/2015-05-28/endpoint-rule-set-1.json +++ b/models/apis/iot/2015-05-28/endpoint-rule-set-1.json @@ -40,7 +40,6 @@ ] } ], - "type": "tree", "rules": [ { "conditions": [ @@ -83,7 +82,8 @@ }, "type": "endpoint" } - ] + ], + "type": "tree" }, { "conditions": [ @@ -96,7 +96,6 @@ ] } ], - "type": "tree", "rules": [ { "conditions": [ @@ -110,7 +109,6 @@ "assign": "PartitionResult" } ], - "type": "tree", "rules": [ { "conditions": [ @@ -133,7 +131,6 @@ ] } ], - "type": "tree", "rules": [ { "conditions": [ @@ -168,7 +165,6 @@ ] } ], - "type": "tree", "rules": [ { "conditions": [], @@ -179,14 +175,16 @@ }, "type": "endpoint" } - ] + ], + "type": "tree" }, { "conditions": [], "error": "FIPS and DualStack are enabled, but this partition does not support one or both", "type": "error" } - ] + ], + "type": "tree" }, { "conditions": [ @@ -200,14 +198,12 @@ ] } ], - "type": "tree", "rules": [ { "conditions": [ { "fn": "booleanEquals", "argv": [ - true, { "fn": "getAttr", "argv": [ @@ -216,11 +212,11 @@ }, "supportsFIPS" ] - } + }, + true ] } ], - "type": "tree", "rules": [ { "conditions": [], @@ -231,14 +227,16 @@ }, "type": "endpoint" } - ] + ], + "type": "tree" }, { "conditions": [], "error": "FIPS is enabled but this partition does not support FIPS", "type": "error" } - ] + ], + "type": "tree" }, { "conditions": [ @@ -252,7 +250,6 @@ ] } ], - "type": "tree", "rules": [ { "conditions": [ @@ -272,7 +269,6 @@ ] } ], - "type": "tree", "rules": [ { "conditions": [], @@ -283,14 +279,16 @@ }, "type": "endpoint" } - ] + ], + "type": "tree" }, { "conditions": [], "error": "DualStack is enabled but this partition does not support DualStack", "type": "error" } - ] + ], + "type": "tree" }, { "conditions": [ @@ -376,9 +374,11 @@ }, "type": "endpoint" } - ] + ], + "type": "tree" } - ] + ], + "type": "tree" }, { "conditions": [], diff --git a/models/apis/lambda/2015-03-31/api-2.json b/models/apis/lambda/2015-03-31/api-2.json index 1b017becdad..b8e0a0bb567 100644 --- a/models/apis/lambda/2015-03-31/api-2.json +++ b/models/apis/lambda/2015-03-31/api-2.json @@ -3866,7 +3866,8 @@ "ruby3.2", "python3.11", "nodejs20.x", - "provided.al2023" + "provided.al2023", + "python3.12" ] }, "RuntimeVersionArn":{ diff --git a/models/apis/mediatailor/2018-04-23/api-2.json b/models/apis/mediatailor/2018-04-23/api-2.json index 10d8e979729..b4ca164f21d 100644 --- a/models/apis/mediatailor/2018-04-23/api-2.json +++ b/models/apis/mediatailor/2018-04-23/api-2.json @@ -1719,6 +1719,7 @@ }, "MaxResults":{ "type":"integer", + "box":true, "max":100, "min":1 }, @@ -2364,14 +2365,22 @@ "VodSourceName":{"shape":"__string"} } }, - "__boolean":{"type":"boolean"}, - "__integer":{"type":"integer"}, + "__boolean":{ + "type":"boolean", + "box":true + }, + "__integer":{ + "type":"integer", + "box":true + }, "__integerMin1":{ "type":"integer", + "box":true, "min":1 }, "__integerMin1Max100":{ "type":"integer", + "box":true, "max":100, "min":1 }, @@ -2427,7 +2436,10 @@ "type":"list", "member":{"shape":"__string"} }, - "__long":{"type":"long"}, + "__long":{ + "type":"long", + "box":true + }, "__mapOf__string":{ "type":"map", "key":{"shape":"__string"}, diff --git a/models/apis/mediatailor/2018-04-23/endpoint-rule-set-1.json b/models/apis/mediatailor/2018-04-23/endpoint-rule-set-1.json index f6487bddd7f..2ec070203ec 100644 --- 
a/models/apis/mediatailor/2018-04-23/endpoint-rule-set-1.json +++ b/models/apis/mediatailor/2018-04-23/endpoint-rule-set-1.json @@ -40,7 +40,6 @@ ] } ], - "type": "tree", "rules": [ { "conditions": [ @@ -83,7 +82,8 @@ }, "type": "endpoint" } - ] + ], + "type": "tree" }, { "conditions": [ @@ -96,7 +96,6 @@ ] } ], - "type": "tree", "rules": [ { "conditions": [ @@ -110,7 +109,6 @@ "assign": "PartitionResult" } ], - "type": "tree", "rules": [ { "conditions": [ @@ -133,7 +131,6 @@ ] } ], - "type": "tree", "rules": [ { "conditions": [ @@ -168,7 +165,6 @@ ] } ], - "type": "tree", "rules": [ { "conditions": [], @@ -179,14 +175,16 @@ }, "type": "endpoint" } - ] + ], + "type": "tree" }, { "conditions": [], "error": "FIPS and DualStack are enabled, but this partition does not support one or both", "type": "error" } - ] + ], + "type": "tree" }, { "conditions": [ @@ -200,14 +198,12 @@ ] } ], - "type": "tree", "rules": [ { "conditions": [ { "fn": "booleanEquals", "argv": [ - true, { "fn": "getAttr", "argv": [ @@ -216,11 +212,11 @@ }, "supportsFIPS" ] - } + }, + true ] } ], - "type": "tree", "rules": [ { "conditions": [], @@ -231,14 +227,16 @@ }, "type": "endpoint" } - ] + ], + "type": "tree" }, { "conditions": [], "error": "FIPS is enabled but this partition does not support FIPS", "type": "error" } - ] + ], + "type": "tree" }, { "conditions": [ @@ -252,7 +250,6 @@ ] } ], - "type": "tree", "rules": [ { "conditions": [ @@ -272,7 +269,6 @@ ] } ], - "type": "tree", "rules": [ { "conditions": [], @@ -283,14 +279,16 @@ }, "type": "endpoint" } - ] + ], + "type": "tree" }, { "conditions": [], "error": "DualStack is enabled but this partition does not support DualStack", "type": "error" } - ] + ], + "type": "tree" }, { "conditions": [], @@ -301,9 +299,11 @@ }, "type": "endpoint" } - ] + ], + "type": "tree" } - ] + ], + "type": "tree" }, { "conditions": [], diff --git a/models/apis/pipes/2015-10-07/api-2.json b/models/apis/pipes/2015-10-07/api-2.json index 5281dc7d71b..c13415dc04c 100644 --- a/models/apis/pipes/2015-10-07/api-2.json +++ b/models/apis/pipes/2015-10-07/api-2.json @@ -337,6 +337,25 @@ "max":1000, "min":0 }, + "CloudwatchLogGroupArn":{ + "type":"string", + "max":1600, + "min":1, + "pattern":"^(^arn:aws([a-z]|\\-)*:logs:([a-z]{2}((-gov)|(-iso(b?)))?-[a-z]+-\\d{1}):(\\d{12}):log-group:.+)$" + }, + "CloudwatchLogsLogDestination":{ + "type":"structure", + "members":{ + "LogGroupArn":{"shape":"CloudwatchLogGroupArn"} + } + }, + "CloudwatchLogsLogDestinationParameters":{ + "type":"structure", + "required":["LogGroupArn"], + "members":{ + "LogGroupArn":{"shape":"CloudwatchLogGroupArn"} + } + }, "ConflictException":{ "type":"structure", "required":[ @@ -368,6 +387,7 @@ "DesiredState":{"shape":"RequestedPipeState"}, "Enrichment":{"shape":"OptionalArn"}, "EnrichmentParameters":{"shape":"PipeEnrichmentParameters"}, + "LogConfiguration":{"shape":"PipeLogConfigurationParameters"}, "Name":{ "shape":"PipeName", "location":"uri", @@ -454,6 +474,7 @@ "Enrichment":{"shape":"OptionalArn"}, "EnrichmentParameters":{"shape":"PipeEnrichmentParameters"}, "LastModifiedTime":{"shape":"Timestamp"}, + "LogConfiguration":{"shape":"PipeLogConfiguration"}, "Name":{"shape":"PipeName"}, "RoleArn":{"shape":"RoleArn"}, "Source":{"shape":"ArnOrUrl"}, @@ -633,6 +654,25 @@ "max":5, "min":0 }, + "FirehoseArn":{ + "type":"string", + "max":1600, + "min":1, + "pattern":"^(^arn:aws([a-z]|\\-)*:firehose:([a-z]{2}((-gov)|(-iso(b?)))?-[a-z]+-\\d{1}):(\\d{12}):deliverystream/.+)$" + }, + "FirehoseLogDestination":{ + "type":"structure", + 
"members":{ + "DeliveryStreamArn":{"shape":"FirehoseArn"} + } + }, + "FirehoseLogDestinationParameters":{ + "type":"structure", + "required":["DeliveryStreamArn"], + "members":{ + "DeliveryStreamArn":{"shape":"FirehoseArn"} + } + }, "HeaderKey":{ "type":"string", "max":512, @@ -651,6 +691,14 @@ "pattern":"^[ \\t]*[\\x20-\\x7E]+([ \\t]+[\\x20-\\x7E]+)*[ \\t]*|(\\$(\\.[\\w/_-]+(\\[(\\d+|\\*)\\])*)*)$", "sensitive":true }, + "IncludeExecutionData":{ + "type":"list", + "member":{"shape":"IncludeExecutionDataOption"} + }, + "IncludeExecutionDataOption":{ + "type":"string", + "enum":["ALL"] + }, "InputTemplate":{ "type":"string", "max":8192, @@ -804,6 +852,15 @@ "tags":{"shape":"TagMap"} } }, + "LogLevel":{ + "type":"string", + "enum":[ + "OFF", + "ERROR", + "INFO", + "TRACE" + ] + }, "LogStreamName":{ "type":"string", "max":256, @@ -908,9 +965,7 @@ }, "PathParameterList":{ "type":"list", - "member":{"shape":"PathParameter"}, - "max":1, - "min":0 + "member":{"shape":"PathParameter"} }, "Pipe":{ "type":"structure", @@ -959,6 +1014,27 @@ "type":"list", "member":{"shape":"Pipe"} }, + "PipeLogConfiguration":{ + "type":"structure", + "members":{ + "CloudwatchLogsLogDestination":{"shape":"CloudwatchLogsLogDestination"}, + "FirehoseLogDestination":{"shape":"FirehoseLogDestination"}, + "IncludeExecutionData":{"shape":"IncludeExecutionData"}, + "Level":{"shape":"LogLevel"}, + "S3LogDestination":{"shape":"S3LogDestination"} + } + }, + "PipeLogConfigurationParameters":{ + "type":"structure", + "required":["Level"], + "members":{ + "CloudwatchLogsLogDestination":{"shape":"CloudwatchLogsLogDestinationParameters"}, + "FirehoseLogDestination":{"shape":"FirehoseLogDestinationParameters"}, + "IncludeExecutionData":{"shape":"IncludeExecutionData"}, + "Level":{"shape":"LogLevel"}, + "S3LogDestination":{"shape":"S3LogDestinationParameters"} + } + }, "PipeName":{ "type":"string", "max":64, @@ -1081,7 +1157,11 @@ "CREATE_FAILED", "UPDATE_FAILED", "START_FAILED", - "STOP_FAILED" + "STOP_FAILED", + "DELETE_FAILED", + "CREATE_ROLLBACK_FAILED", + "DELETE_ROLLBACK_FAILED", + "UPDATE_ROLLBACK_FAILED" ] }, "PipeStateReason":{ @@ -1330,6 +1410,50 @@ "min":1, "pattern":"^arn:(aws[a-zA-Z-]*)?:iam::\\d{12}:role/?[a-zA-Z0-9+=,.@\\-_/]+$" }, + "S3LogDestination":{ + "type":"structure", + "members":{ + "BucketName":{"shape":"String"}, + "BucketOwner":{"shape":"String"}, + "OutputFormat":{"shape":"S3OutputFormat"}, + "Prefix":{"shape":"String"} + } + }, + "S3LogDestinationParameters":{ + "type":"structure", + "required":[ + "BucketName", + "BucketOwner" + ], + "members":{ + "BucketName":{"shape":"S3LogDestinationParametersBucketNameString"}, + "BucketOwner":{"shape":"S3LogDestinationParametersBucketOwnerString"}, + "OutputFormat":{"shape":"S3OutputFormat"}, + "Prefix":{"shape":"S3LogDestinationParametersPrefixString"} + } + }, + "S3LogDestinationParametersBucketNameString":{ + "type":"string", + "max":63, + "min":3 + }, + "S3LogDestinationParametersBucketOwnerString":{ + "type":"string", + "pattern":"^\\d{12}$" + }, + "S3LogDestinationParametersPrefixString":{ + "type":"string", + "max":256, + "min":0 + }, + "S3OutputFormat":{ + "type":"string", + "enum":[ + "json", + "plain", + "w3c" + ] + }, "SageMakerPipelineParameter":{ "type":"structure", "required":[ @@ -1453,6 +1577,7 @@ "Sqls":{ "type":"list", "member":{"shape":"Sql"}, + "max":40, "min":1 }, "StartPipeRequest":{ @@ -1657,6 +1782,7 @@ "DesiredState":{"shape":"RequestedPipeState"}, "Enrichment":{"shape":"OptionalArn"}, 
"EnrichmentParameters":{"shape":"PipeEnrichmentParameters"}, + "LogConfiguration":{"shape":"PipeLogConfigurationParameters"}, "Name":{ "shape":"PipeName", "location":"uri", diff --git a/models/apis/pipes/2015-10-07/docs-2.json b/models/apis/pipes/2015-10-07/docs-2.json index ab357d43ea7..b43dd4b5425 100644 --- a/models/apis/pipes/2015-10-07/docs-2.json +++ b/models/apis/pipes/2015-10-07/docs-2.json @@ -11,14 +11,14 @@ "StopPipe": "

Stop an existing pipe.

", "TagResource": "

Assigns one or more tags (key-value pairs) to the specified pipe. Tags can help you organize and categorize your resources. You can also use them to scope user permissions by granting a user permission to access or change only resources with certain tag values.

Tags don't have any semantic meaning to Amazon Web Services and are interpreted strictly as strings of characters.

You can use the TagResource action with a pipe that already has tags. If you specify a new tag key, this tag is appended to the list of tags associated with the pipe. If you specify a tag key that is already associated with the pipe, the new tag value that you specify replaces the previous value for that tag.

You can associate as many as 50 tags with a pipe.

", "UntagResource": "

Removes one or more tags from the specified pipes.

", - "UpdatePipe": "

Update an existing pipe. When you call UpdatePipe, only the fields that are included in the request are changed, the rest are unchanged. The exception to this is if you modify any Amazon Web Services-service specific fields in the SourceParameters, EnrichmentParameters, or TargetParameters objects. The fields in these objects are updated atomically as one and override existing values. This is by design and means that if you don't specify an optional field in one of these Parameters objects, that field will be set to its system-default value after the update.

For more information about pipes, see Amazon EventBridge Pipes in the Amazon EventBridge User Guide.

" + "UpdatePipe": "

Update an existing pipe. When you call UpdatePipe, EventBridge only updates the fields you have specified in the request; the rest remain unchanged. The exception to this is if you modify any Amazon Web Services-service specific fields in the SourceParameters, EnrichmentParameters, or TargetParameters objects. For example, DynamoDBStreamParameters or EventBridgeEventBusParameters. EventBridge updates the fields in these objects atomically as one and overrides existing values. This is by design, and means that if you don't specify an optional field in one of these Parameters objects, EventBridge sets that field to its system-default value during the update.

For more information about pipes, see Amazon EventBridge Pipes in the Amazon EventBridge User Guide.

" }, "shapes": { "Arn": { "base": null, "refs": { "CreatePipeRequest$Target": "

The ARN of the target resource.

", - "DeadLetterConfig$Arn": "

The ARN of the Amazon SQS queue specified as the target for the dead-letter queue.

", + "DeadLetterConfig$Arn": "

The ARN of the specified target for the dead-letter queue.

For Amazon Kinesis stream and Amazon DynamoDB stream sources, specify either an Amazon SNS topic or Amazon SQS queue ARN.

", "DescribePipeResponse$Target": "

The ARN of the target resource.

", "Pipe$Target": "

The ARN of the target resource.

", "UpdatePipeRequest$Target": "

The ARN of the target resource.

" @@ -175,6 +175,25 @@ "CapacityProviderStrategyItem$weight": "

The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight value is taken into consideration after the base value, if defined, is satisfied.

" } }, + "CloudwatchLogGroupArn": { + "base": null, + "refs": { + "CloudwatchLogsLogDestination$LogGroupArn": "

The Amazon Resource Name (ARN) for the CloudWatch log group to which EventBridge sends the log records.

", + "CloudwatchLogsLogDestinationParameters$LogGroupArn": "

The Amazon Resource Name (ARN) for the CloudWatch log group to which EventBridge sends the log records.

" + } + }, + "CloudwatchLogsLogDestination": { + "base": "

The Amazon CloudWatch Logs logging configuration settings for the pipe.

", + "refs": { + "PipeLogConfiguration$CloudwatchLogsLogDestination": "

The Amazon CloudWatch Logs logging configuration settings for the pipe.

" + } + }, + "CloudwatchLogsLogDestinationParameters": { + "base": "

The Amazon CloudWatch Logs logging configuration settings for the pipe.

", + "refs": { + "PipeLogConfigurationParameters$CloudwatchLogsLogDestination": "

The Amazon CloudWatch Logs logging configuration settings for the pipe.

" + } + }, "ConflictException": { "base": "

An action you attempted resulted in an exception.

", "refs": { @@ -350,7 +369,7 @@ "EventBridgeEndpointId": { "base": null, "refs": { - "PipeTargetEventBridgeEventBusParameters$EndpointId": "

The URL subdomain of the endpoint. For example, if the URL for Endpoint is https://abcde.veo.endpoints.event.amazonaws.com, then the EndpointId is abcde.veo.

When using Java, you must include auth-crt on the class path.

" + "PipeTargetEventBridgeEventBusParameters$EndpointId": "

The URL subdomain of the endpoint. For example, if the URL for Endpoint is https://abcde.veo.endpoints.event.amazonaws.com, then the EndpointId is abcde.veo.

" } }, "EventBridgeEventResourceList": { @@ -378,10 +397,10 @@ } }, "FilterCriteria": { - "base": "

The collection of event patterns used to filter events. For more information, see Events and Event Patterns in the Amazon EventBridge User Guide.

", + "base": "

The collection of event patterns used to filter events.

To remove a filter, specify a FilterCriteria object with an empty array of Filter objects.

For more information, see Events and Event Patterns in the Amazon EventBridge User Guide.

", "refs": { - "PipeSourceParameters$FilterCriteria": "

The collection of event patterns used to filter events. For more information, see Events and Event Patterns in the Amazon EventBridge User Guide.

", - "UpdatePipeSourceParameters$FilterCriteria": "

The collection of event patterns used to filter events. For more information, see Events and Event Patterns in the Amazon EventBridge User Guide.

" + "PipeSourceParameters$FilterCriteria": "

The collection of event patterns used to filter events.

To remove a filter, specify a FilterCriteria object with an empty array of Filter objects.

For more information, see Events and Event Patterns in the Amazon EventBridge User Guide.

", + "UpdatePipeSourceParameters$FilterCriteria": "

The collection of event patterns used to filter events.

To remove a filter, specify a FilterCriteria object with an empty array of Filter objects.

For more information, see Events and Event Patterns in the Amazon EventBridge User Guide.

" } }, "FilterList": { @@ -390,6 +409,25 @@ "FilterCriteria$Filters": "

The event patterns.

" } }, + "FirehoseArn": { + "base": null, + "refs": { + "FirehoseLogDestination$DeliveryStreamArn": "

The Amazon Resource Name (ARN) of the Kinesis Data Firehose delivery stream to which EventBridge delivers the pipe log records.

", + "FirehoseLogDestinationParameters$DeliveryStreamArn": "

Specifies the Amazon Resource Name (ARN) of the Kinesis Data Firehose delivery stream to which EventBridge delivers the pipe log records.

" + } + }, + "FirehoseLogDestination": { + "base": "

The Amazon Kinesis Data Firehose logging configuration settings for the pipe.

", + "refs": { + "PipeLogConfiguration$FirehoseLogDestination": "

The Amazon Kinesis Data Firehose logging configuration settings for the pipe.

" + } + }, + "FirehoseLogDestinationParameters": { + "base": "

The Amazon Kinesis Data Firehose logging configuration settings for the pipe.

", + "refs": { + "PipeLogConfigurationParameters$FirehoseLogDestination": "

The Amazon Kinesis Data Firehose logging configuration settings for the pipe.

" + } + }, "HeaderKey": { "base": null, "refs": { @@ -409,11 +447,24 @@ "HeaderParametersMap$value": null } }, + "IncludeExecutionData": { + "base": null, + "refs": { + "PipeLogConfiguration$IncludeExecutionData": "

Whether the execution data (specifically, the payload, awsRequest, and awsResponse fields) is included in the log messages for this pipe.

This applies to all log destinations for the pipe.

For more information, see Including execution data in logs in the Amazon EventBridge User Guide.

", + "PipeLogConfigurationParameters$IncludeExecutionData": "

Specify ALL to include the execution data (specifically, the payload and awsRequest fields) in the log messages for this pipe.

This applies to all log destinations for the pipe.

For more information, see Including execution data in logs in the Amazon EventBridge User Guide.

The default is OFF.

" + } + }, + "IncludeExecutionDataOption": { + "base": null, + "refs": { + "IncludeExecutionData$member": null + } + }, "InputTemplate": { "base": null, "refs": { - "PipeEnrichmentParameters$InputTemplate": "

Valid JSON text passed to the enrichment. In this case, nothing from the event itself is passed to the enrichment. For more information, see The JavaScript Object Notation (JSON) Data Interchange Format.

", - "PipeTargetParameters$InputTemplate": "

Valid JSON text passed to the target. In this case, nothing from the event itself is passed to the target. For more information, see The JavaScript Object Notation (JSON) Data Interchange Format.

" + "PipeEnrichmentParameters$InputTemplate": "

Valid JSON text passed to the enrichment. In this case, nothing from the event itself is passed to the enrichment. For more information, see The JavaScript Object Notation (JSON) Data Interchange Format.

To remove an input template, specify an empty string.

", + "PipeTargetParameters$InputTemplate": "

Valid JSON text passed to the target. In this case, nothing from the event itself is passed to the target. For more information, see The JavaScript Object Notation (JSON) Data Interchange Format.

To remove an input template, specify an empty string.

" } }, "Integer": { @@ -529,6 +580,13 @@ "refs": { } }, + "LogLevel": { + "base": null, + "refs": { + "PipeLogConfiguration$Level": "

The level of logging detail to include. This applies to all log destinations for the pipe.

", + "PipeLogConfigurationParameters$Level": "

The level of logging detail to include. This applies to all log destinations for the pipe.

For more information, see Specifying EventBridge Pipes log level in the Amazon EventBridge User Guide.

" + } + }, "LogStreamName": { "base": null, "refs": { @@ -711,6 +769,19 @@ "ListPipesResponse$Pipes": "

The pipes returned by the call.

" } }, + "PipeLogConfiguration": { + "base": "

The logging configuration settings for the pipe.

", + "refs": { + "DescribePipeResponse$LogConfiguration": "

The logging configuration settings for the pipe.

" + } + }, + "PipeLogConfigurationParameters": { + "base": "

Specifies the logging configuration settings for the pipe.

When you call UpdatePipe, EventBridge updates the fields in the PipeLogConfigurationParameters object atomically as one and overrides existing values. This is by design. If you don't specify an optional field in any of the Amazon Web Services service parameters objects (CloudwatchLogsLogDestinationParameters, FirehoseLogDestinationParameters, or S3LogDestinationParameters), EventBridge sets that field to its system-default value during the update.

For example, suppose when you created the pipe you specified a Kinesis Data Firehose stream log destination. You then update the pipe to add an Amazon S3 log destination. In addition to specifying the S3LogDestinationParameters for the new log destination, you must also specify the fields in the FirehoseLogDestinationParameters object in order to retain the Kinesis Data Firehose stream log destination.

For more information on generating pipe log records, see Log EventBridge Pipes in the Amazon EventBridge User Guide.

", + "refs": { + "CreatePipeRequest$LogConfiguration": "

The logging configuration settings for the pipe.

", + "UpdatePipeRequest$LogConfiguration": "

The logging configuration settings for the pipe.

" + } + }, "PipeName": { "base": null, "refs": { @@ -832,14 +903,14 @@ "PipeTargetInvocationType": { "base": null, "refs": { - "PipeTargetLambdaFunctionParameters$InvocationType": "

Choose from the following options.

", - "PipeTargetStateMachineParameters$InvocationType": "

Specify whether to wait for the state machine to finish or not.

" + "PipeTargetLambdaFunctionParameters$InvocationType": "

Specify whether to invoke the function synchronously or asynchronously.

For more information, see Invocation types in the Amazon EventBridge User Guide.

", + "PipeTargetStateMachineParameters$InvocationType": "

Specify whether to invoke the Step Functions state machine synchronously or asynchronously.

For more information, see Invocation types in the Amazon EventBridge User Guide.

" } }, "PipeTargetKinesisStreamParameters": { - "base": "

The parameters for using a Kinesis stream as a source.

", + "base": "

The parameters for using a Kinesis stream as a target.

", "refs": { - "PipeTargetParameters$KinesisStreamParameters": "

The parameters for using a Kinesis stream as a source.

" + "PipeTargetParameters$KinesisStreamParameters": "

The parameters for using a Kinesis stream as a target.

" } }, "PipeTargetLambdaFunctionParameters": { @@ -849,17 +920,17 @@ } }, "PipeTargetParameters": { - "base": "

The parameters required to set up a target for your pipe.

", + "base": "

The parameters required to set up a target for your pipe.

For more information about pipe target parameters, including how to use dynamic path parameters, see Target parameters in the Amazon EventBridge User Guide.

", "refs": { - "CreatePipeRequest$TargetParameters": "

The parameters required to set up a target for your pipe.

", - "DescribePipeResponse$TargetParameters": "

The parameters required to set up a target for your pipe.

", - "UpdatePipeRequest$TargetParameters": "

The parameters required to set up a target for your pipe.

" + "CreatePipeRequest$TargetParameters": "

The parameters required to set up a target for your pipe.

For more information about pipe target parameters, including how to use dynamic path parameters, see Target parameters in the Amazon EventBridge User Guide.

", + "DescribePipeResponse$TargetParameters": "

The parameters required to set up a target for your pipe.

For more information about pipe target parameters, including how to use dynamic path parameters, see Target parameters in the Amazon EventBridge User Guide.

", + "UpdatePipeRequest$TargetParameters": "

The parameters required to set up a target for your pipe.

For more information about pipe target parameters, including how to use dynamic path parameters, see Target parameters in the Amazon EventBridge User Guide.

" } }, "PipeTargetRedshiftDataParameters": { - "base": "

These are custom parameters to be used when the target is a Amazon Redshift cluster to invoke the Amazon Redshift Data API ExecuteStatement.

", + "base": "

These are custom parameters to be used when the target is an Amazon Redshift cluster to invoke the Amazon Redshift Data API BatchExecuteStatement.

", "refs": { - "PipeTargetParameters$RedshiftDataParameters": "

These are custom parameters to be used when the target is a Amazon Redshift cluster to invoke the Amazon Redshift Data API ExecuteStatement.

" + "PipeTargetParameters$RedshiftDataParameters": "

These are custom parameters to be used when the target is an Amazon Redshift cluster to invoke the Amazon Redshift Data API BatchExecuteStatement.

" } }, "PipeTargetSageMakerPipelineParameters": { @@ -869,9 +940,9 @@ } }, "PipeTargetSqsQueueParameters": { - "base": "

The parameters for using a Amazon SQS stream as a source.

", + "base": "

The parameters for using an Amazon SQS stream as a target.

", "refs": { - "PipeTargetParameters$SqsQueueParameters": "

The parameters for using a Amazon SQS stream as a source.

" + "PipeTargetParameters$SqsQueueParameters": "

The parameters for using an Amazon SQS stream as a target.

" } }, "PipeTargetStateMachineParameters": { @@ -994,6 +1065,43 @@ "UpdatePipeRequest$RoleArn": "

The ARN of the role that allows the pipe to send data to the target.

" } }, + "S3LogDestination": { + "base": "

The Amazon S3 logging configuration settings for the pipe.

", + "refs": { + "PipeLogConfiguration$S3LogDestination": "

The Amazon S3 logging configuration settings for the pipe.

" + } + }, + "S3LogDestinationParameters": { + "base": "

The Amazon S3 logging configuration settings for the pipe.

", + "refs": { + "PipeLogConfigurationParameters$S3LogDestination": "

The Amazon S3 logging configuration settings for the pipe.

" + } + }, + "S3LogDestinationParametersBucketNameString": { + "base": null, + "refs": { + "S3LogDestinationParameters$BucketName": "

Specifies the name of the Amazon S3 bucket to which EventBridge delivers the log records for the pipe.

" + } + }, + "S3LogDestinationParametersBucketOwnerString": { + "base": null, + "refs": { + "S3LogDestinationParameters$BucketOwner": "

Specifies the Amazon Web Services account that owns the Amazon S3 bucket to which EventBridge delivers the log records for the pipe.

" + } + }, + "S3LogDestinationParametersPrefixString": { + "base": null, + "refs": { + "S3LogDestinationParameters$Prefix": "

Specifies any prefix text with which to begin Amazon S3 log object names.

You can use prefixes to organize the data that you store in Amazon S3 buckets. A prefix is a string of characters at the beginning of the object key name. A prefix can be any length, subject to the maximum length of the object key name (1,024 bytes). For more information, see Organizing objects using prefixes in the Amazon Simple Storage Service User Guide.

" + } + }, + "S3OutputFormat": { + "base": null, + "refs": { + "S3LogDestination$OutputFormat": "

The format EventBridge uses for the log records.

", + "S3LogDestinationParameters$OutputFormat": "

How EventBridge should format the log records.

" + } + }, "SageMakerPipelineParameter": { "base": "

Name/Value pair of a parameter to start execution of a SageMaker Model Building Pipeline.

", "refs": { @@ -1035,7 +1143,7 @@ "SecretManagerArnOrJsonPath": { "base": "

For targets, you can specify either an ARN or a JSONPath pointing to the ARN.

", "refs": { - "PipeTargetRedshiftDataParameters$SecretManagerArn": "

The name or ARN of the secret that enables access to the database. Required when authenticating using SageMaker.

" + "PipeTargetRedshiftDataParameters$SecretManagerArn": "

The name or ARN of the secret that enables access to the database. Required when authenticating using Secrets Manager.

" } }, "SecurityGroup": { @@ -1152,6 +1260,9 @@ "PipeTargetBatchJobParameters$JobName": "

The name of the job. It can be up to 128 letters long. The first character must be alphanumeric, can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_).

", "PipeTargetEcsTaskParameters$Group": "

Specifies an Amazon ECS task group for the task. The maximum length is 255 characters.

", "PipeTargetEcsTaskParameters$PlatformVersion": "

Specifies the platform version for the task. Specify only the numeric portion of the platform version, such as 1.1.0.

This structure is used only if LaunchType is FARGATE. For more information about valid platform versions, see Fargate Platform Versions in the Amazon Elastic Container Service Developer Guide.

", + "S3LogDestination$BucketName": "

The name of the Amazon S3 bucket to which EventBridge delivers the log records for the pipe.

", + "S3LogDestination$BucketOwner": "

The Amazon Web Services account that owns the Amazon S3 bucket to which EventBridge delivers the log records for the pipe.

", + "S3LogDestination$Prefix": "

The prefix text with which to begin Amazon S3 log object names.

For more information, see Organizing objects using prefixes in the Amazon Simple Storage Service User Guide.

", "ServiceQuotaExceededException$message": null, "ServiceQuotaExceededException$quotaCode": "

The identifier of the quota that caused the exception.

", "ServiceQuotaExceededException$resourceId": "

The ID of the resource that caused the exception.

", diff --git a/models/apis/pipes/2015-10-07/endpoint-rule-set-1.json b/models/apis/pipes/2015-10-07/endpoint-rule-set-1.json index 29e73f0ed24..cc9736a05c8 100644 --- a/models/apis/pipes/2015-10-07/endpoint-rule-set-1.json +++ b/models/apis/pipes/2015-10-07/endpoint-rule-set-1.json @@ -40,7 +40,6 @@ ] } ], - "type": "tree", "rules": [ { "conditions": [ @@ -59,7 +58,6 @@ }, { "conditions": [], - "type": "tree", "rules": [ { "conditions": [ @@ -87,13 +85,14 @@ }, "type": "endpoint" } - ] + ], + "type": "tree" } - ] + ], + "type": "tree" }, { "conditions": [], - "type": "tree", "rules": [ { "conditions": [ @@ -106,7 +105,6 @@ ] } ], - "type": "tree", "rules": [ { "conditions": [ @@ -120,7 +118,6 @@ "assign": "PartitionResult" } ], - "type": "tree", "rules": [ { "conditions": [ @@ -143,7 +140,6 @@ ] } ], - "type": "tree", "rules": [ { "conditions": [ @@ -178,11 +174,9 @@ ] } ], - "type": "tree", "rules": [ { "conditions": [], - "type": "tree", "rules": [ { "conditions": [], @@ -193,16 +187,19 @@ }, "type": "endpoint" } - ] + ], + "type": "tree" } - ] + ], + "type": "tree" }, { "conditions": [], "error": "FIPS and DualStack are enabled, but this partition does not support one or both", "type": "error" } - ] + ], + "type": "tree" }, { "conditions": [ @@ -216,14 +213,12 @@ ] } ], - "type": "tree", "rules": [ { "conditions": [ { "fn": "booleanEquals", "argv": [ - true, { "fn": "getAttr", "argv": [ @@ -232,15 +227,14 @@ }, "supportsFIPS" ] - } + }, + true ] } ], - "type": "tree", "rules": [ { "conditions": [], - "type": "tree", "rules": [ { "conditions": [], @@ -251,16 +245,19 @@ }, "type": "endpoint" } - ] + ], + "type": "tree" } - ] + ], + "type": "tree" }, { "conditions": [], "error": "FIPS is enabled but this partition does not support FIPS", "type": "error" } - ] + ], + "type": "tree" }, { "conditions": [ @@ -274,7 +271,6 @@ ] } ], - "type": "tree", "rules": [ { "conditions": [ @@ -294,11 +290,9 @@ ] } ], - "type": "tree", "rules": [ { "conditions": [], - "type": "tree", "rules": [ { "conditions": [], @@ -309,20 +303,22 @@ }, "type": "endpoint" } - ] + ], + "type": "tree" } - ] + ], + "type": "tree" }, { "conditions": [], "error": "DualStack is enabled but this partition does not support DualStack", "type": "error" } - ] + ], + "type": "tree" }, { "conditions": [], - "type": "tree", "rules": [ { "conditions": [], @@ -333,18 +329,22 @@ }, "type": "endpoint" } - ] + ], + "type": "tree" } - ] + ], + "type": "tree" } - ] + ], + "type": "tree" }, { "conditions": [], "error": "Invalid Configuration: Missing Region", "type": "error" } - ] + ], + "type": "tree" } ] } \ No newline at end of file diff --git a/models/apis/pipes/2015-10-07/endpoint-tests-1.json b/models/apis/pipes/2015-10-07/endpoint-tests-1.json index a94a4fe7c64..95cdbb595d9 100644 --- a/models/apis/pipes/2015-10-07/endpoint-tests-1.json +++ b/models/apis/pipes/2015-10-07/endpoint-tests-1.json @@ -1,54 +1,54 @@ { "testCases": [ { - "documentation": "For region us-gov-east-1 with FIPS enabled and DualStack enabled", + "documentation": "For region us-east-1 with FIPS enabled and DualStack enabled", "expect": { "endpoint": { - "url": "https://pipes-fips.us-gov-east-1.api.aws" + "url": "https://pipes-fips.us-east-1.api.aws" } }, "params": { + "Region": "us-east-1", "UseFIPS": true, - "Region": "us-gov-east-1", "UseDualStack": true } }, { - "documentation": "For region us-gov-east-1 with FIPS enabled and DualStack disabled", + "documentation": "For region us-east-1 with FIPS enabled and DualStack disabled", "expect": { "endpoint": { - 
"url": "https://pipes-fips.us-gov-east-1.amazonaws.com" + "url": "https://pipes-fips.us-east-1.amazonaws.com" } }, "params": { + "Region": "us-east-1", "UseFIPS": true, - "Region": "us-gov-east-1", "UseDualStack": false } }, { - "documentation": "For region us-gov-east-1 with FIPS disabled and DualStack enabled", + "documentation": "For region us-east-1 with FIPS disabled and DualStack enabled", "expect": { "endpoint": { - "url": "https://pipes.us-gov-east-1.api.aws" + "url": "https://pipes.us-east-1.api.aws" } }, "params": { + "Region": "us-east-1", "UseFIPS": false, - "Region": "us-gov-east-1", "UseDualStack": true } }, { - "documentation": "For region us-gov-east-1 with FIPS disabled and DualStack disabled", + "documentation": "For region us-east-1 with FIPS disabled and DualStack disabled", "expect": { "endpoint": { - "url": "https://pipes.us-gov-east-1.amazonaws.com" + "url": "https://pipes.us-east-1.amazonaws.com" } }, "params": { + "Region": "us-east-1", "UseFIPS": false, - "Region": "us-gov-east-1", "UseDualStack": false } }, @@ -60,8 +60,8 @@ } }, "params": { - "UseFIPS": true, "Region": "cn-north-1", + "UseFIPS": true, "UseDualStack": true } }, @@ -73,8 +73,8 @@ } }, "params": { - "UseFIPS": true, "Region": "cn-north-1", + "UseFIPS": true, "UseDualStack": false } }, @@ -86,8 +86,8 @@ } }, "params": { - "UseFIPS": false, "Region": "cn-north-1", + "UseFIPS": false, "UseDualStack": true } }, @@ -99,108 +99,108 @@ } }, "params": { - "UseFIPS": false, "Region": "cn-north-1", + "UseFIPS": false, "UseDualStack": false } }, { - "documentation": "For region us-iso-east-1 with FIPS enabled and DualStack enabled", + "documentation": "For region us-gov-east-1 with FIPS enabled and DualStack enabled", "expect": { - "error": "FIPS and DualStack are enabled, but this partition does not support one or both" + "endpoint": { + "url": "https://pipes-fips.us-gov-east-1.api.aws" + } }, "params": { + "Region": "us-gov-east-1", "UseFIPS": true, - "Region": "us-iso-east-1", "UseDualStack": true } }, { - "documentation": "For region us-iso-east-1 with FIPS enabled and DualStack disabled", + "documentation": "For region us-gov-east-1 with FIPS enabled and DualStack disabled", "expect": { "endpoint": { - "url": "https://pipes-fips.us-iso-east-1.c2s.ic.gov" + "url": "https://pipes-fips.us-gov-east-1.amazonaws.com" } }, "params": { + "Region": "us-gov-east-1", "UseFIPS": true, - "Region": "us-iso-east-1", "UseDualStack": false } }, { - "documentation": "For region us-iso-east-1 with FIPS disabled and DualStack enabled", + "documentation": "For region us-gov-east-1 with FIPS disabled and DualStack enabled", "expect": { - "error": "DualStack is enabled but this partition does not support DualStack" + "endpoint": { + "url": "https://pipes.us-gov-east-1.api.aws" + } }, "params": { + "Region": "us-gov-east-1", "UseFIPS": false, - "Region": "us-iso-east-1", "UseDualStack": true } }, { - "documentation": "For region us-iso-east-1 with FIPS disabled and DualStack disabled", + "documentation": "For region us-gov-east-1 with FIPS disabled and DualStack disabled", "expect": { "endpoint": { - "url": "https://pipes.us-iso-east-1.c2s.ic.gov" + "url": "https://pipes.us-gov-east-1.amazonaws.com" } }, "params": { + "Region": "us-gov-east-1", "UseFIPS": false, - "Region": "us-iso-east-1", "UseDualStack": false } }, { - "documentation": "For region us-east-1 with FIPS enabled and DualStack enabled", + "documentation": "For region us-iso-east-1 with FIPS enabled and DualStack enabled", "expect": { - "endpoint": { - "url": 
"https://pipes-fips.us-east-1.api.aws" - } + "error": "FIPS and DualStack are enabled, but this partition does not support one or both" }, "params": { + "Region": "us-iso-east-1", "UseFIPS": true, - "Region": "us-east-1", "UseDualStack": true } }, { - "documentation": "For region us-east-1 with FIPS enabled and DualStack disabled", + "documentation": "For region us-iso-east-1 with FIPS enabled and DualStack disabled", "expect": { "endpoint": { - "url": "https://pipes-fips.us-east-1.amazonaws.com" + "url": "https://pipes-fips.us-iso-east-1.c2s.ic.gov" } }, "params": { + "Region": "us-iso-east-1", "UseFIPS": true, - "Region": "us-east-1", "UseDualStack": false } }, { - "documentation": "For region us-east-1 with FIPS disabled and DualStack enabled", + "documentation": "For region us-iso-east-1 with FIPS disabled and DualStack enabled", "expect": { - "endpoint": { - "url": "https://pipes.us-east-1.api.aws" - } + "error": "DualStack is enabled but this partition does not support DualStack" }, "params": { + "Region": "us-iso-east-1", "UseFIPS": false, - "Region": "us-east-1", "UseDualStack": true } }, { - "documentation": "For region us-east-1 with FIPS disabled and DualStack disabled", + "documentation": "For region us-iso-east-1 with FIPS disabled and DualStack disabled", "expect": { "endpoint": { - "url": "https://pipes.us-east-1.amazonaws.com" + "url": "https://pipes.us-iso-east-1.c2s.ic.gov" } }, "params": { + "Region": "us-iso-east-1", "UseFIPS": false, - "Region": "us-east-1", "UseDualStack": false } }, @@ -210,8 +210,8 @@ "error": "FIPS and DualStack are enabled, but this partition does not support one or both" }, "params": { - "UseFIPS": true, "Region": "us-isob-east-1", + "UseFIPS": true, "UseDualStack": true } }, @@ -223,8 +223,8 @@ } }, "params": { - "UseFIPS": true, "Region": "us-isob-east-1", + "UseFIPS": true, "UseDualStack": false } }, @@ -234,8 +234,8 @@ "error": "DualStack is enabled but this partition does not support DualStack" }, "params": { - "UseFIPS": false, "Region": "us-isob-east-1", + "UseFIPS": false, "UseDualStack": true } }, @@ -247,21 +247,34 @@ } }, "params": { - "UseFIPS": false, "Region": "us-isob-east-1", + "UseFIPS": false, "UseDualStack": false } }, { - "documentation": "For custom endpoint with fips disabled and dualstack disabled", + "documentation": "For custom endpoint with region set and fips disabled and dualstack disabled", "expect": { "endpoint": { "url": "https://example.com" } }, "params": { - "UseFIPS": false, "Region": "us-east-1", + "UseFIPS": false, + "UseDualStack": false, + "Endpoint": "https://example.com" + } + }, + { + "documentation": "For custom endpoint with region not set and fips disabled and dualstack disabled", + "expect": { + "endpoint": { + "url": "https://example.com" + } + }, + "params": { + "UseFIPS": false, "UseDualStack": false, "Endpoint": "https://example.com" } @@ -272,8 +285,8 @@ "error": "Invalid Configuration: FIPS and custom endpoint are not supported" }, "params": { - "UseFIPS": true, "Region": "us-east-1", + "UseFIPS": true, "UseDualStack": false, "Endpoint": "https://example.com" } @@ -284,11 +297,17 @@ "error": "Invalid Configuration: Dualstack and custom endpoint are not supported" }, "params": { - "UseFIPS": false, "Region": "us-east-1", + "UseFIPS": false, "UseDualStack": true, "Endpoint": "https://example.com" } + }, + { + "documentation": "Missing region", + "expect": { + "error": "Invalid Configuration: Missing Region" + } } ], "version": "1.0" diff --git 
a/models/apis/resource-explorer-2/2022-07-28/api-2.json b/models/apis/resource-explorer-2/2022-07-28/api-2.json index fdaaf871737..842dfbb508b 100644 --- a/models/apis/resource-explorer-2/2022-07-28/api-2.json +++ b/models/apis/resource-explorer-2/2022-07-28/api-2.json @@ -212,6 +212,9 @@ }, "DisassociateDefaultView": { "errors": [ + { + "shape": "ResourceNotFoundException" + }, { "shape": "InternalServerException" }, @@ -233,6 +236,31 @@ "idempotent": true, "name": "DisassociateDefaultView" }, + "GetAccountLevelServiceConfiguration": { + "errors": [ + { + "shape": "ResourceNotFoundException" + }, + { + "shape": "InternalServerException" + }, + { + "shape": "ThrottlingException" + }, + { + "shape": "AccessDeniedException" + } + ], + "http": { + "method": "POST", + "requestUri": "/GetAccountLevelServiceConfiguration", + "responseCode": 200 + }, + "name": "GetAccountLevelServiceConfiguration", + "output": { + "shape": "GetAccountLevelServiceConfigurationOutput" + } + }, "GetDefaultView": { "errors": [ { @@ -351,6 +379,34 @@ "shape": "ListIndexesOutput" } }, + "ListIndexesForMembers": { + "errors": [ + { + "shape": "InternalServerException" + }, + { + "shape": "ValidationException" + }, + { + "shape": "ThrottlingException" + }, + { + "shape": "AccessDeniedException" + } + ], + "http": { + "method": "POST", + "requestUri": "/ListIndexesForMembers", + "responseCode": 200 + }, + "input": { + "shape": "ListIndexesForMembersInput" + }, + "name": "ListIndexesForMembers", + "output": { + "shape": "ListIndexesForMembersOutput" + } + }, "ListSupportedResourceTypes": { "errors": [ { @@ -617,6 +673,13 @@ } }, "shapes": { + "AWSServiceAccessStatus": { + "enum": [ + "ENABLED", + "DISABLED" + ], + "type": "string" + }, "AccessDeniedException": { "error": { "httpStatusCode": 403, @@ -630,6 +693,11 @@ }, "type": "structure" }, + "AccountId": { + "max": 2048, + "min": 1, + "type": "string" + }, "AssociateDefaultViewInput": { "members": { "ViewArn": { @@ -760,6 +828,9 @@ "IncludedProperties": { "shape": "IncludedPropertyList" }, + "Scope": { + "shape": "CreateViewInputScopeString" + }, "Tags": { "shape": "TagMap" }, @@ -777,6 +848,11 @@ "min": 1, "type": "string" }, + "CreateViewInputScopeString": { + "max": 2048, + "min": 1, + "type": "string" + }, "CreateViewOutput": { "members": { "View": { @@ -834,6 +910,14 @@ }, "type": "structure" }, + "GetAccountLevelServiceConfigurationOutput": { + "members": { + "OrgConfiguration": { + "shape": "OrgConfiguration" + } + }, + "type": "structure" + }, "GetDefaultViewOutput": { "members": { "ViewArn": { @@ -970,6 +1054,53 @@ }, "type": "structure" }, + "ListIndexesForMembersInput": { + "members": { + "AccountIdList": { + "shape": "ListIndexesForMembersInputAccountIdListList" + }, + "MaxResults": { + "shape": "ListIndexesForMembersInputMaxResultsInteger" + }, + "NextToken": { + "shape": "ListIndexesForMembersInputNextTokenString" + } + }, + "required": [ + "AccountIdList" + ], + "type": "structure" + }, + "ListIndexesForMembersInputAccountIdListList": { + "max": 10, + "member": { + "shape": "AccountId" + }, + "min": 1, + "type": "list" + }, + "ListIndexesForMembersInputMaxResultsInteger": { + "box": true, + "max": 10, + "min": 1, + "type": "integer" + }, + "ListIndexesForMembersInputNextTokenString": { + "max": 2048, + "min": 1, + "type": "string" + }, + "ListIndexesForMembersOutput": { + "members": { + "Indexes": { + "shape": "MemberIndexList" + }, + "NextToken": { + "shape": "String" + } + }, + "type": "structure" + }, "ListIndexesInput": { "members": { "MaxResults": { 
@@ -1079,7 +1210,7 @@ }, "ListViewsInputMaxResultsInteger": { "box": true, - "max": 20, + "max": 50, "min": 1, "type": "integer" }, @@ -1098,6 +1229,43 @@ "box": true, "type": "long" }, + "MemberIndex": { + "members": { + "AccountId": { + "shape": "String" + }, + "Arn": { + "shape": "String" + }, + "Region": { + "shape": "String" + }, + "Type": { + "shape": "IndexType" + } + }, + "type": "structure" + }, + "MemberIndexList": { + "member": { + "shape": "MemberIndex" + }, + "type": "list" + }, + "OrgConfiguration": { + "members": { + "AWSServiceAccessStatus": { + "shape": "AWSServiceAccessStatus" + }, + "ServiceLinkedRole": { + "shape": "String" + } + }, + "required": [ + "AWSServiceAccessStatus" + ], + "type": "structure" + }, "QueryString": { "max": 1011, "min": 0, @@ -1300,6 +1468,7 @@ "member": { "shape": "String" }, + "sensitive": true, "type": "list" }, "SupportedResourceType": { @@ -1321,6 +1490,7 @@ "key": { "shape": "String" }, + "sensitive": true, "type": "map", "value": { "shape": "String" diff --git a/models/apis/resource-explorer-2/2022-07-28/docs-2.json b/models/apis/resource-explorer-2/2022-07-28/docs-2.json index 26a3c6c56bd..6b1c02a6fc3 100644 --- a/models/apis/resource-explorer-2/2022-07-28/docs-2.json +++ b/models/apis/resource-explorer-2/2022-07-28/docs-2.json @@ -9,10 +9,12 @@ "DeleteIndex": "

Deletes the specified index and turns off Amazon Web Services Resource Explorer in the specified Amazon Web Services Region. When you delete an index, Resource Explorer stops discovering and indexing resources in that Region. Resource Explorer also deletes all views in that Region. These actions occur as asynchronous background tasks. You can check to see when the actions are complete by using the GetIndex operation and checking the Status response value.

If the index you delete is the aggregator index for the Amazon Web Services account, you must wait 24 hours before you can promote another local index to be the aggregator index for the account. Users can't perform account-wide searches using Resource Explorer until another aggregator index is configured.

", "DeleteView": "

Deletes the specified view.

If the specified view is the default view for its Amazon Web Services Region, then all Search operations in that Region must explicitly specify the view to use until you configure a new default by calling the AssociateDefaultView operation.

", "DisassociateDefaultView": "

After you call this operation, the affected Amazon Web Services Region no longer has a default view. All Search operations in that Region must explicitly specify a view or the operation fails. You can configure a new default by calling the AssociateDefaultView operation.

If an Amazon Web Services Region doesn't have a default view configured, then users must explicitly specify a view with every Search operation performed in that Region.

", + "GetAccountLevelServiceConfiguration": "

Retrieves the status of your account's Amazon Web Services service access, and validates the service-linked role required to access the multi-account search feature. Only the management account or a delegated administrator with service access enabled can invoke this API call.

", "GetDefaultView": "

Retrieves the Amazon Resource Name (ARN) of the view that is the default for the Amazon Web Services Region in which you call this operation. You can then call GetView to retrieve the details of that view.

", "GetIndex": "

Retrieves details about the Amazon Web Services Resource Explorer index in the Amazon Web Services Region in which you invoked the operation.

", "GetView": "

Retrieves details of the specified view.

", "ListIndexes": "

Retrieves a list of all of the indexes in Amazon Web Services Regions that are currently collecting resource information for Amazon Web Services Resource Explorer.

", + "ListIndexesForMembers": "

Retrieves a list of a member's indexes in all Amazon Web Services Regions that are currently collecting resource information for Amazon Web Services Resource Explorer. Only the management account or a delegated administrator with service access enabled can invoke this API call.

", "ListSupportedResourceTypes": "

Retrieves a list of all resource types currently supported by Amazon Web Services Resource Explorer.

", "ListTagsForResource": "

Lists the tags that are attached to the specified resource.

", "ListViews": "

Lists the Amazon resource names (ARNs) of the views available in the Amazon Web Services Region in which you call this operation.

Always check the NextToken response parameter for a null value when calling a paginated operation. These operations can occasionally return an empty set of results even when there are more results available. The NextToken response parameter value is null only when there are no more results to display.

", @@ -23,11 +25,23 @@ "UpdateView": "

Modifies some of the details of a view. You can change the filter string and the list of included properties. You can't change the name of the view.

" }, "shapes": { + "AWSServiceAccessStatus": { + "base": null, + "refs": { + "OrgConfiguration$AWSServiceAccessStatus": "

This value displays whether your Amazon Web Services service access is ENABLED or DISABLED.

" + } + }, "AccessDeniedException": { "base": "

The credentials that you used to call this operation don't have the minimum required permissions.

", "refs": { } }, + "AccountId": { + "base": null, + "refs": { + "ListIndexesForMembersInputAccountIdListList$member": null + } + }, "AssociateDefaultViewInput": { "base": null, "refs": { @@ -79,7 +93,7 @@ } }, "ConflictException": { - "base": "

The request failed because either you specified parameters that didn’t match the original request, or you attempted to create a view with a name that already exists in this Amazon Web Services Region.

", + "base": "

If you attempted to create a view, then the request failed because either you specified parameters that didn’t match the original request, or you attempted to create a view with a name that already exists in this Amazon Web Services Region.

If you attempted to create an index, then the request failed because either you specified parameters that didn't match the original request, or an index already exists in the current Amazon Web Services Region.

If you attempted to update an index type to AGGREGATOR, then the request failed because you already have an AGGREGATOR index in a different Amazon Web Services Region.

", "refs": { } }, @@ -104,6 +118,12 @@ "CreateViewInput$ClientToken": "

This value helps ensure idempotency. Resource Explorer uses this value to prevent the accidental creation of duplicate versions. We recommend that you generate a UUID-type value to ensure the uniqueness of your views.

" } }, + "CreateViewInputScopeString": { + "base": null, + "refs": { + "CreateViewInput$Scope": "

The root ARN of the account, an organizational unit (OU), or an organization ARN. If left empty, the default is account.
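A minimal sketch, in AWS SDK for Go (v1), of creating a view scoped to an organizational unit using the new Scope field on CreateViewInput. The view name, OU ARN, and output handling are placeholders, not values taken from this patch.

```go
// Sketch only: create a view whose Scope is an OU rather than the calling account.
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/resourceexplorer2"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := resourceexplorer2.New(sess)

	out, err := svc.CreateView(&resourceexplorer2.CreateViewInput{
		ViewName: aws.String("ou-wide-view"), // hypothetical view name
		// Scope accepts an account, OU, or organization ARN; leaving it empty
		// defaults to the calling account (per the documentation added here).
		Scope: aws.String("arn:aws:organizations::111122223333:ou/o-exampleorgid/ou-examplerootid-exampleouid"),
	})
	if err != nil {
		fmt.Println("CreateView failed:", err)
		return
	}
	fmt.Println("created view:", aws.StringValue(out.View.ViewArn))
}
```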

" + } + }, "CreateViewOutput": { "base": null, "refs": { @@ -141,6 +161,11 @@ "ResourceProperty$Data": "

Details about this property. The content of this field is a JSON object that varies based on the resource type.

" } }, + "GetAccountLevelServiceConfigurationOutput": { + "base": null, + "refs": { + } + }, "GetDefaultViewOutput": { "base": null, "refs": { @@ -212,8 +237,9 @@ "base": null, "refs": { "GetIndexOutput$Type": "

The type of the index in this Region. For information about the aggregator index and how it differs from a local index, see Turning on cross-Region search by creating an aggregator index.

", - "Index$Type": "

The type of index. It can be one of the following values:

", + "Index$Type": "

The type of index. It can be one of the following values:

", "ListIndexesInput$Type": "

If specified, limits the output to only indexes of the specified Type, either LOCAL or AGGREGATOR.

Use this option to discover the aggregator index for your account.

", + "MemberIndex$Type": "

The type of index. It can be one of the following values:

", "UpdateIndexTypeInput$Type": "

The type of the index. To understand the difference between LOCAL and AGGREGATOR, see Turning on cross-Region search in the Amazon Web Services Resource Explorer User Guide.

", "UpdateIndexTypeOutput$Type": "

Specifies the type of the specified index after the operation completes.

" } @@ -223,6 +249,34 @@ "refs": { } }, + "ListIndexesForMembersInput": { + "base": null, + "refs": { + } + }, + "ListIndexesForMembersInputAccountIdListList": { + "base": null, + "refs": { + "ListIndexesForMembersInput$AccountIdList": "

The account IDs limit the output to only indexes from the specified accounts.

" + } + }, + "ListIndexesForMembersInputMaxResultsInteger": { + "base": null, + "refs": { + "ListIndexesForMembersInput$MaxResults": "

The maximum number of results that you want included on each page of the response. If you do not include this parameter, it defaults to a value appropriate to the operation. If additional items exist beyond those included in the current response, the NextToken response element is present and has a value (is not null). Include that value as the NextToken request parameter in the next call to the operation to get the next part of the results.

An API operation can return fewer results than the maximum even when there are more results available. You should check NextToken after every operation to ensure that you receive all of the results.

" + } + }, + "ListIndexesForMembersInputNextTokenString": { + "base": null, + "refs": { + "ListIndexesForMembersInput$NextToken": "

The parameter for receiving additional results if you receive a NextToken response in a previous request. A NextToken response indicates that more output is available. Set this parameter to the value of the previous call's NextToken response to indicate where the output should continue from. The pagination tokens expire after 24 hours.
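The NextToken/MaxResults pattern described here is the standard SDK pagination contract. A minimal AWS SDK for Go (v1) sketch follows, assuming the generated ListIndexesForMembersPages paginator implied by the ListIndexesForMembers entry added to paginators-1.json elsewhere in this patch; the member account IDs are placeholders.

```go
// Sketch only: page through member indexes until NextToken is exhausted.
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/resourceexplorer2"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := resourceexplorer2.New(sess)

	input := &resourceexplorer2.ListIndexesForMembersInput{
		AccountIdList: aws.StringSlice([]string{"111122223333", "444455556666"}), // placeholder member accounts
		MaxResults:    aws.Int64(10),
	}

	// The generated paginator keeps calling the API and feeding NextToken back
	// in until the service returns no more pages.
	err := svc.ListIndexesForMembersPages(input,
		func(page *resourceexplorer2.ListIndexesForMembersOutput, lastPage bool) bool {
			for _, idx := range page.Indexes {
				fmt.Printf("%s %s %s %s\n",
					aws.StringValue(idx.AccountId),
					aws.StringValue(idx.Region),
					aws.StringValue(idx.Type),
					aws.StringValue(idx.Arn))
			}
			return true // continue to the next page
		})
	if err != nil {
		fmt.Println("ListIndexesForMembers failed:", err)
	}
}
```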

" + } + }, + "ListIndexesForMembersOutput": { + "base": null, + "refs": { + } + }, "ListIndexesInput": { "base": null, "refs": { @@ -237,7 +291,7 @@ "ListIndexesInputNextTokenString": { "base": null, "refs": { - "ListIndexesInput$NextToken": "

The parameter for receiving additional results if you receive a NextToken response in a previous request. A NextToken response indicates that more output is available. Set this parameter to the value of the previous call's NextToken response to indicate where the output should continue from.

" + "ListIndexesInput$NextToken": "

The parameter for receiving additional results if you receive a NextToken response in a previous request. A NextToken response indicates that more output is available. Set this parameter to the value of the previous call's NextToken response to indicate where the output should continue from. The pagination tokens expire after 24 hours.

" } }, "ListIndexesInputRegionsList": { @@ -299,6 +353,24 @@ "ResourceCount$TotalResources": "

The number of resources that match the search query. This value can't exceed 1,000. If there are more than 1,000 resources that match the query, then only 1,000 are counted and the Complete field is set to false. We recommend that you refine your query to return a smaller number of results.

" } }, + "MemberIndex": { + "base": "

An index is the data store used by Amazon Web Services Resource Explorer to hold information about your Amazon Web Services resources that the service discovers.

", + "refs": { + "MemberIndexList$member": null + } + }, + "MemberIndexList": { + "base": null, + "refs": { + "ListIndexesForMembersOutput$Indexes": "

A structure that contains the details and status of each index.

" + } + }, + "OrgConfiguration": { + "base": "

This is a structure that contains the status of Amazon Web Services service access, and whether you have a valid service-linked role to enable multi-account search for your organization.

", + "refs": { + "GetAccountLevelServiceConfigurationOutput$OrgConfiguration": "

Details about the organization, and whether configuration is ENABLED or DISABLED.
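A minimal AWS SDK for Go (v1) sketch of reading this configuration via GetAccountLevelServiceConfiguration. The OrgConfiguration field names come from this model; the empty generated input struct and the printed output are assumptions based on standard SDK code generation.

```go
// Sketch only: check whether multi-account search is enabled for the organization.
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/resourceexplorer2"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := resourceexplorer2.New(sess)

	out, err := svc.GetAccountLevelServiceConfiguration(
		&resourceexplorer2.GetAccountLevelServiceConfigurationInput{})
	if err != nil {
		fmt.Println("GetAccountLevelServiceConfiguration failed:", err)
		return
	}

	cfg := out.OrgConfiguration
	fmt.Println("service access:", aws.StringValue(cfg.AWSServiceAccessStatus)) // ENABLED or DISABLED
	fmt.Println("service-linked role:", aws.StringValue(cfg.ServiceLinkedRole))
}
```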

" + } + }, "QueryString": { "base": null, "refs": { @@ -381,7 +453,7 @@ "SearchInputNextTokenString": { "base": null, "refs": { - "SearchInput$NextToken": "

The parameter for receiving additional results if you receive a NextToken response in a previous request. A NextToken response indicates that more output is available. Set this parameter to the value of the previous call's NextToken response to indicate where the output should continue from.

" + "SearchInput$NextToken": "

The parameter for receiving additional results if you receive a NextToken response in a previous request. A NextToken response indicates that more output is available. Set this parameter to the value of the previous call's NextToken response to indicate where the output should continue from. The pagination tokens expire after 24 hours.

" } }, "SearchInputViewArnString": { @@ -398,7 +470,7 @@ "SearchOutputNextTokenString": { "base": null, "refs": { - "SearchOutput$NextToken": "

If present, indicates that more output is available than is included in the current response. Use this value in the NextToken request parameter in a subsequent call to the operation to get the next part of the output. You should repeat this until the NextToken response element comes back as null.

" + "SearchOutput$NextToken": "

If present, indicates that more output is available than is included in the current response. Use this value in the NextToken request parameter in a subsequent call to the operation to get the next part of the output. You should repeat this until the NextToken response element comes back as null. The pagination tokens expire after 24 hours.

" } }, "SearchOutputViewArnString": { @@ -421,7 +493,7 @@ "BatchGetViewError$ViewArn": "

The Amazon resource name (ARN) of the view for which Resource Explorer failed to retrieve details.

", "BatchGetViewInputViewArnsList$member": null, "ConflictException$Message": null, - "CreateIndexInput$ClientToken": "

This value helps ensure idempotency. Resource Explorer uses this value to prevent the accidental creation of duplicate versions. We recommend that you generate a UUID-type value to ensure the uniqueness of your views.

", + "CreateIndexInput$ClientToken": "

This value helps ensure idempotency. Resource Explorer uses this value to prevent the accidental creation of duplicate versions. We recommend that you generate a UUID-type value to ensure the uniqueness of your index.

", "CreateIndexOutput$Arn": "

The ARN of the new local index for the Region. You can reference this ARN in IAM permission policies to authorize the following operations: DeleteIndex | GetIndex | UpdateIndexType | CreateView

", "DeleteIndexInput$Arn": "

The Amazon resource name (ARN) of the index that you want to delete.

", "DeleteIndexOutput$Arn": "

The Amazon resource name (ARN) of the index that you successfully started the deletion process.

This operation is asynchronous. To check its status, call the GetIndex operation.

", @@ -431,13 +503,18 @@ "Index$Arn": "

The Amazon resource name (ARN) of the index.

", "Index$Region": "

The Amazon Web Services Region in which the index exists.

", "InternalServerException$Message": null, + "ListIndexesForMembersOutput$NextToken": "

If present, indicates that more output is available than is included in the current response. Use this value in the NextToken request parameter in a subsequent call to the operation to get the next part of the output. You should repeat this until the NextToken response element comes back as null. The pagination tokens expire after 24 hours.

", "ListIndexesInputRegionsList$member": null, - "ListIndexesOutput$NextToken": "

If present, indicates that more output is available than is included in the current response. Use this value in the NextToken request parameter in a subsequent call to the operation to get the next part of the output. You should repeat this until the NextToken response element comes back as null.

", - "ListSupportedResourceTypesInput$NextToken": "

The parameter for receiving additional results if you receive a NextToken response in a previous request. A NextToken response indicates that more output is available. Set this parameter to the value of the previous call's NextToken response to indicate where the output should continue from.

", - "ListSupportedResourceTypesOutput$NextToken": "

If present, indicates that more output is available than is included in the current response. Use this value in the NextToken request parameter in a subsequent call to the operation to get the next part of the output. You should repeat this until the NextToken response element comes back as null.

", + "ListIndexesOutput$NextToken": "

If present, indicates that more output is available than is included in the current response. Use this value in the NextToken request parameter in a subsequent call to the operation to get the next part of the output. You should repeat this until the NextToken response element comes back as null. The pagination tokens expire after 24 hours.

", + "ListSupportedResourceTypesInput$NextToken": "

The parameter for receiving additional results if you receive a NextToken response in a previous request. A NextToken response indicates that more output is available. Set this parameter to the value of the previous call's NextToken response to indicate where the output should continue from. The pagination tokens expire after 24 hours.

", + "ListSupportedResourceTypesOutput$NextToken": "

If present, indicates that more output is available than is included in the current response. Use this value in the NextToken request parameter in a subsequent call to the operation to get the next part of the output. You should repeat this until the NextToken response element comes back as null. The pagination tokens expire after 24 hours.

", "ListTagsForResourceInput$resourceArn": "

The Amazon resource name (ARN) of the view or index that you want to attach tags to.

", - "ListViewsInput$NextToken": "

The parameter for receiving additional results if you receive a NextToken response in a previous request. A NextToken response indicates that more output is available. Set this parameter to the value of the previous call's NextToken response to indicate where the output should continue from.

", - "ListViewsOutput$NextToken": "

If present, indicates that more output is available than is included in the current response. Use this value in the NextToken request parameter in a subsequent call to the operation to get the next part of the output. You should repeat this until the NextToken response element comes back as null.

", + "ListViewsInput$NextToken": "

The parameter for receiving additional results if you receive a NextToken response in a previous request. A NextToken response indicates that more output is available. Set this parameter to the value of the previous call's NextToken response to indicate where the output should continue from. The pagination tokens expire after 24 hours.

", + "ListViewsOutput$NextToken": "

If present, indicates that more output is available than is included in the current response. Use this value in the NextToken request parameter in a subsequent call to the operation to get the next part of the output. You should repeat this until the NextToken response element comes back as null. The pagination tokens expire after 24 hours.

", + "MemberIndex$AccountId": "

The account ID for the index.

", + "MemberIndex$Arn": "

The Amazon resource name (ARN) of the index.

", + "MemberIndex$Region": "

The Amazon Web Services Region in which the index exists.

", + "OrgConfiguration$ServiceLinkedRole": "

This value shows whether or not you have a valid service-linked role required to start the multi-account search feature.

", "RegionList$member": null, "Resource$Arn": "

The Amazon resource name (ARN) of the resource.

", "Resource$OwningAccountId": "

The Amazon Web Services account that owns the resource.

", @@ -516,7 +593,7 @@ } }, "ThrottlingException": { - "base": "

The request failed because you exceeded a rate limit for this operation. For more information, see Quotas for Resource Explorer.

", + "base": "

The request failed because you exceeded a rate limit for this operation. For more information, see Quotas for Resource Explorer.

", "refs": { } }, diff --git a/models/apis/resource-explorer-2/2022-07-28/endpoint-rule-set-1.json b/models/apis/resource-explorer-2/2022-07-28/endpoint-rule-set-1.json index 62dff1a5cb6..003af7baa24 100644 --- a/models/apis/resource-explorer-2/2022-07-28/endpoint-rule-set-1.json +++ b/models/apis/resource-explorer-2/2022-07-28/endpoint-rule-set-1.json @@ -33,7 +33,6 @@ ] } ], - "type": "tree", "rules": [ { "conditions": [ @@ -61,217 +60,188 @@ }, "type": "endpoint" } - ] + ], + "type": "tree" }, { - "conditions": [], - "type": "tree", + "conditions": [ + { + "fn": "isSet", + "argv": [ + { + "ref": "Region" + } + ] + } + ], "rules": [ { "conditions": [ { - "fn": "isSet", + "fn": "aws.partition", "argv": [ { "ref": "Region" } - ] + ], + "assign": "PartitionResult" } ], - "type": "tree", "rules": [ { "conditions": [ { - "fn": "aws.partition", + "fn": "booleanEquals", "argv": [ + true, { - "ref": "Region" + "fn": "getAttr", + "argv": [ + { + "ref": "PartitionResult" + }, + "supportsDualStack" + ] } - ], - "assign": "PartitionResult" + ] } ], - "type": "tree", "rules": [ { - "conditions": [], - "type": "tree", + "conditions": [ + { + "fn": "booleanEquals", + "argv": [ + { + "ref": "UseFIPS" + }, + true + ] + } + ], "rules": [ { "conditions": [ { "fn": "booleanEquals", "argv": [ - true, { "fn": "getAttr", "argv": [ { "ref": "PartitionResult" }, - "supportsDualStack" + "supportsFIPS" ] - } + }, + true ] } ], - "type": "tree", "rules": [ - { - "conditions": [ - { - "fn": "booleanEquals", - "argv": [ - { - "ref": "UseFIPS" - }, - true - ] - } - ], - "type": "tree", - "rules": [ - { - "conditions": [ - { - "fn": "booleanEquals", - "argv": [ - true, - { - "fn": "getAttr", - "argv": [ - { - "ref": "PartitionResult" - }, - "supportsFIPS" - ] - } - ] - } - ], - "type": "tree", - "rules": [ - { - "conditions": [], - "endpoint": { - "url": "https://resource-explorer-2-fips.{Region}.{PartitionResult#dualStackDnsSuffix}", - "properties": {}, - "headers": {} - }, - "type": "endpoint" - } - ] - }, - { - "conditions": [], - "error": "FIPS is enabled but this partition does not support FIPS", - "type": "error" - } - ] - }, { "conditions": [], "endpoint": { - "url": "https://resource-explorer-2.{Region}.{PartitionResult#dualStackDnsSuffix}", + "url": "https://resource-explorer-2-fips.{Region}.{PartitionResult#dualStackDnsSuffix}", "properties": {}, "headers": {} }, "type": "endpoint" } - ] + ], + "type": "tree" }, { "conditions": [], - "type": "tree", - "rules": [ + "error": "FIPS is enabled but this partition does not support FIPS", + "type": "error" + } + ], + "type": "tree" + }, + { + "conditions": [], + "endpoint": { + "url": "https://resource-explorer-2.{Region}.{PartitionResult#dualStackDnsSuffix}", + "properties": {}, + "headers": {} + }, + "type": "endpoint" + } + ], + "type": "tree" + }, + { + "conditions": [ + { + "fn": "booleanEquals", + "argv": [ + { + "ref": "UseFIPS" + }, + true + ] + } + ], + "rules": [ + { + "conditions": [ + { + "fn": "booleanEquals", + "argv": [ { - "conditions": [ - { - "fn": "booleanEquals", - "argv": [ - { - "ref": "UseFIPS" - }, - true - ] - } - ], - "type": "tree", - "rules": [ + "fn": "getAttr", + "argv": [ { - "conditions": [ - { - "fn": "booleanEquals", - "argv": [ - true, - { - "fn": "getAttr", - "argv": [ - { - "ref": "PartitionResult" - }, - "supportsFIPS" - ] - } - ] - } - ], - "type": "tree", - "rules": [ - { - "conditions": [], - "type": "tree", - "rules": [ - { - "conditions": [], - "endpoint": { - "url": 
"https://resource-explorer-2-fips.{Region}.{PartitionResult#dnsSuffix}", - "properties": {}, - "headers": {} - }, - "type": "endpoint" - } - ] - } - ] + "ref": "PartitionResult" }, - { - "conditions": [], - "error": "FIPS is enabled but this partition does not support FIPS", - "type": "error" - } + "supportsFIPS" ] }, - { - "conditions": [], - "type": "tree", - "rules": [ - { - "conditions": [], - "endpoint": { - "url": "https://resource-explorer-2.{Region}.{PartitionResult#dnsSuffix}", - "properties": {}, - "headers": {} - }, - "type": "endpoint" - } - ] - } + true ] } - ] + ], + "rules": [ + { + "conditions": [], + "endpoint": { + "url": "https://resource-explorer-2-fips.{Region}.{PartitionResult#dnsSuffix}", + "properties": {}, + "headers": {} + }, + "type": "endpoint" + } + ], + "type": "tree" + }, + { + "conditions": [], + "error": "FIPS is enabled but this partition does not support FIPS", + "type": "error" } - ] + ], + "type": "tree" + }, + { + "conditions": [], + "endpoint": { + "url": "https://resource-explorer-2.{Region}.{PartitionResult#dnsSuffix}", + "properties": {}, + "headers": {} + }, + "type": "endpoint" } - ] - }, - { - "conditions": [], - "error": "Invalid Configuration: Missing Region", - "type": "error" + ], + "type": "tree" } - ] + ], + "type": "tree" + }, + { + "conditions": [], + "error": "Invalid Configuration: Missing Region", + "type": "error" } ] } \ No newline at end of file diff --git a/models/apis/resource-explorer-2/2022-07-28/endpoint-tests-1.json b/models/apis/resource-explorer-2/2022-07-28/endpoint-tests-1.json index 7153473aa14..6bc38817f06 100644 --- a/models/apis/resource-explorer-2/2022-07-28/endpoint-tests-1.json +++ b/models/apis/resource-explorer-2/2022-07-28/endpoint-tests-1.json @@ -107,6 +107,12 @@ "UseFIPS": true, "Endpoint": "https://example.com" } + }, + { + "documentation": "Missing region", + "expect": { + "error": "Invalid Configuration: Missing Region" + } } ], "version": "1.0" diff --git a/models/apis/resource-explorer-2/2022-07-28/paginators-1.json b/models/apis/resource-explorer-2/2022-07-28/paginators-1.json index 8cb90d36884..a88798fc261 100644 --- a/models/apis/resource-explorer-2/2022-07-28/paginators-1.json +++ b/models/apis/resource-explorer-2/2022-07-28/paginators-1.json @@ -6,6 +6,12 @@ "limit_key": "MaxResults", "result_key": "Indexes" }, + "ListIndexesForMembers": { + "input_token": "NextToken", + "output_token": "NextToken", + "limit_key": "MaxResults", + "result_key": "Indexes" + }, "ListSupportedResourceTypes": { "input_token": "NextToken", "output_token": "NextToken", diff --git a/models/apis/sagemaker/2017-07-24/api-2.json b/models/apis/sagemaker/2017-07-24/api-2.json index 9b015c2676f..ed16f19c124 100644 --- a/models/apis/sagemaker/2017-07-24/api-2.json +++ b/models/apis/sagemaker/2017-07-24/api-2.json @@ -12317,11 +12317,7 @@ }, "InferenceSpecification":{ "type":"structure", - "required":[ - "Containers", - "SupportedContentTypes", - "SupportedResponseMIMETypes" - ], + "required":["Containers"], "members":{ "Containers":{"shape":"ModelPackageContainerDefinitionList"}, "SupportedTransformInstanceTypes":{"shape":"TransformInstanceTypes"}, diff --git a/models/apis/sagemaker/2017-07-24/docs-2.json b/models/apis/sagemaker/2017-07-24/docs-2.json index 8af67aa4e0e..7aca195274a 100644 --- a/models/apis/sagemaker/2017-07-24/docs-2.json +++ b/models/apis/sagemaker/2017-07-24/docs-2.json @@ -1113,7 +1113,7 @@ "refs": { "AutoMLResolvedAttributes$AutoMLJobObjective": null, "CreateAutoMLJobRequest$AutoMLJobObjective": "

Specifies a metric to minimize or maximize as the objective of a job. If not specified, the default objective metric depends on the problem type. See AutoMLJobObjective for the default values.

", - "CreateAutoMLJobV2Request$AutoMLJobObjective": "

Specifies a metric to minimize or maximize as the objective of a job. If not specified, the default objective metric depends on the problem type. For the list of default values per problem type, see AutoMLJobObjective.

", + "CreateAutoMLJobV2Request$AutoMLJobObjective": "

Specifies a metric to minimize or maximize as the objective of a job. If not specified, the default objective metric depends on the problem type. For the list of default values per problem type, see AutoMLJobObjective.

", "DescribeAutoMLJobResponse$AutoMLJobObjective": "

Returns the job's objective.

", "DescribeAutoMLJobV2Response$AutoMLJobObjective": "

Returns the job's objective.

", "ResolvedAttributes$AutoMLJobObjective": null @@ -1170,7 +1170,7 @@ "AutoMLMetricEnum": { "base": null, "refs": { - "AutoMLJobObjective$MetricName": "

The name of the objective metric used to measure the predictive quality of a machine learning system. During training, the model's parameters are updated iteratively to optimize its performance based on the feedback provided by the objective metric when evaluating the model on the validation dataset.

The list of available metrics supported by Autopilot and the default metric applied when you do not specify a metric name explicitly depend on the problem type.

", + "AutoMLJobObjective$MetricName": "

The name of the objective metric used to measure the predictive quality of a machine learning system. During training, the model's parameters are updated iteratively to optimize its performance based on the feedback provided by the objective metric when evaluating the model on the validation dataset.

The list of available metrics supported by Autopilot and the default metric applied when you do not specify a metric name explicitly depend on the problem type.

", "FinalAutoMLJobObjectiveMetric$MetricName": "

The name of the metric with the best result. For a description of the possible objective metrics, see AutoMLJobObjective$MetricName.

", "FinalAutoMLJobObjectiveMetric$StandardMetricName": "

The name of the standard metric. For a description of the standard metrics, see Autopilot candidate metrics.

", "MetricDatum$MetricName": "

The name of the metric.

" @@ -1328,7 +1328,7 @@ "BaseModelName": { "base": null, "refs": { - "TextGenerationJobConfig$BaseModelName": "

The name of the base model to fine-tune. Autopilot supports fine-tuning a variety of large language models. For information on the list of supported models, see Text generation models supporting fine-tuning in Autopilot. If no BaseModelName is provided, the default model used is Falcon-7B-Instruct.

", + "TextGenerationJobConfig$BaseModelName": "

The name of the base model to fine-tune. Autopilot supports fine-tuning a variety of large language models. For information on the list of supported models, see Text generation models supporting fine-tuning in Autopilot. If no BaseModelName is provided, the default model used is Falcon-7B-Instruct.

", "TextGenerationResolvedAttributes$BaseModelName": "

The name of the base model to fine-tune.

" } }, diff --git a/models/apis/signer/2017-08-25/docs-2.json b/models/apis/signer/2017-08-25/docs-2.json index f8b58f8bd02..2af8413fe95 100644 --- a/models/apis/signer/2017-08-25/docs-2.json +++ b/models/apis/signer/2017-08-25/docs-2.json @@ -1,6 +1,6 @@ { "version": "2.0", - "service": "

AWS Signer is a fully managed code signing service to help you ensure the trust and integrity of your code.

AWS Signer supports the following applications:

With code signing for AWS Lambda, you can sign AWS Lambda deployment packages. Integrated support is provided for Amazon S3, Amazon CloudWatch, and AWS CloudTrail. In order to sign code, you create a signing profile and then use Signer to sign Lambda zip files in S3.

With code signing for IoT, you can sign code for any IoT device that is supported by AWS. IoT code signing is available for Amazon FreeRTOS and AWS IoT Device Management, and is integrated with AWS Certificate Manager (ACM). In order to sign code, you import a third-party code signing certificate using ACM, and use that to sign updates in Amazon FreeRTOS and AWS IoT Device Management.

With code signing for containers …(TBD)

For more information about AWS Signer, see the AWS Signer Developer Guide.

", + "service": "

AWS Signer is a fully managed code-signing service to help you ensure the trust and integrity of your code.

Signer supports the following applications:

With code signing for AWS Lambda, you can sign AWS Lambda deployment packages. Integrated support is provided for Amazon S3, Amazon CloudWatch, and AWS CloudTrail. In order to sign code, you create a signing profile and then use Signer to sign Lambda zip files in S3.

With code signing for IoT, you can sign code for any IoT device that is supported by AWS. IoT code signing is available for Amazon FreeRTOS and AWS IoT Device Management, and is integrated with AWS Certificate Manager (ACM). In order to sign code, you import a third-party code-signing certificate using ACM, and use that to sign updates in Amazon FreeRTOS and AWS IoT Device Management.

With Signer and the Notation CLI from the Notary Project, you can sign container images stored in a container registry such as Amazon Elastic Container Registry (ECR). The signatures are stored in the registry alongside the images, where they are available for verifying image authenticity and integrity.

For more information about Signer, see the AWS Signer Developer Guide.

", "operations": { "AddProfilePermission": "

Adds cross-account permissions to a signing profile.

", "CancelSigningProfile": "

Changes the state of an ACTIVE signing profile to CANCELED. A canceled profile is still viewable with the ListSigningProfiles operation, but it cannot perform new signing jobs, and is deleted two years after cancelation.

", @@ -9,16 +9,16 @@ "GetSigningPlatform": "

Returns information on a specific signing platform.

", "GetSigningProfile": "

Returns information on a specific signing profile.

", "ListProfilePermissions": "

Lists the cross-account permissions associated with a signing profile.

", - "ListSigningJobs": "

Lists all your signing jobs. You can use the maxResults parameter to limit the number of signing jobs that are returned in the response. If additional jobs remain to be listed, code signing returns a nextToken value. Use this value in subsequent calls to ListSigningJobs to fetch the remaining values. You can continue calling ListSigningJobs with your maxResults parameter and with new values that code signing returns in the nextToken parameter until all of your signing jobs have been returned.

", - "ListSigningPlatforms": "

Lists all signing platforms available in code signing that match the request parameters. If additional jobs remain to be listed, code signing returns a nextToken value. Use this value in subsequent calls to ListSigningJobs to fetch the remaining values. You can continue calling ListSigningJobs with your maxResults parameter and with new values that code signing returns in the nextToken parameter until all of your signing jobs have been returned.

", - "ListSigningProfiles": "

Lists all available signing profiles in your AWS account. Returns only profiles with an ACTIVE status unless the includeCanceled request field is set to true. If additional jobs remain to be listed, code signing returns a nextToken value. Use this value in subsequent calls to ListSigningJobs to fetch the remaining values. You can continue calling ListSigningJobs with your maxResults parameter and with new values that code signing returns in the nextToken parameter until all of your signing jobs have been returned.

", + "ListSigningJobs": "

Lists all your signing jobs. You can use the maxResults parameter to limit the number of signing jobs that are returned in the response. If additional jobs remain to be listed, AWS Signer returns a nextToken value. Use this value in subsequent calls to ListSigningJobs to fetch the remaining values. You can continue calling ListSigningJobs with your maxResults parameter and with new values that Signer returns in the nextToken parameter until all of your signing jobs have been returned.

", + "ListSigningPlatforms": "

Lists all signing platforms available in AWS Signer that match the request parameters. If additional jobs remain to be listed, Signer returns a nextToken value. Use this value in subsequent calls to ListSigningJobs to fetch the remaining values. You can continue calling ListSigningJobs with your maxResults parameter and with new values that Signer returns in the nextToken parameter until all of your signing jobs have been returned.

", + "ListSigningProfiles": "

Lists all available signing profiles in your AWS account. Returns only profiles with an ACTIVE status unless the includeCanceled request field is set to true. If additional jobs remain to be listed, AWS Signer returns a nextToken value. Use this value in subsequent calls to ListSigningJobs to fetch the remaining values. You can continue calling ListSigningJobs with your maxResults parameter and with new values that Signer returns in the nextToken parameter until all of your signing jobs have been returned.

", "ListTagsForResource": "

Returns a list of the tags associated with a signing profile resource.

", - "PutSigningProfile": "

Creates a signing profile. A signing profile is a code signing template that can be used to carry out a pre-defined signing job.

", + "PutSigningProfile": "

Creates a signing profile. A signing profile is a code-signing template that can be used to carry out a pre-defined signing job.

", "RemoveProfilePermission": "

Removes cross-account permissions from a signing profile.

", "RevokeSignature": "

Changes the state of a signing job to REVOKED. This indicates that the signature is no longer valid.

", "RevokeSigningProfile": "

Changes the state of a signing profile to REVOKED. This indicates that signatures generated using the signing profile after an effective start date are no longer valid.

", "SignPayload": "

Signs a binary payload and returns a signature envelope.

", - "StartSigningJob": "

Initiates a signing job to be performed on the code provided. Signing jobs are viewable by the ListSigningJobs operation for two years after they are performed. Note the following requirements:

You can call the DescribeSigningJob and the ListSigningJobs actions after you call StartSigningJob.

For a Java example that shows how to use this action, see StartSigningJob.

", + "StartSigningJob": "

Initiates a signing job to be performed on the code provided. Signing jobs are viewable by the ListSigningJobs operation for two years after they are performed. Note the following requirements:

You can call the DescribeSigningJob and the ListSigningJobs actions after you call StartSigningJob.

For a Java example that shows how to use this action, see StartSigningJob.

", "TagResource": "

Adds one or more tags to a signing profile. Tags are labels that you can use to identify and organize your AWS resources. Each tag consists of a key and an optional value. To specify the signing profile, use its Amazon Resource Name (ARN). To specify the tag, use a key-value pair.

", "UntagResource": "

Removes one or more tags from a signing profile. To remove the tags, specify a list of tag keys.

" }, @@ -92,7 +92,7 @@ "base": null, "refs": { "GetSigningPlatformResponse$category": "

The category type of the target signing platform.

", - "SigningPlatform$category": "

The category of a code signing platform.

" + "SigningPlatform$category": "

The category of a signing platform.

" } }, "CertificateArn": { @@ -104,7 +104,7 @@ "CertificateHashes": { "base": null, "refs": { - "GetRevocationStatusRequest$certificateHashes": "

A list of composite signed hashes that identify certificates.

A certificate identifier consists of a subject certificate TBS hash (signed by the parent CA) combined with a parent CA TBS hash (signed by the parent CA’s CA). Root certificates are defined as their own CA.

" + "GetRevocationStatusRequest$certificateHashes": "

A list of composite signed hashes that identify certificates.

A certificate identifier consists of a subject certificate TBS hash (signed by the parent CA) combined with a parent CA TBS hash (signed by the parent CA’s CA). Root certificates are defined as their own CA.

The following example shows how to calculate a hash for this parameter using OpenSSL commands:

openssl asn1parse -in childCert.pem -strparse 4 -out childCert.tbs

openssl sha384 < childCert.tbs -binary > childCertTbsHash

openssl asn1parse -in parentCert.pem -strparse 4 -out parentCert.tbs

openssl sha384 < parentCert.tbs -binary > parentCertTbsHash

xxd -p childCertTbsHash > certificateHash.hex

xxd -p parentCertTbsHash >> certificateHash.hex

cat certificateHash.hex | tr -d '\\n'

" } }, "ClientRequestToken": { @@ -147,21 +147,21 @@ "EncryptionAlgorithm": { "base": null, "refs": { - "EncryptionAlgorithmOptions$defaultValue": "

The default encryption algorithm that is used by a code signing job.

", + "EncryptionAlgorithmOptions$defaultValue": "

The default encryption algorithm that is used by a code-signing job.

", "EncryptionAlgorithms$member": null, - "SigningConfigurationOverrides$encryptionAlgorithm": "

A specified override of the default encryption algorithm that is used in a code signing job.

" + "SigningConfigurationOverrides$encryptionAlgorithm": "

A specified override of the default encryption algorithm that is used in a code-signing job.

" } }, "EncryptionAlgorithmOptions": { - "base": "

The encryption algorithm options that are available to a code signing job.

", + "base": "

The encryption algorithm options that are available to a code-signing job.

", "refs": { - "SigningConfiguration$encryptionAlgorithmOptions": "

The encryption algorithm options that are available for a code signing job.

" + "SigningConfiguration$encryptionAlgorithmOptions": "

The encryption algorithm options that are available for a code-signing job.

" } }, "EncryptionAlgorithms": { "base": null, "refs": { - "EncryptionAlgorithmOptions$allowedValues": "

The set of accepted encryption algorithms that are allowed in a code signing job.

" + "EncryptionAlgorithmOptions$allowedValues": "

The set of accepted encryption algorithms that are allowed in a code-signing job.

" } }, "ErrorCode": { @@ -227,35 +227,35 @@ "HashAlgorithm": { "base": null, "refs": { - "HashAlgorithmOptions$defaultValue": "

The default hash algorithm that is used in a code signing job.

", + "HashAlgorithmOptions$defaultValue": "

The default hash algorithm that is used in a code-signing job.

", "HashAlgorithms$member": null, - "SigningConfigurationOverrides$hashAlgorithm": "

A specified override of the default hash algorithm that is used in a code signing job.

" + "SigningConfigurationOverrides$hashAlgorithm": "

A specified override of the default hash algorithm that is used in a code-signing job.

" } }, "HashAlgorithmOptions": { - "base": "

The hash algorithms that are available to a code signing job.

", + "base": "

The hash algorithms that are available to a code-signing job.

", "refs": { - "SigningConfiguration$hashAlgorithmOptions": "

The hash algorithm options that are available for a code signing job.

" + "SigningConfiguration$hashAlgorithmOptions": "

The hash algorithm options that are available for a code-signing job.

" } }, "HashAlgorithms": { "base": null, "refs": { - "HashAlgorithmOptions$allowedValues": "

The set of accepted hash algorithms allowed in a code signing job.

" + "HashAlgorithmOptions$allowedValues": "

The set of accepted hash algorithms allowed in a code-signing job.

" } }, "ImageFormat": { "base": null, "refs": { "ImageFormats$member": null, - "SigningImageFormat$defaultFormat": "

The default format of a code signing image.

", + "SigningImageFormat$defaultFormat": "

The default format of a signing image.

", "SigningPlatformOverrides$signingImageFormat": "

A signed image is a JSON object. When overriding the default signing platform configuration, a customer can select either of two signing formats, JSONEmbedded or JSONDetached. (A third format value, JSON, is reserved for future use.) With JSONEmbedded, the signing image has the payload embedded in it. With JSONDetached, the payload is not embedded in the signing image.

" } }, "ImageFormats": { "base": null, "refs": { - "SigningImageFormat$supportedFormats": "

The supported formats of a code signing image.

" + "SigningImageFormat$supportedFormats": "

The supported formats of a signing image.

" } }, "Integer": { @@ -349,13 +349,13 @@ "base": null, "refs": { "GetSigningPlatformResponse$maxSizeInMB": "

The maximum size (in MB) of the payload that can be signed by the target platform.

", - "SigningPlatform$maxSizeInMB": "

The maximum size (in MB) of code that can be signed by a code signing platform.

" + "SigningPlatform$maxSizeInMB": "

The maximum size (in MB) of code that can be signed by a signing platform.

" } }, "Metadata": { "base": null, "refs": { - "SignPayloadResponse$metadata": "

Information including the signing profile ARN and the signing job ID. Clients use metadata to signature records, for example, as annotations added to the signature manifest inside an OCI registry.

" + "SignPayloadResponse$metadata": "

Information including the signing profile ARN and the signing job ID.

" } }, "NextToken": { @@ -414,7 +414,7 @@ "Prefix": { "base": null, "refs": { - "S3Destination$prefix": "

An Amazon S3 prefix that you can use to limit responses to those that begin with the specified prefix.

" + "S3Destination$prefix": "

An S3 prefix that you can use to limit responses to those that begin with the specified prefix.

" } }, "ProfileName": { @@ -500,23 +500,23 @@ "RevokedEntities": { "base": null, "refs": { - "GetRevocationStatusResponse$revokedEntities": "

A list of revoked entities (including one or more of the signing profile ARN, signing job ID, and certificate hash) supplied as input to the API.

" + "GetRevocationStatusResponse$revokedEntities": "

A list of revoked entities (including zero or more of the signing profile ARN, signing job ARN, and certificate hashes) supplied as input to the API.

" } }, "S3Destination": { - "base": "

The name and prefix of the S3 bucket where code signing saves your signed objects.

", + "base": "

The name and prefix of the Amazon S3 bucket where AWS Signer saves your signed objects.

", "refs": { "Destination$s3": "

The S3Destination object.

" } }, "S3SignedObject": { - "base": "

The S3 bucket name and key where code signing saved your signed code image.

", + "base": "

The Amazon S3 bucket name and key where Signer saved your signed code image.

", "refs": { "SignedObject$s3": "

The S3SignedObject.

" } }, "S3Source": { - "base": "

Information about the S3 bucket where you saved your unsigned code.

", + "base": "

Information about the Amazon S3 bucket where you saved your unsigned code.

", "refs": { "Source$s3": "

The S3Source object.

" } @@ -547,15 +547,15 @@ "SignedObject": { "base": "

Points to an S3SignedObject object that contains information about your signed code image.

", "refs": { - "DescribeSigningJobResponse$signedObject": "

Name of the S3 bucket where the signed code image is saved by code signing.

", + "DescribeSigningJobResponse$signedObject": "

Name of the S3 bucket where the signed code image is saved by AWS Signer.

", "SigningJob$signedObject": "

A SignedObject structure that contains information about a signing job's signed code image.

" } }, "SigningConfiguration": { - "base": "

The configuration of a code signing operation.

", + "base": "

The configuration of a signing operation.

", "refs": { "GetSigningPlatformResponse$signingConfiguration": "

A list of configurations applied to the target platform at signing.

", - "SigningPlatform$signingConfiguration": "

The configuration of a code signing platform. This includes the designated hash algorithm and encryption algorithm of a signing platform.

" + "SigningPlatform$signingConfiguration": "

The configuration of a signing platform. This includes the designated hash algorithm and encryption algorithm of a signing platform.

" } }, "SigningConfigurationOverrides": { @@ -565,7 +565,7 @@ } }, "SigningImageFormat": { - "base": "

The image format of a code signing platform or profile.

", + "base": "

The image format of an AWS Signer platform or profile.

", "refs": { "GetSigningPlatformResponse$signingImageFormat": "

The format of the target platform's signing image.

", "SigningPlatform$signingImageFormat": null @@ -617,17 +617,17 @@ "DescribeSigningJobResponse$signingParameters": "

Map of user-assigned key-value pairs used during signing. These values contain any information that you specified for use in your signing job.

", "GetSigningProfileResponse$signingParameters": "

A map of key-value pairs for signing operations that is attached to the target signing profile.

", "PutSigningProfileRequest$signingParameters": "

Map of key-value pairs for signing. These can include any information that you want to use during signing.

", - "SigningProfile$signingParameters": "

The parameters that are available for use by a code signing user.

" + "SigningProfile$signingParameters": "

The parameters that are available for use by a Signer user.

" } }, "SigningPlatform": { - "base": "

Contains information about the signing configurations and parameters that are used to perform a code signing job.

", + "base": "

Contains information about the signing configurations and parameters that are used to perform a code-signing job.

", "refs": { "SigningPlatforms$member": null } }, "SigningPlatformOverrides": { - "base": "

Any overrides that are applied to the signing configuration of a code signing platform.

", + "base": "

Any overrides that are applied to the signing configuration of a signing platform.

", "refs": { "DescribeSigningJobResponse$overrides": "

A list of any overrides that were applied to the signing operation.

", "GetSigningProfileResponse$overrides": "

A list of overrides applied by the target signing profile for signing operations.

", @@ -641,7 +641,7 @@ } }, "SigningProfile": { - "base": "

Contains information about the ACM certificates and code signing configuration parameters that can be used by a given code signing user.

", + "base": "

Contains information about the ACM certificates and signing configuration parameters that can be used by a given code signing user.

", "refs": { "SigningProfiles$member": null } @@ -656,7 +656,7 @@ "base": null, "refs": { "GetSigningProfileResponse$status": "

The status of the target signing profile.

", - "SigningProfile$status": "

The status of a code signing profile.

", + "SigningProfile$status": "

The status of a signing profile.

", "Statuses$member": null } }, @@ -734,13 +734,13 @@ "RemoveProfilePermissionRequest$statementId": "

A unique identifier for the cross-account permissions statement.

", "RemoveProfilePermissionResponse$revisionId": "

An identifier for the current revision of the profile permissions.

", "RevokedEntities$member": null, - "SignPayloadRequest$payloadFormat": "

Payload content type

", + "SignPayloadRequest$payloadFormat": "

Payload content type. The single valid type is application/vnd.cncf.notary.payload.v1+json.
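As a rough illustration of how this payload format is passed through the generated Go client, here is a minimal sketch. The profile name and payload file are placeholders, and the exported field names (ProfileName, Payload, PayloadFormat) are assumed to follow the SDK's usual generation convention for the SignPayload request shape:

```go
package main

import (
	"fmt"
	"os"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/signer"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := signer.New(sess)

	payload, err := os.ReadFile("payload.json") // placeholder payload file
	if err != nil {
		panic(err)
	}

	out, err := svc.SignPayload(&signer.SignPayloadInput{
		ProfileName:   aws.String("my-notation-profile"), // placeholder profile name
		Payload:       payload,
		PayloadFormat: aws.String("application/vnd.cncf.notary.payload.v1+json"),
	})
	if err != nil {
		panic(err)
	}
	// The returned metadata includes the signing profile ARN and the signing job ID.
	for k, v := range out.Metadata {
		fmt.Println(k, aws.StringValue(v))
	}
}
```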

", "SigningJobRevocationRecord$reason": "

A caller-supplied reason for revocation.

", "SigningJobRevocationRecord$revokedBy": "

The identity of the revoker.

", - "SigningPlatform$platformId": "

The ID of a code signing platform.

", - "SigningPlatform$displayName": "

The display name of a code signing platform.

", - "SigningPlatform$partner": "

Any partner entities linked to a code signing platform.

", - "SigningPlatform$target": "

The types of targets that can be signed by a code signing platform.

", + "SigningPlatform$platformId": "

The ID of a signing platform.

", + "SigningPlatform$displayName": "

The display name of a signing platform.

", + "SigningPlatform$partner": "

Any partner entities linked to a signing platform.

", + "SigningPlatform$target": "

The types of targets that can be signed by a signing platform.

", "SigningProfileRevocationRecord$revokedBy": "

The identity of the revoker.

", "TagResourceRequest$resourceArn": "

The Amazon Resource Name (ARN) for the signing profile.

", "UntagResourceRequest$resourceArn": "

The Amazon Resource Name (ARN) for the signing profile.

" diff --git a/models/apis/signer/2017-08-25/endpoint-rule-set-1.json b/models/apis/signer/2017-08-25/endpoint-rule-set-1.json index 0774b7ef747..8d1648942b6 100644 --- a/models/apis/signer/2017-08-25/endpoint-rule-set-1.json +++ b/models/apis/signer/2017-08-25/endpoint-rule-set-1.json @@ -40,7 +40,6 @@ ] } ], - "type": "tree", "rules": [ { "conditions": [ @@ -58,293 +57,258 @@ "type": "error" }, { - "conditions": [], - "type": "tree", - "rules": [ + "conditions": [ { - "conditions": [ + "fn": "booleanEquals", + "argv": [ { - "fn": "booleanEquals", - "argv": [ - { - "ref": "UseDualStack" - }, - true - ] - } - ], - "error": "Invalid Configuration: Dualstack and custom endpoint are not supported", - "type": "error" - }, - { - "conditions": [], - "endpoint": { - "url": { - "ref": "Endpoint" + "ref": "UseDualStack" }, - "properties": {}, - "headers": {} - }, - "type": "endpoint" + true + ] } - ] + ], + "error": "Invalid Configuration: Dualstack and custom endpoint are not supported", + "type": "error" + }, + { + "conditions": [], + "endpoint": { + "url": { + "ref": "Endpoint" + }, + "properties": {}, + "headers": {} + }, + "type": "endpoint" } - ] + ], + "type": "tree" }, { - "conditions": [], - "type": "tree", + "conditions": [ + { + "fn": "isSet", + "argv": [ + { + "ref": "Region" + } + ] + } + ], "rules": [ { "conditions": [ { - "fn": "isSet", + "fn": "aws.partition", "argv": [ { "ref": "Region" } - ] + ], + "assign": "PartitionResult" } ], - "type": "tree", "rules": [ { "conditions": [ { - "fn": "aws.partition", + "fn": "booleanEquals", "argv": [ { - "ref": "Region" - } - ], - "assign": "PartitionResult" + "ref": "UseFIPS" + }, + true + ] + }, + { + "fn": "booleanEquals", + "argv": [ + { + "ref": "UseDualStack" + }, + true + ] } ], - "type": "tree", "rules": [ { "conditions": [ { "fn": "booleanEquals", "argv": [ + true, { - "ref": "UseFIPS" - }, - true + "fn": "getAttr", + "argv": [ + { + "ref": "PartitionResult" + }, + "supportsFIPS" + ] + } ] }, { "fn": "booleanEquals", "argv": [ + true, { - "ref": "UseDualStack" - }, - true - ] - } - ], - "type": "tree", - "rules": [ - { - "conditions": [ - { - "fn": "booleanEquals", + "fn": "getAttr", "argv": [ - true, { - "fn": "getAttr", - "argv": [ - { - "ref": "PartitionResult" - }, - "supportsFIPS" - ] - } - ] - }, - { - "fn": "booleanEquals", - "argv": [ - true, - { - "fn": "getAttr", - "argv": [ - { - "ref": "PartitionResult" - }, - "supportsDualStack" - ] - } - ] - } - ], - "type": "tree", - "rules": [ - { - "conditions": [], - "type": "tree", - "rules": [ - { - "conditions": [], - "endpoint": { - "url": "https://signer-fips.{Region}.{PartitionResult#dualStackDnsSuffix}", - "properties": {}, - "headers": {} - }, - "type": "endpoint" - } + "ref": "PartitionResult" + }, + "supportsDualStack" ] } ] - }, + } + ], + "rules": [ { "conditions": [], - "error": "FIPS and DualStack are enabled, but this partition does not support one or both", - "type": "error" + "endpoint": { + "url": "https://signer-fips.{Region}.{PartitionResult#dualStackDnsSuffix}", + "properties": {}, + "headers": {} + }, + "type": "endpoint" } - ] + ], + "type": "tree" }, + { + "conditions": [], + "error": "FIPS and DualStack are enabled, but this partition does not support one or both", + "type": "error" + } + ], + "type": "tree" + }, + { + "conditions": [ + { + "fn": "booleanEquals", + "argv": [ + { + "ref": "UseFIPS" + }, + true + ] + } + ], + "rules": [ { "conditions": [ { "fn": "booleanEquals", "argv": [ { - "ref": "UseFIPS" + "fn": "getAttr", + "argv": [ + { + "ref": 
"PartitionResult" + }, + "supportsFIPS" + ] }, true ] } ], - "type": "tree", "rules": [ - { - "conditions": [ - { - "fn": "booleanEquals", - "argv": [ - true, - { - "fn": "getAttr", - "argv": [ - { - "ref": "PartitionResult" - }, - "supportsFIPS" - ] - } - ] - } - ], - "type": "tree", - "rules": [ - { - "conditions": [], - "type": "tree", - "rules": [ - { - "conditions": [], - "endpoint": { - "url": "https://signer-fips.{Region}.{PartitionResult#dnsSuffix}", - "properties": {}, - "headers": {} - }, - "type": "endpoint" - } - ] - } - ] - }, { "conditions": [], - "error": "FIPS is enabled but this partition does not support FIPS", - "type": "error" + "endpoint": { + "url": "https://signer-fips.{Region}.{PartitionResult#dnsSuffix}", + "properties": {}, + "headers": {} + }, + "type": "endpoint" } - ] + ], + "type": "tree" }, + { + "conditions": [], + "error": "FIPS is enabled but this partition does not support FIPS", + "type": "error" + } + ], + "type": "tree" + }, + { + "conditions": [ + { + "fn": "booleanEquals", + "argv": [ + { + "ref": "UseDualStack" + }, + true + ] + } + ], + "rules": [ { "conditions": [ { "fn": "booleanEquals", "argv": [ + true, { - "ref": "UseDualStack" - }, - true - ] - } - ], - "type": "tree", - "rules": [ - { - "conditions": [ - { - "fn": "booleanEquals", + "fn": "getAttr", "argv": [ - true, { - "fn": "getAttr", - "argv": [ - { - "ref": "PartitionResult" - }, - "supportsDualStack" - ] - } - ] - } - ], - "type": "tree", - "rules": [ - { - "conditions": [], - "type": "tree", - "rules": [ - { - "conditions": [], - "endpoint": { - "url": "https://signer.{Region}.{PartitionResult#dualStackDnsSuffix}", - "properties": {}, - "headers": {} - }, - "type": "endpoint" - } + "ref": "PartitionResult" + }, + "supportsDualStack" ] } ] - }, - { - "conditions": [], - "error": "DualStack is enabled but this partition does not support DualStack", - "type": "error" } - ] - }, - { - "conditions": [], - "type": "tree", + ], "rules": [ { "conditions": [], "endpoint": { - "url": "https://signer.{Region}.{PartitionResult#dnsSuffix}", + "url": "https://signer.{Region}.{PartitionResult#dualStackDnsSuffix}", "properties": {}, "headers": {} }, "type": "endpoint" } - ] + ], + "type": "tree" + }, + { + "conditions": [], + "error": "DualStack is enabled but this partition does not support DualStack", + "type": "error" } - ] + ], + "type": "tree" + }, + { + "conditions": [], + "endpoint": { + "url": "https://signer.{Region}.{PartitionResult#dnsSuffix}", + "properties": {}, + "headers": {} + }, + "type": "endpoint" } - ] - }, - { - "conditions": [], - "error": "Invalid Configuration: Missing Region", - "type": "error" + ], + "type": "tree" } - ] + ], + "type": "tree" + }, + { + "conditions": [], + "error": "Invalid Configuration: Missing Region", + "type": "error" } ] } \ No newline at end of file diff --git a/models/apis/states/2016-11-23/api-2.json b/models/apis/states/2016-11-23/api-2.json index d2a9a7fadbd..93b2cd933c2 100644 --- a/models/apis/states/2016-11-23/api-2.json +++ b/models/apis/states/2016-11-23/api-2.json @@ -347,6 +347,22 @@ ], "idempotent":true }, + "RedriveExecution":{ + "name":"RedriveExecution", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"RedriveExecutionInput"}, + "output":{"shape":"RedriveExecutionOutput"}, + "errors":[ + {"shape":"ExecutionDoesNotExist"}, + {"shape":"ExecutionNotRedrivable"}, + {"shape":"ExecutionLimitExceeded"}, + {"shape":"InvalidArn"} + ], + "idempotent":true + }, "SendTaskFailure":{ "name":"SendTaskFailure", "http":{ @@ 
-642,6 +658,12 @@ "min":1, "pattern":"^(?=.*[a-zA-Z_\\-\\.])[a-zA-Z0-9_\\-\\.]+$" }, + "ClientToken":{ + "type":"string", + "max":64, + "min":1, + "pattern":"[!-~]+" + }, "CloudWatchEventsExecutionDataDetails":{ "type":"structure", "members":{ @@ -845,7 +867,11 @@ "error":{"shape":"SensitiveError"}, "cause":{"shape":"SensitiveCause"}, "stateMachineVersionArn":{"shape":"Arn"}, - "stateMachineAliasArn":{"shape":"Arn"} + "stateMachineAliasArn":{"shape":"Arn"}, + "redriveCount":{"shape":"RedriveCount"}, + "redriveDate":{"shape":"Timestamp"}, + "redriveStatus":{"shape":"ExecutionRedriveStatus"}, + "redriveStatusReason":{"shape":"SensitiveData"} } }, "DescribeMapRunInput":{ @@ -878,7 +904,9 @@ "toleratedFailurePercentage":{"shape":"ToleratedFailurePercentage"}, "toleratedFailureCount":{"shape":"ToleratedFailureCount"}, "itemCounts":{"shape":"MapRunItemCounts"}, - "executionCounts":{"shape":"MapRunExecutionCounts"} + "executionCounts":{"shape":"MapRunExecutionCounts"}, + "redriveCount":{"shape":"RedriveCount"}, + "redriveDate":{"shape":"Timestamp"} } }, "DescribeStateMachineAliasInput":{ @@ -1024,7 +1052,40 @@ "box":true }, "stateMachineVersionArn":{"shape":"Arn"}, - "stateMachineAliasArn":{"shape":"Arn"} + "stateMachineAliasArn":{"shape":"Arn"}, + "redriveCount":{ + "shape":"RedriveCount", + "box":true + }, + "redriveDate":{"shape":"Timestamp"} + } + }, + "ExecutionNotRedrivable":{ + "type":"structure", + "members":{ + "message":{"shape":"ErrorMessage"} + }, + "exception":true + }, + "ExecutionRedriveFilter":{ + "type":"string", + "enum":[ + "REDRIVEN", + "NOT_REDRIVEN" + ] + }, + "ExecutionRedriveStatus":{ + "type":"string", + "enum":[ + "REDRIVABLE", + "NOT_REDRIVABLE", + "REDRIVABLE_BY_MAP_RUN" + ] + }, + "ExecutionRedrivenEventDetails":{ + "type":"structure", + "members":{ + "redriveCount":{"shape":"RedriveCount"} } }, "ExecutionStartedEventDetails":{ @@ -1044,7 +1105,8 @@ "SUCCEEDED", "FAILED", "TIMED_OUT", - "ABORTED" + "ABORTED", + "PENDING_REDRIVE" ] }, "ExecutionSucceededEventDetails":{ @@ -1126,6 +1188,7 @@ "executionSucceededEventDetails":{"shape":"ExecutionSucceededEventDetails"}, "executionAbortedEventDetails":{"shape":"ExecutionAbortedEventDetails"}, "executionTimedOutEventDetails":{"shape":"ExecutionTimedOutEventDetails"}, + "executionRedrivenEventDetails":{"shape":"ExecutionRedrivenEventDetails"}, "mapStateStartedEventDetails":{"shape":"MapStateStartedEventDetails"}, "mapIterationStartedEventDetails":{"shape":"MapIterationEventDetails"}, "mapIterationSucceededEventDetails":{"shape":"MapIterationEventDetails"}, @@ -1140,7 +1203,8 @@ "stateEnteredEventDetails":{"shape":"StateEnteredEventDetails"}, "stateExitedEventDetails":{"shape":"StateExitedEventDetails"}, "mapRunStartedEventDetails":{"shape":"MapRunStartedEventDetails"}, - "mapRunFailedEventDetails":{"shape":"MapRunFailedEventDetails"} + "mapRunFailedEventDetails":{"shape":"MapRunFailedEventDetails"}, + "mapRunRedrivenEventDetails":{"shape":"MapRunRedrivenEventDetails"} } }, "HistoryEventExecutionDataDetails":{ @@ -1214,7 +1278,9 @@ "MapRunAborted", "MapRunFailed", "MapRunStarted", - "MapRunSucceeded" + "MapRunSucceeded", + "ExecutionRedriven", + "MapRunRedriven" ] }, "Identity":{ @@ -1353,7 +1419,8 @@ "statusFilter":{"shape":"ExecutionStatus"}, "maxResults":{"shape":"PageSize"}, "nextToken":{"shape":"ListExecutionsPageToken"}, - "mapRunArn":{"shape":"LongArn"} + "mapRunArn":{"shape":"LongArn"}, + "redriveFilter":{"shape":"ExecutionRedriveFilter"} } }, "ListExecutionsOutput":{ @@ -1480,6 +1547,7 @@ "max":2000, "min":1 }, + 
"LongObject":{"type":"long"}, "MapIterationEventDetails":{ "type":"structure", "members":{ @@ -1507,7 +1575,9 @@ "timedOut":{"shape":"UnsignedLong"}, "aborted":{"shape":"UnsignedLong"}, "total":{"shape":"UnsignedLong"}, - "resultsWritten":{"shape":"UnsignedLong"} + "resultsWritten":{"shape":"UnsignedLong"}, + "failuresNotRedrivable":{"shape":"LongObject"}, + "pendingRedrive":{"shape":"LongObject"} } }, "MapRunFailedEventDetails":{ @@ -1537,7 +1607,9 @@ "timedOut":{"shape":"UnsignedLong"}, "aborted":{"shape":"UnsignedLong"}, "total":{"shape":"UnsignedLong"}, - "resultsWritten":{"shape":"UnsignedLong"} + "resultsWritten":{"shape":"UnsignedLong"}, + "failuresNotRedrivable":{"shape":"LongObject"}, + "pendingRedrive":{"shape":"LongObject"} } }, "MapRunLabel":{"type":"string"}, @@ -1561,6 +1633,13 @@ "stopDate":{"shape":"Timestamp"} } }, + "MapRunRedrivenEventDetails":{ + "type":"structure", + "members":{ + "mapRunArn":{"shape":"LongArn"}, + "redriveCount":{"shape":"RedriveCount"} + } + }, "MapRunStartedEventDetails":{ "type":"structure", "members":{ @@ -1629,6 +1708,28 @@ "stateMachineVersionArn":{"shape":"Arn"} } }, + "RedriveCount":{ + "type":"integer", + "box":true + }, + "RedriveExecutionInput":{ + "type":"structure", + "required":["executionArn"], + "members":{ + "executionArn":{"shape":"Arn"}, + "clientToken":{ + "shape":"ClientToken", + "idempotencyToken":true + } + } + }, + "RedriveExecutionOutput":{ + "type":"structure", + "required":["redriveDate"], + "members":{ + "redriveDate":{"shape":"Timestamp"} + } + }, "ResourceNotFound":{ "type":"structure", "members":{ diff --git a/models/apis/states/2016-11-23/docs-2.json b/models/apis/states/2016-11-23/docs-2.json index 3028592c156..a3eeed27a5f 100644 --- a/models/apis/states/2016-11-23/docs-2.json +++ b/models/apis/states/2016-11-23/docs-2.json @@ -1,33 +1,34 @@ { "version": "2.0", - "service": "Step Functions

Step Functions is a service that lets you coordinate the components of distributed applications and microservices using visual workflows.

You can use Step Functions to build applications from individual components, each of which performs a discrete function, or task, allowing you to scale and change applications quickly. Step Functions provides a console that helps visualize the components of your application as a series of steps. Step Functions automatically triggers and tracks each step, and retries steps when there are errors, so your application executes predictably and in the right order every time. Step Functions logs the state of each step, so you can quickly diagnose and debug any issues.

Step Functions manages operations and underlying infrastructure to ensure your application is available at any scale. You can run tasks on Amazon Web Services, your own servers, or any system that has access to Amazon Web Services. You can access and use Step Functions using the console, the Amazon Web Services SDKs, or an HTTP API. For more information about Step Functions, see the Step Functions Developer Guide .

", + "service": "Step Functions

Step Functions is a service that lets you coordinate the components of distributed applications and microservices using visual workflows.

You can use Step Functions to build applications from individual components, each of which performs a discrete function, or task, allowing you to scale and change applications quickly. Step Functions provides a console that helps visualize the components of your application as a series of steps. Step Functions automatically triggers and tracks each step, and retries steps when there are errors, so your application executes predictably and in the right order every time. Step Functions logs the state of each step, so you can quickly diagnose and debug any issues.

Step Functions manages operations and underlying infrastructure to ensure your application is available at any scale. You can run tasks on Amazon Web Services, your own servers, or any system that has access to Amazon Web Services. You can access and use Step Functions using the console, the Amazon Web Services SDKs, or an HTTP API. For more information about Step Functions, see the Step Functions Developer Guide .

If you use the Step Functions API actions using Amazon Web Services SDK integrations, make sure the API actions are in camel case and parameter names are in Pascal case. For example, you could use Step Functions API action startSyncExecution and specify its parameter as StateMachineArn.

", "operations": { "CreateActivity": "

Creates an activity. An activity is a task that you write in any programming language and host on any machine that has access to Step Functions. Activities must poll Step Functions using the GetActivityTask API action and respond using SendTask* API actions. This function lets Step Functions know the existence of your activity and returns an identifier for use in a state machine and when polling from the activity.

This operation is eventually consistent. The results are best effort and may not reflect very recent updates and changes.

CreateActivity is an idempotent API. Subsequent requests won’t create a duplicate resource if it was already created. CreateActivity's idempotency check is based on the activity name. If a following request has different tags values, Step Functions will ignore these differences and treat it as an idempotent request of the previous. In this case, tags will not be updated, even if they are different.

", "CreateStateMachine": "

Creates a state machine. A state machine consists of a collection of states that can do work (Task states), determine to which states to transition next (Choice states), stop an execution with an error (Fail states), and so on. State machines are specified using a JSON-based, structured language. For more information, see Amazon States Language in the Step Functions User Guide.

If you set the publish parameter of this API action to true, it publishes version 1 as the first revision of the state machine.

This operation is eventually consistent. The results are best effort and may not reflect very recent updates and changes.

CreateStateMachine is an idempotent API. Subsequent requests won’t create a duplicate resource if it was already created. CreateStateMachine's idempotency check is based on the state machine name, definition, type, LoggingConfiguration, and TracingConfiguration. The check is also based on the publish and versionDescription parameters. If a following request has a different roleArn or tags, Step Functions will ignore these differences and treat it as an idempotent request of the previous. In this case, roleArn and tags will not be updated, even if they are different.

", "CreateStateMachineAlias": "

Creates an alias for a state machine that points to one or two versions of the same state machine. You can set your application to call StartExecution with an alias and update the version the alias uses without changing the client's code.

You can also map an alias to split StartExecution requests between two versions of a state machine. To do this, add a second RoutingConfig object in the routingConfiguration parameter. You must also specify the percentage of execution run requests each version should receive in both RoutingConfig objects. Step Functions randomly chooses which version runs a given execution based on the percentage you specify.

To create an alias that points to a single version, specify a single RoutingConfig object with a weight set to 100.

You can create up to 100 aliases for each state machine. You must delete unused aliases using the DeleteStateMachineAlias API action.

CreateStateMachineAlias is an idempotent API. Step Functions bases the idempotency check on the stateMachineArn, description, name, and routingConfiguration parameters. Requests that contain the same values for these parameters return a successful idempotent response without creating a duplicate resource.

Related operations:

", "DeleteActivity": "

Deletes an activity.

", - "DeleteStateMachine": "

Deletes a state machine. This is an asynchronous operation: It sets the state machine's status to DELETING and begins the deletion process.

A qualified state machine ARN can either refer to a Distributed Map state defined within a state machine, a version ARN, or an alias ARN.

The following are some examples of qualified and unqualified state machine ARNs:

This API action also deletes all versions and aliases associated with a state machine.

For EXPRESS state machines, the deletion happens eventually (usually in less than a minute). Running executions may emit logs after DeleteStateMachine API is called.

", + "DeleteStateMachine": "

Deletes a state machine. This is an asynchronous operation. It sets the state machine's status to DELETING and begins the deletion process. A state machine is deleted only when all its executions are completed. On the next state transition, the state machine's executions are terminated.

A qualified state machine ARN can either refer to a Distributed Map state defined within a state machine, a version ARN, or an alias ARN.

The following are some examples of qualified and unqualified state machine ARNs:

This API action also deletes all versions and aliases associated with a state machine.

For EXPRESS state machines, the deletion happens eventually (usually in less than a minute). Running executions may emit logs after DeleteStateMachine API is called.

", "DeleteStateMachineAlias": "

Deletes a state machine alias.

After you delete a state machine alias, you can't use it to start executions. When you delete a state machine alias, Step Functions doesn't delete the state machine versions that alias references.

Related operations:

", "DeleteStateMachineVersion": "

Deletes a state machine version. After you delete a version, you can't call StartExecution using that version's ARN or use the version with a state machine alias.

Deleting a state machine version won't terminate its in-progress executions.

You can't delete a state machine version currently referenced by one or more aliases. Before you delete a version, you must either delete the aliases or update them to point to another state machine version.

Related operations:

", "DescribeActivity": "

Describes an activity.

This operation is eventually consistent. The results are best effort and may not reflect very recent updates and changes.

", - "DescribeExecution": "

Provides information about a state machine execution, such as the state machine associated with the execution, the execution input and output, and relevant execution metadata. Use this API action to return the Map Run Amazon Resource Name (ARN) if the execution was dispatched by a Map Run.

If you specify a version or alias ARN when you call the StartExecution API action, DescribeExecution returns that ARN.

This operation is eventually consistent. The results are best effort and may not reflect very recent updates and changes.

Executions of an EXPRESS state machinearen't supported by DescribeExecution unless a Map Run dispatched them.

", - "DescribeMapRun": "

Provides information about a Map Run's configuration, progress, and results. For more information, see Examining Map Run in the Step Functions Developer Guide.

", + "DescribeExecution": "

Provides information about a state machine execution, such as the state machine associated with the execution, the execution input and output, and relevant execution metadata. If you've redriven an execution, you can use this API action to return information about the redrives of that execution. In addition, you can use this API action to return the Map Run Amazon Resource Name (ARN) if the execution was dispatched by a Map Run.

If you specify a version or alias ARN when you call the StartExecution API action, DescribeExecution returns that ARN.

This operation is eventually consistent. The results are best effort and may not reflect very recent updates and changes.

Executions of an EXPRESS state machine aren't supported by DescribeExecution unless a Map Run dispatched them.
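A minimal aws-sdk-go sketch that reads the redrive fields returned by DescribeExecution; the execution ARN is a placeholder, and the Go field names are assumed to mirror the redriveStatus, redriveStatusReason, and redriveCount members added to DescribeExecutionOutput in this release:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/sfn"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := sfn.New(sess)

	out, err := svc.DescribeExecution(&sfn.DescribeExecutionInput{
		// Placeholder execution ARN.
		ExecutionArn: aws.String("arn:aws:states:us-east-1:123456789012:execution:MyStateMachine:my-execution"),
	})
	if err != nil {
		panic(err)
	}

	fmt.Println("status:", aws.StringValue(out.Status))
	fmt.Println("redriveStatus:", aws.StringValue(out.RedriveStatus))
	fmt.Println("redriveCount:", aws.Int64Value(out.RedriveCount))
	if aws.StringValue(out.RedriveStatus) == "NOT_REDRIVABLE" {
		fmt.Println("reason:", aws.StringValue(out.RedriveStatusReason))
	}
}
```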

", + "DescribeMapRun": "

Provides information about a Map Run's configuration, progress, and results. If you've redriven a Map Run, this API action also returns information about the redrives of that Map Run. For more information, see Examining Map Run in the Step Functions Developer Guide.

", "DescribeStateMachine": "

Provides information about a state machine's definition, its IAM role Amazon Resource Name (ARN), and configuration.

A qualified state machine ARN can either refer to a Distributed Map state defined within a state machine, a version ARN, or an alias ARN.

The following are some examples of qualified and unqualified state machine ARNs:

This API action returns the details for a state machine version if the stateMachineArn you specify is a state machine version ARN.

This operation is eventually consistent. The results are best effort and may not reflect very recent updates and changes.

", "DescribeStateMachineAlias": "

Returns details about a state machine alias.

Related operations:

", "DescribeStateMachineForExecution": "

Provides information about a state machine's definition, its execution role ARN, and configuration. If a Map Run dispatched the execution, this action returns the Map Run Amazon Resource Name (ARN) in the response. The state machine returned is the state machine associated with the Map Run.

This operation is eventually consistent. The results are best effort and may not reflect very recent updates and changes.

This API action is not supported by EXPRESS state machines.

", "GetActivityTask": "

Used by workers to retrieve a task (with the specified activity ARN) which has been scheduled for execution by a running state machine. This initiates a long poll, where the service holds the HTTP connection open and responds as soon as a task becomes available (i.e. an execution of a task of this type is needed.) The maximum time the service holds on to the request before responding is 60 seconds. If no task is available within 60 seconds, the poll returns a taskToken with a null string.

This API action isn't logged in CloudTrail.

Workers should set their client side socket timeout to at least 65 seconds (5 seconds higher than the maximum time the service may hold the poll request).

Polling with GetActivityTask can cause latency in some implementations. See Avoid Latency When Polling for Activity Tasks in the Step Functions Developer Guide.

", "GetExecutionHistory": "

Returns the history of the specified execution as a list of events. By default, the results are returned in ascending order of the timeStamp of the events. Use the reverseOrder parameter to get the latest events first.

If nextToken is returned, there are more results available. The value of nextToken is a unique pagination token for each page. Make the call again using the returned token to retrieve the next page. Keep all other arguments unchanged. Each pagination token expires after 24 hours. Using an expired pagination token will return an HTTP 400 InvalidToken error.

This API action is not supported by EXPRESS state machines.

", "ListActivities": "

Lists the existing activities.

If nextToken is returned, there are more results available. The value of nextToken is a unique pagination token for each page. Make the call again using the returned token to retrieve the next page. Keep all other arguments unchanged. Each pagination token expires after 24 hours. Using an expired pagination token will return an HTTP 400 InvalidToken error.

This operation is eventually consistent. The results are best effort and may not reflect very recent updates and changes.

", - "ListExecutions": "

Lists all executions of a state machine or a Map Run. You can list all executions related to a state machine by specifying a state machine Amazon Resource Name (ARN), or those related to a Map Run by specifying a Map Run ARN.

You can also provide a state machine alias ARN or version ARN to list the executions associated with a specific alias or version.

Results are sorted by time, with the most recent execution first.

If nextToken is returned, there are more results available. The value of nextToken is a unique pagination token for each page. Make the call again using the returned token to retrieve the next page. Keep all other arguments unchanged. Each pagination token expires after 24 hours. Using an expired pagination token will return an HTTP 400 InvalidToken error.

This operation is eventually consistent. The results are best effort and may not reflect very recent updates and changes.

This API action is not supported by EXPRESS state machines.

", + "ListExecutions": "

Lists all executions of a state machine or a Map Run. You can list all executions related to a state machine by specifying a state machine Amazon Resource Name (ARN), or those related to a Map Run by specifying a Map Run ARN. Using this API action, you can also list all redriven executions.

You can also provide a state machine alias ARN or version ARN to list the executions associated with a specific alias or version.

Results are sorted by time, with the most recent execution first.

If nextToken is returned, there are more results available. The value of nextToken is a unique pagination token for each page. Make the call again using the returned token to retrieve the next page. Keep all other arguments unchanged. Each pagination token expires after 24 hours. Using an expired pagination token will return an HTTP 400 InvalidToken error.

This operation is eventually consistent. The results are best effort and may not reflect very recent updates and changes.

This API action is not supported by EXPRESS state machines.

", "ListMapRuns": "

Lists all Map Runs that were started by a given state machine execution. Use this API action to obtain Map Run ARNs, and then call DescribeMapRun to obtain more information, if needed.

", "ListStateMachineAliases": "

Lists aliases for a specified state machine ARN. Results are sorted by time, with the most recently created aliases listed first.

To list aliases that reference a state machine version, you can specify the version ARN in the stateMachineArn parameter.

If nextToken is returned, there are more results available. The value of nextToken is a unique pagination token for each page. Make the call again using the returned token to retrieve the next page. Keep all other arguments unchanged. Each pagination token expires after 24 hours. Using an expired pagination token will return an HTTP 400 InvalidToken error.

Related operations:

", "ListStateMachineVersions": "

Lists versions for the specified state machine Amazon Resource Name (ARN).

The results are sorted in descending order of the version creation time.

If nextToken is returned, there are more results available. The value of nextToken is a unique pagination token for each page. Make the call again using the returned token to retrieve the next page. Keep all other arguments unchanged. Each pagination token expires after 24 hours. Using an expired pagination token will return an HTTP 400 InvalidToken error.

Related operations:

", "ListStateMachines": "

Lists the existing state machines.

If nextToken is returned, there are more results available. The value of nextToken is a unique pagination token for each page. Make the call again using the returned token to retrieve the next page. Keep all other arguments unchanged. Each pagination token expires after 24 hours. Using an expired pagination token will return an HTTP 400 InvalidToken error.

This operation is eventually consistent. The results are best effort and may not reflect very recent updates and changes.

", "ListTagsForResource": "

List tags for a given resource.

Tags may only contain Unicode letters, digits, white space, or these symbols: _ . : / = + - @.

", "PublishStateMachineVersion": "

Creates a version from the current revision of a state machine. Use versions to create immutable snapshots of your state machine. You can start executions from versions either directly or with an alias. To create an alias, use CreateStateMachineAlias.

You can publish up to 1000 versions for each state machine. You must manually delete unused versions using the DeleteStateMachineVersion API action.

PublishStateMachineVersion is an idempotent API. It doesn't create a duplicate state machine version if it already exists for the current revision. Step Functions bases PublishStateMachineVersion's idempotency check on the stateMachineArn, name, and revisionId parameters. Requests with the same parameters return a successful idempotent response. If you don't specify a revisionId, Step Functions checks for a previously published version of the state machine's current revision.

Related operations:

", - "SendTaskFailure": "

Used by activity workers and task states using the callback pattern to report that the task identified by the taskToken failed.

", - "SendTaskHeartbeat": "

Used by activity workers and task states using the callback pattern to report to Step Functions that the task represented by the specified taskToken is still making progress. This action resets the Heartbeat clock. The Heartbeat threshold is specified in the state machine's Amazon States Language definition (HeartbeatSeconds). This action does not in itself create an event in the execution history. However, if the task times out, the execution history contains an ActivityTimedOut entry for activities, or a TaskTimedOut entry for for tasks using the job run or callback pattern.

The Timeout of a task, defined in the state machine's Amazon States Language definition, is its maximum allowed duration, regardless of the number of SendTaskHeartbeat requests received. Use HeartbeatSeconds to configure the timeout interval for heartbeats.

", - "SendTaskSuccess": "

Used by activity workers and task states using the callback pattern to report that the task identified by the taskToken completed successfully.

", + "RedriveExecution": "

Restarts unsuccessful executions of Standard workflows that didn't complete successfully in the last 14 days. These include failed, aborted, or timed out executions. When you redrive an execution, it continues the failed execution from the unsuccessful step and uses the same input. Step Functions preserves the results and execution history of the successful steps, and doesn't rerun these steps when you redrive an execution. Redriven executions use the same state machine definition and execution ARN as the original execution attempt.

For workflows that include an Inline Map or Parallel state, RedriveExecution API action reschedules and redrives only the iterations and branches that failed or aborted.

To redrive a workflow that includes a Distributed Map state with failed child workflow executions, you must redrive the parent workflow. The parent workflow redrives all the unsuccessful states, including Distributed Map.

This API action is not supported by EXPRESS state machines.

However, you can restart the unsuccessful executions of Express child workflows in a Distributed Map by redriving its Map Run. When you redrive a Map Run, the Express child workflows are rerun using the StartExecution API action. For more information, see Redriving Map Runs.

You can redrive executions if your original execution meets the following conditions:
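A minimal aws-sdk-go sketch of the new RedriveExecution call; the execution ARN is a placeholder, and clientToken is left unset so the SDK's idempotency-token handling supplies one, as described later for RedriveExecutionInput$clientToken:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/sfn"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := sfn.New(sess)

	// ClientToken is omitted; the model marks it as an idempotency token,
	// so the SDK generates one automatically.
	out, err := svc.RedriveExecution(&sfn.RedriveExecutionInput{
		// Placeholder execution ARN.
		ExecutionArn: aws.String("arn:aws:states:us-east-1:123456789012:execution:MyStateMachine:my-execution"),
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("redriven at:", aws.TimeValue(out.RedriveDate))
}
```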

", + "SendTaskFailure": "

Used by activity workers, Task states using the callback pattern, and optionally Task states using the job run pattern to report that the task identified by the taskToken failed.

", + "SendTaskHeartbeat": "

Used by activity workers and Task states using the callback pattern, and optionally Task states using the job run pattern to report to Step Functions that the task represented by the specified taskToken is still making progress. This action resets the Heartbeat clock. The Heartbeat threshold is specified in the state machine's Amazon States Language definition (HeartbeatSeconds). This action does not in itself create an event in the execution history. However, if the task times out, the execution history contains an ActivityTimedOut entry for activities, or a TaskTimedOut entry for tasks using the job run or callback pattern.

The Timeout of a task, defined in the state machine's Amazon States Language definition, is its maximum allowed duration, regardless of the number of SendTaskHeartbeat requests received. Use HeartbeatSeconds to configure the timeout interval for heartbeats.

", + "SendTaskSuccess": "

Used by activity workers, Task states using the callback pattern, and optionally Task states using the job run pattern to report that the task identified by the taskToken completed successfully.

", "StartExecution": "

Starts a state machine execution.

A qualified state machine ARN can either refer to a Distributed Map state defined within a state machine, a version ARN, or an alias ARN.

The following are some examples of qualified and unqualified state machine ARNs:

If you start an execution with an unqualified state machine ARN, Step Functions uses the latest revision of the state machine for the execution.

To start executions of a state machine version, call StartExecution and provide the version ARN or the ARN of an alias that points to the version.

StartExecution is idempotent for STANDARD workflows. For a STANDARD workflow, if you call StartExecution with the same name and input as a running execution, the call succeeds and return the same response as the original request. If the execution is closed or if the input is different, it returns a 400 ExecutionAlreadyExists error. You can reuse names after 90 days.

StartExecution isn't idempotent for EXPRESS workflows.

", "StartSyncExecution": "

Starts a Synchronous Express state machine execution. StartSyncExecution is not available for STANDARD workflows.

StartSyncExecution will return a 200 OK response, even if your execution fails, because the status code in the API response doesn't reflect function errors. Error codes are reserved for errors that prevent your execution from running, such as permissions errors, limit errors, or issues with your state machine code and configuration.

This API action isn't logged in CloudTrail.

", "StopExecution": "

Stops an execution.

This API action is not supported by EXPRESS state machines.

", @@ -158,6 +159,7 @@ "MapRunListItem$stateMachineArn": "

The Amazon Resource Name (ARN) of the executed state machine.

", "PublishStateMachineVersionInput$stateMachineArn": "

The Amazon Resource Name (ARN) of the state machine.

", "PublishStateMachineVersionOutput$stateMachineVersionArn": "

The Amazon Resource Name (ARN) that identifies the state machine version.

", + "RedriveExecutionInput$executionArn": "

The Amazon Resource Name (ARN) of the execution to be redriven.

", "ResourceNotFound$resourceName": null, "RoutingConfigurationListItem$stateMachineVersionArn": "

The Amazon Resource Name (ARN) that identifies one or two state machine versions defined in the routing configuration.

If you specify the ARN of a second version, it must belong to the same state machine as the first version.

", "StartExecutionInput$stateMachineArn": "

The Amazon Resource Name (ARN) of the state machine to execute.

The stateMachineArn parameter accepts one of the following inputs:

", @@ -200,6 +202,12 @@ "CreateStateMachineAliasInput$name": "

The name of the state machine alias.

To avoid conflict with version ARNs, don't use an integer in the name of the alias.

" } }, + "ClientToken": { + "base": null, + "refs": { + "RedriveExecutionInput$clientToken": "

A unique, case-sensitive identifier that you provide to ensure the idempotency of the request. If you don’t specify a client token, the Amazon Web Services SDK automatically generates a client token and uses it for the request to ensure idempotency. The API uses one of the last 10 client tokens provided.

" + } + }, "CloudWatchEventsExecutionDataDetails": { "base": "

Provides details about execution input or output.

", "refs": { @@ -381,6 +389,7 @@ "ExecutionAlreadyExists$message": null, "ExecutionDoesNotExist$message": null, "ExecutionLimitExceeded$message": null, + "ExecutionNotRedrivable$message": null, "InvalidArn$message": null, "InvalidDefinition$message": null, "InvalidExecutionInput$message": null, @@ -449,6 +458,29 @@ "ExecutionList$member": null } }, + "ExecutionNotRedrivable": { + "base": "

The execution Amazon Resource Name (ARN) that you specified for executionArn cannot be redriven.

", + "refs": { + } + }, + "ExecutionRedriveFilter": { + "base": null, + "refs": { + "ListExecutionsInput$redriveFilter": "

Sets a filter to list executions based on whether or not they have been redriven.

For a Distributed Map, redriveFilter sets a filter to list child workflow executions based on whether or not they have been redriven.

If you do not provide a redriveFilter, Step Functions returns a list of both redriven and non-redriven executions.

If you provide a state machine ARN in redriveFilter, the API returns a validation exception.
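A minimal aws-sdk-go sketch that lists only redriven executions of a state machine using the new redriveFilter member; the state machine ARN is a placeholder, and REDRIVEN is one of the two enum values defined by ExecutionRedriveFilter:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/sfn"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := sfn.New(sess)

	input := &sfn.ListExecutionsInput{
		// Placeholder state machine ARN.
		StateMachineArn: aws.String("arn:aws:states:us-east-1:123456789012:stateMachine:MyStateMachine"),
		RedriveFilter:   aws.String("REDRIVEN"), // or "NOT_REDRIVEN"
	}
	err := svc.ListExecutionsPages(input, func(page *sfn.ListExecutionsOutput, lastPage bool) bool {
		for _, ex := range page.Executions {
			fmt.Println(aws.StringValue(ex.ExecutionArn), aws.TimeValue(ex.RedriveDate))
		}
		return true
	})
	if err != nil {
		panic(err)
	}
}
```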

" + } + }, + "ExecutionRedriveStatus": { + "base": null, + "refs": { + "DescribeExecutionOutput$redriveStatus": "

Indicates whether or not an execution can be redriven at a given point in time.

" + } + }, + "ExecutionRedrivenEventDetails": { + "base": "

Contains details about a redriven execution.

", + "refs": { + "HistoryEvent$executionRedrivenEventDetails": "

Contains details about the redrive attempt of an execution.

" + } + }, "ExecutionStartedEventDetails": { "base": "

Contains details about the start of the execution.

", "refs": { @@ -737,6 +769,7 @@ "ExecutionListItem$mapRunArn": "

The Amazon Resource Name (ARN) of a Map Run. This field is returned only if mapRunArn was specified in the ListExecutions API action. If stateMachineArn was specified in ListExecutions, the mapRunArn isn't returned.

", "ListExecutionsInput$mapRunArn": "

The Amazon Resource Name (ARN) of the Map Run that started the child workflow executions. If the mapRunArn field is specified, a list of all of the child workflow executions started by a Map Run is returned. For more information, see Examining Map Run in the Step Functions Developer Guide.

You can specify either a mapRunArn or a stateMachineArn, but not both.

", "MapRunListItem$mapRunArn": "

The Amazon Resource Name (ARN) of the Map Run.

", + "MapRunRedrivenEventDetails$mapRunArn": "

The Amazon Resource Name (ARN) of a Map Run that was redriven.

", "MapRunStartedEventDetails$mapRunArn": "

The Amazon Resource Name (ARN) of a Map Run that was started.

", "StateMachineAliasListItem$stateMachineAliasArn": "

The Amazon Resource Name (ARN) that identifies a state machine alias. The alias ARN is a combination of state machine ARN and the alias name separated by a colon (:). For example, stateMachineARN:PROD.

", "StateMachineVersionListItem$stateMachineVersionArn": "

The Amazon Resource Name (ARN) that identifies a state machine version. The version ARN is a combination of state machine ARN and the version number separated by a colon (:). For example, stateMachineARN:1.

", @@ -744,6 +777,15 @@ "UpdateMapRunInput$mapRunArn": "

The Amazon Resource Name (ARN) of a Map Run.

" } }, + "LongObject": { + "base": null, + "refs": { + "MapRunExecutionCounts$failuresNotRedrivable": "

The number of FAILED, ABORTED, or TIMED_OUT child workflow executions that cannot be redriven because their execution status is terminal. This can happen, for example, if your execution event history contains 25,000 entries, or if the toleratedFailureCount or toleratedFailurePercentage for the Distributed Map has been exceeded.

", + "MapRunExecutionCounts$pendingRedrive": "

The number of unsuccessful child workflow executions currently waiting to be redriven. The status of these child workflow executions could be FAILED, ABORTED, or TIMED_OUT in the original execution attempt or a previous redrive attempt.

", + "MapRunItemCounts$failuresNotRedrivable": "

The number of FAILED, ABORTED, or TIMED_OUT items in child workflow executions that cannot be redriven because the execution status of those child workflows is terminal. This can happen, for example, if your execution event history contains 25,000 entries, or if the toleratedFailureCount or toleratedFailurePercentage for the Distributed Map has been exceeded.

", + "MapRunItemCounts$pendingRedrive": "

The number of unsuccessful items in child workflow executions currently waiting to be redriven.
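A minimal aws-sdk-go sketch that reads these counts from DescribeMapRun; the Map Run ARN is a placeholder, and the Go field names (PendingRedrive, FailuresNotRedrivable, RedriveCount) are assumed to follow the SDK's generation convention for the members added above:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/sfn"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := sfn.New(sess)

	out, err := svc.DescribeMapRun(&sfn.DescribeMapRunInput{
		// Placeholder Map Run ARN.
		MapRunArn: aws.String("arn:aws:states:us-east-1:123456789012:mapRun:MyStateMachine/my-map-run:abc123"),
	})
	if err != nil {
		panic(err)
	}

	counts := out.ExecutionCounts
	fmt.Println("pending redrive:", aws.Int64Value(counts.PendingRedrive))
	fmt.Println("not redrivable:", aws.Int64Value(counts.FailuresNotRedrivable))
	fmt.Println("times redriven:", aws.Int64Value(out.RedriveCount))
}
```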

" + } + }, "MapIterationEventDetails": { "base": "

Contains details about an iteration of a Map state.

", "refs": { @@ -790,6 +832,12 @@ "MapRunList$member": null } }, + "MapRunRedrivenEventDetails": { + "base": "

Contains details about a Map Run that was redriven.

", + "refs": { + "HistoryEvent$mapRunRedrivenEventDetails": "

Contains details about the redrive attempt of a Map Run.

" + } + }, "MapRunStartedEventDetails": { "base": "

Contains details about a Map Run that was started during a state machine execution.

", "refs": { @@ -834,7 +882,7 @@ "ExecutionListItem$name": "

The name of the execution.

A name must not contain:

To enable logging with CloudWatch Logs, the name should only contain 0-9, A-Z, a-z, - and _.

", "GetActivityTaskInput$workerName": "

You can provide an arbitrary name in order to identify the worker that the task is assigned to. This name is used when it is logged in the execution history.

", "MapIterationEventDetails$name": "

The name of the iteration’s parent Map state.

", - "StartExecutionInput$name": "

Optional name of the execution. This name must be unique for your Amazon Web Services account, Region, and state machine for 90 days. For more information, see Limits Related to State Machine Executions in the Step Functions Developer Guide.

A name must not contain:

To enable logging with CloudWatch Logs, the name should only contain 0-9, A-Z, a-z, - and _.

", + "StartExecutionInput$name": "

Optional name of the execution. This name must be unique for your Amazon Web Services account, Region, and state machine for 90 days. For more information, see Limits Related to State Machine Executions in the Step Functions Developer Guide.

If you don't provide a name for the execution, Step Functions automatically generates a universally unique identifier (UUID) as the execution name.

A name must not contain:

To enable logging with CloudWatch Logs, the name should only contain 0-9, A-Z, a-z, - and _.

", "StartSyncExecutionInput$name": "

The name of the execution.

", "StartSyncExecutionOutput$name": "

The name of the execution.

", "StateEnteredEventDetails$name": "

The name of the state.

", @@ -905,6 +953,26 @@ "refs": { } }, + "RedriveCount": { + "base": null, + "refs": { + "DescribeExecutionOutput$redriveCount": "

The number of times you've redriven an execution. If you have not yet redriven an execution, the redriveCount is 0. This count is not updated for redrives that failed to start or are pending to be redriven.

", + "DescribeMapRunOutput$redriveCount": "

The number of times you've redriven a Map Run. If you have not yet redriven a Map Run, the redriveCount is 0. This count is not updated for redrives that failed to start or are pending to be redriven.

", + "ExecutionListItem$redriveCount": "

The number of times you've redriven an execution. If you have not yet redriven an execution, the redriveCount is 0. This count is not updated for redrives that failed to start or are pending to be redriven.

", + "ExecutionRedrivenEventDetails$redriveCount": "

The number of times you've redriven an execution. If you have not yet redriven an execution, the redriveCount is 0. This count is not updated for redrives that failed to start or are pending to be redriven.

", + "MapRunRedrivenEventDetails$redriveCount": "

The number of times the Map Run has been redriven at this point in the execution's history including this event. The redrive count for a redriven Map Run is always greater than 0.

" + } + }, + "RedriveExecutionInput": { + "base": null, + "refs": { + } + }, + "RedriveExecutionOutput": { + "base": null, + "refs": { + } + }, "ResourceNotFound": { "base": "

Could not find the referenced resource.

", "refs": { @@ -1000,6 +1068,7 @@ "ActivitySucceededEventDetails$output": "

The JSON data output by the activity task. Length constraints apply to the payload size, and are expressed as bytes in UTF-8 encoding.

", "DescribeExecutionOutput$input": "

The string that contains the JSON input data of the execution. Length constraints apply to the payload size, and are expressed as bytes in UTF-8 encoding.

", "DescribeExecutionOutput$output": "

The JSON output data of the execution. Length constraints apply to the payload size, and are expressed as bytes in UTF-8 encoding.

This field is set only if the execution succeeds. If the execution fails, this field is null.

", + "DescribeExecutionOutput$redriveStatusReason": "

When redriveStatus is NOT_REDRIVABLE, redriveStatusReason specifies the reason why an execution cannot be redriven.

", "ExecutionStartedEventDetails$input": "

The JSON data input to the execution. Length constraints apply to the payload size, and are expressed as bytes in UTF-8 encoding.

", "ExecutionSucceededEventDetails$output": "

The JSON data output by the execution. Length constraints apply to the payload size, and are expressed as bytes in UTF-8 encoding.

", "LambdaFunctionScheduledEventDetails$input": "

The JSON data input to the Lambda function. Length constraints apply to the payload size, and are expressed as bytes in UTF-8 encoding.

", @@ -1225,7 +1294,7 @@ } }, "TaskDoesNotExist": { - "base": null, + "base": "

The activity does not exist.

", "refs": { } }, @@ -1272,7 +1341,7 @@ } }, "TaskTimedOut": { - "base": null, + "base": "

The task token has either expired or the task associated with the token has already been closed.

", "refs": { } }, @@ -1311,18 +1380,22 @@ "DescribeActivityOutput$creationDate": "

The date the activity is created.

", "DescribeExecutionOutput$startDate": "

The date the execution is started.

", "DescribeExecutionOutput$stopDate": "

If the execution ended, the date the execution stopped.

", + "DescribeExecutionOutput$redriveDate": "

The date the execution was last redriven. If you have not yet redriven an execution, the redriveDate is null.

The redriveDate is unavailable if you redrive a Map Run that starts child workflow executions of type EXPRESS.

", "DescribeMapRunOutput$startDate": "

The date when the Map Run was started.

", "DescribeMapRunOutput$stopDate": "

The date when the Map Run was stopped.

", + "DescribeMapRunOutput$redriveDate": "

The date a Map Run was last redriven. If you have not yet redriven a Map Run, the redriveDate is null.

", "DescribeStateMachineAliasOutput$creationDate": "

The date the state machine alias was created.

", "DescribeStateMachineAliasOutput$updateDate": "

The date the state machine alias was last updated.

For a newly created state machine, this is the same as the creation date.

", "DescribeStateMachineForExecutionOutput$updateDate": "

The date and time the state machine associated with an execution was updated. For a newly created state machine, this is the creation date.

", "DescribeStateMachineOutput$creationDate": "

The date the state machine is created.

For a state machine version, creationDate is the date the version was created.

", "ExecutionListItem$startDate": "

The date the execution started.

", "ExecutionListItem$stopDate": "

If the execution already ended, the date the execution stopped.

", + "ExecutionListItem$redriveDate": "

The date the execution was last redriven.

", "HistoryEvent$timestamp": "

The date and time the event occurred.

", "MapRunListItem$startDate": "

The date on which the Map Run started.

", "MapRunListItem$stopDate": "

The date on which the Map Run stopped.

", "PublishStateMachineVersionOutput$creationDate": "

The date the version was created.

", + "RedriveExecutionOutput$redriveDate": "

The date the execution was last redriven.

", "StartExecutionOutput$startDate": "

The date the execution is started.

", "StartSyncExecutionOutput$startDate": "

The date the execution is started.

", "StartSyncExecutionOutput$stopDate": "

If the execution has already ended, the date the execution stopped.

", @@ -1463,7 +1536,7 @@ "VersionWeight": { "base": null, "refs": { - "RoutingConfigurationListItem$weight": "

The percentage of traffic you want to route to the second state machine version. The sum of the weights in the routing configuration must be equal to 100.

" + "RoutingConfigurationListItem$weight": "

The percentage of traffic you want to route to a state machine version. The sum of the weights in the routing configuration must be equal to 100.

" } }, "includedDetails": { diff --git a/models/apis/states/2016-11-23/endpoint-rule-set-1.json b/models/apis/states/2016-11-23/endpoint-rule-set-1.json index 4d52cffb427..ff130912f62 100644 --- a/models/apis/states/2016-11-23/endpoint-rule-set-1.json +++ b/models/apis/states/2016-11-23/endpoint-rule-set-1.json @@ -40,7 +40,6 @@ ] } ], - "type": "tree", "rules": [ { "conditions": [ @@ -58,312 +57,277 @@ "type": "error" }, { - "conditions": [], - "type": "tree", - "rules": [ + "conditions": [ { - "conditions": [ + "fn": "booleanEquals", + "argv": [ { - "fn": "booleanEquals", - "argv": [ - { - "ref": "UseDualStack" - }, - true - ] - } - ], - "error": "Invalid Configuration: Dualstack and custom endpoint are not supported", - "type": "error" - }, - { - "conditions": [], - "endpoint": { - "url": { - "ref": "Endpoint" + "ref": "UseDualStack" }, - "properties": {}, - "headers": {} - }, - "type": "endpoint" + true + ] } - ] + ], + "error": "Invalid Configuration: Dualstack and custom endpoint are not supported", + "type": "error" + }, + { + "conditions": [], + "endpoint": { + "url": { + "ref": "Endpoint" + }, + "properties": {}, + "headers": {} + }, + "type": "endpoint" } - ] + ], + "type": "tree" }, { - "conditions": [], - "type": "tree", + "conditions": [ + { + "fn": "isSet", + "argv": [ + { + "ref": "Region" + } + ] + } + ], "rules": [ { "conditions": [ { - "fn": "isSet", + "fn": "aws.partition", "argv": [ { "ref": "Region" } - ] + ], + "assign": "PartitionResult" } ], - "type": "tree", "rules": [ { "conditions": [ { - "fn": "aws.partition", + "fn": "booleanEquals", "argv": [ { - "ref": "Region" - } - ], - "assign": "PartitionResult" + "ref": "UseFIPS" + }, + true + ] + }, + { + "fn": "booleanEquals", + "argv": [ + { + "ref": "UseDualStack" + }, + true + ] } ], - "type": "tree", "rules": [ { "conditions": [ { "fn": "booleanEquals", "argv": [ + true, { - "ref": "UseFIPS" - }, - true + "fn": "getAttr", + "argv": [ + { + "ref": "PartitionResult" + }, + "supportsFIPS" + ] + } ] }, { "fn": "booleanEquals", "argv": [ + true, { - "ref": "UseDualStack" - }, - true - ] - } - ], - "type": "tree", - "rules": [ - { - "conditions": [ - { - "fn": "booleanEquals", + "fn": "getAttr", "argv": [ - true, { - "fn": "getAttr", - "argv": [ - { - "ref": "PartitionResult" - }, - "supportsFIPS" - ] - } - ] - }, - { - "fn": "booleanEquals", - "argv": [ - true, - { - "fn": "getAttr", - "argv": [ - { - "ref": "PartitionResult" - }, - "supportsDualStack" - ] - } - ] - } - ], - "type": "tree", - "rules": [ - { - "conditions": [], - "type": "tree", - "rules": [ - { - "conditions": [], - "endpoint": { - "url": "https://states-fips.{Region}.{PartitionResult#dualStackDnsSuffix}", - "properties": {}, - "headers": {} - }, - "type": "endpoint" - } + "ref": "PartitionResult" + }, + "supportsDualStack" ] } ] - }, + } + ], + "rules": [ { "conditions": [], - "error": "FIPS and DualStack are enabled, but this partition does not support one or both", - "type": "error" + "endpoint": { + "url": "https://states-fips.{Region}.{PartitionResult#dualStackDnsSuffix}", + "properties": {}, + "headers": {} + }, + "type": "endpoint" } - ] + ], + "type": "tree" }, + { + "conditions": [], + "error": "FIPS and DualStack are enabled, but this partition does not support one or both", + "type": "error" + } + ], + "type": "tree" + }, + { + "conditions": [ + { + "fn": "booleanEquals", + "argv": [ + { + "ref": "UseFIPS" + }, + true + ] + } + ], + "rules": [ { "conditions": [ { "fn": "booleanEquals", "argv": [ { - "ref": "UseFIPS" + "fn": "getAttr", + 
"argv": [ + { + "ref": "PartitionResult" + }, + "supportsFIPS" + ] }, true ] } ], - "type": "tree", "rules": [ { "conditions": [ { - "fn": "booleanEquals", + "fn": "stringEquals", "argv": [ - true, { - "fn": "getAttr", - "argv": [ - { - "ref": "PartitionResult" - }, - "supportsFIPS" - ] - } - ] - } - ], - "type": "tree", - "rules": [ - { - "conditions": [], - "type": "tree", - "rules": [ - { - "conditions": [ - { - "fn": "stringEquals", - "argv": [ - { - "ref": "Region" - }, - "us-gov-west-1" - ] - } - ], - "endpoint": { - "url": "https://states.us-gov-west-1.amazonaws.com", - "properties": {}, - "headers": {} - }, - "type": "endpoint" + "ref": "Region" }, - { - "conditions": [], - "endpoint": { - "url": "https://states-fips.{Region}.{PartitionResult#dnsSuffix}", - "properties": {}, - "headers": {} - }, - "type": "endpoint" - } + "us-gov-west-1" ] } - ] + ], + "endpoint": { + "url": "https://states.us-gov-west-1.amazonaws.com", + "properties": {}, + "headers": {} + }, + "type": "endpoint" }, { "conditions": [], - "error": "FIPS is enabled but this partition does not support FIPS", - "type": "error" + "endpoint": { + "url": "https://states-fips.{Region}.{PartitionResult#dnsSuffix}", + "properties": {}, + "headers": {} + }, + "type": "endpoint" } - ] + ], + "type": "tree" }, + { + "conditions": [], + "error": "FIPS is enabled but this partition does not support FIPS", + "type": "error" + } + ], + "type": "tree" + }, + { + "conditions": [ + { + "fn": "booleanEquals", + "argv": [ + { + "ref": "UseDualStack" + }, + true + ] + } + ], + "rules": [ { "conditions": [ { "fn": "booleanEquals", "argv": [ + true, { - "ref": "UseDualStack" - }, - true - ] - } - ], - "type": "tree", - "rules": [ - { - "conditions": [ - { - "fn": "booleanEquals", + "fn": "getAttr", "argv": [ - true, - { - "fn": "getAttr", - "argv": [ - { - "ref": "PartitionResult" - }, - "supportsDualStack" - ] - } - ] - } - ], - "type": "tree", - "rules": [ - { - "conditions": [], - "type": "tree", - "rules": [ { - "conditions": [], - "endpoint": { - "url": "https://states.{Region}.{PartitionResult#dualStackDnsSuffix}", - "properties": {}, - "headers": {} - }, - "type": "endpoint" - } + "ref": "PartitionResult" + }, + "supportsDualStack" ] } ] - }, - { - "conditions": [], - "error": "DualStack is enabled but this partition does not support DualStack", - "type": "error" } - ] - }, - { - "conditions": [], - "type": "tree", + ], "rules": [ { "conditions": [], "endpoint": { - "url": "https://states.{Region}.{PartitionResult#dnsSuffix}", + "url": "https://states.{Region}.{PartitionResult#dualStackDnsSuffix}", "properties": {}, "headers": {} }, "type": "endpoint" } - ] + ], + "type": "tree" + }, + { + "conditions": [], + "error": "DualStack is enabled but this partition does not support DualStack", + "type": "error" } - ] + ], + "type": "tree" + }, + { + "conditions": [], + "endpoint": { + "url": "https://states.{Region}.{PartitionResult#dnsSuffix}", + "properties": {}, + "headers": {} + }, + "type": "endpoint" } - ] - }, - { - "conditions": [], - "error": "Invalid Configuration: Missing Region", - "type": "error" + ], + "type": "tree" } - ] + ], + "type": "tree" + }, + { + "conditions": [], + "error": "Invalid Configuration: Missing Region", + "type": "error" } ] } \ No newline at end of file diff --git a/models/endpoints/endpoints.json b/models/endpoints/endpoints.json index 8b0a4bf6d3d..89937fd1ac2 100644 --- a/models/endpoints/endpoints.json +++ b/models/endpoints/endpoints.json @@ -3363,6 +3363,8 @@ "ap-south-2" : { }, "ap-southeast-1" : 
{ }, "ap-southeast-2" : { }, + "ap-southeast-3" : { }, + "ap-southeast-4" : { }, "ca-central-1" : { "variants" : [ { "hostname" : "codepipeline-fips.ca-central-1.amazonaws.com", @@ -3373,6 +3375,7 @@ "eu-central-2" : { }, "eu-north-1" : { }, "eu-south-1" : { }, + "eu-south-2" : { }, "eu-west-1" : { }, "eu-west-2" : { }, "eu-west-3" : { }, @@ -3411,6 +3414,7 @@ "deprecated" : true, "hostname" : "codepipeline-fips.us-west-2.amazonaws.com" }, + "il-central-1" : { }, "me-central-1" : { }, "me-south-1" : { }, "sa-east-1" : { }, @@ -3799,6 +3803,12 @@ }, "hostname" : "compute-optimizer.ap-south-1.amazonaws.com" }, + "ap-south-2" : { + "credentialScope" : { + "region" : "ap-south-2" + }, + "hostname" : "compute-optimizer.ap-south-2.amazonaws.com" + }, "ap-southeast-1" : { "credentialScope" : { "region" : "ap-southeast-1" @@ -3811,6 +3821,18 @@ }, "hostname" : "compute-optimizer.ap-southeast-2.amazonaws.com" }, + "ap-southeast-3" : { + "credentialScope" : { + "region" : "ap-southeast-3" + }, + "hostname" : "compute-optimizer.ap-southeast-3.amazonaws.com" + }, + "ap-southeast-4" : { + "credentialScope" : { + "region" : "ap-southeast-4" + }, + "hostname" : "compute-optimizer.ap-southeast-4.amazonaws.com" + }, "ca-central-1" : { "credentialScope" : { "region" : "ca-central-1" @@ -3823,6 +3845,12 @@ }, "hostname" : "compute-optimizer.eu-central-1.amazonaws.com" }, + "eu-central-2" : { + "credentialScope" : { + "region" : "eu-central-2" + }, + "hostname" : "compute-optimizer.eu-central-2.amazonaws.com" + }, "eu-north-1" : { "credentialScope" : { "region" : "eu-north-1" @@ -3835,6 +3863,12 @@ }, "hostname" : "compute-optimizer.eu-south-1.amazonaws.com" }, + "eu-south-2" : { + "credentialScope" : { + "region" : "eu-south-2" + }, + "hostname" : "compute-optimizer.eu-south-2.amazonaws.com" + }, "eu-west-1" : { "credentialScope" : { "region" : "eu-west-1" @@ -3853,6 +3887,18 @@ }, "hostname" : "compute-optimizer.eu-west-3.amazonaws.com" }, + "il-central-1" : { + "credentialScope" : { + "region" : "il-central-1" + }, + "hostname" : "compute-optimizer.il-central-1.amazonaws.com" + }, + "me-central-1" : { + "credentialScope" : { + "region" : "me-central-1" + }, + "hostname" : "compute-optimizer.me-central-1.amazonaws.com" + }, "me-south-1" : { "credentialScope" : { "region" : "me-south-1" @@ -20594,8 +20640,32 @@ }, "appconfigdata" : { "endpoints" : { - "us-gov-east-1" : { }, - "us-gov-west-1" : { } + "fips-us-gov-east-1" : { + "credentialScope" : { + "region" : "us-gov-east-1" + }, + "deprecated" : true, + "hostname" : "appconfigdata.us-gov-east-1.amazonaws.com" + }, + "fips-us-gov-west-1" : { + "credentialScope" : { + "region" : "us-gov-west-1" + }, + "deprecated" : true, + "hostname" : "appconfigdata.us-gov-west-1.amazonaws.com" + }, + "us-gov-east-1" : { + "variants" : [ { + "hostname" : "appconfigdata.us-gov-east-1.amazonaws.com", + "tags" : [ "fips" ] + } ] + }, + "us-gov-west-1" : { + "variants" : [ { + "hostname" : "appconfigdata.us-gov-west-1.amazonaws.com", + "tags" : [ "fips" ] + } ] + } } }, "application-autoscaling" : { diff --git a/service/backup/api.go b/service/backup/api.go index ae352641495..3578a55297f 100644 --- a/service/backup/api.go +++ b/service/backup/api.go @@ -3717,6 +3717,151 @@ func (c *Backup) GetSupportedResourceTypesWithContext(ctx aws.Context, input *Ge return out, req.Send() } +const opListBackupJobSummaries = "ListBackupJobSummaries" + +// ListBackupJobSummariesRequest generates a "aws/request.Request" representing the +// client's request for the ListBackupJobSummaries 
operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListBackupJobSummaries for more information on using the ListBackupJobSummaries +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// // Example sending a request using the ListBackupJobSummariesRequest method. +// req, resp := client.ListBackupJobSummariesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/backup-2018-11-15/ListBackupJobSummaries +func (c *Backup) ListBackupJobSummariesRequest(input *ListBackupJobSummariesInput) (req *request.Request, output *ListBackupJobSummariesOutput) { + op := &request.Operation{ + Name: opListBackupJobSummaries, + HTTPMethod: "GET", + HTTPPath: "/audit/backup-job-summaries", + Paginator: &request.Paginator{ + InputTokens: []string{"NextToken"}, + OutputTokens: []string{"NextToken"}, + LimitToken: "MaxResults", + TruncationToken: "", + }, + } + + if input == nil { + input = &ListBackupJobSummariesInput{} + } + + output = &ListBackupJobSummariesOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListBackupJobSummaries API operation for AWS Backup. +// +// This is a request for a summary of backup jobs created or running within +// the most recent 30 days. You can include parameters AccountID, State, ResourceType, +// MessageCategory, AggregationPeriod, MaxResults, or NextToken to filter results. +// +// This request returns a summary that contains Region, Account, State, ResourceType, +// MessageCategory, StartTime, EndTime, and Count of included jobs. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Backup's +// API operation ListBackupJobSummaries for usage and error information. +// +// Returned Error Types: +// +// - InvalidParameterValueException +// Indicates that something is wrong with a parameter's value. For example, +// the value is out of range. +// +// - ServiceUnavailableException +// The request failed due to a temporary failure of the server. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/backup-2018-11-15/ListBackupJobSummaries +func (c *Backup) ListBackupJobSummaries(input *ListBackupJobSummariesInput) (*ListBackupJobSummariesOutput, error) { + req, out := c.ListBackupJobSummariesRequest(input) + return out, req.Send() +} + +// ListBackupJobSummariesWithContext is the same as ListBackupJobSummaries with the addition of +// the ability to pass a context and additional request options. +// +// See ListBackupJobSummaries for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *Backup) ListBackupJobSummariesWithContext(ctx aws.Context, input *ListBackupJobSummariesInput, opts ...request.Option) (*ListBackupJobSummariesOutput, error) { + req, out := c.ListBackupJobSummariesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListBackupJobSummariesPages iterates over the pages of a ListBackupJobSummaries operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListBackupJobSummaries method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListBackupJobSummaries operation. +// pageNum := 0 +// err := client.ListBackupJobSummariesPages(params, +// func(page *backup.ListBackupJobSummariesOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +func (c *Backup) ListBackupJobSummariesPages(input *ListBackupJobSummariesInput, fn func(*ListBackupJobSummariesOutput, bool) bool) error { + return c.ListBackupJobSummariesPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListBackupJobSummariesPagesWithContext same as ListBackupJobSummariesPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Backup) ListBackupJobSummariesPagesWithContext(ctx aws.Context, input *ListBackupJobSummariesInput, fn func(*ListBackupJobSummariesOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListBackupJobSummariesInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListBackupJobSummariesRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + for p.Next() { + if !fn(p.Page().(*ListBackupJobSummariesOutput), !p.HasNextPage()) { + break + } + } + + return p.Err() +} + const opListBackupJobs = "ListBackupJobs" // ListBackupJobsRequest generates a "aws/request.Request" representing the @@ -4597,6 +4742,151 @@ func (c *Backup) ListBackupVaultsPagesWithContext(ctx aws.Context, input *ListBa return p.Err() } +const opListCopyJobSummaries = "ListCopyJobSummaries" + +// ListCopyJobSummariesRequest generates a "aws/request.Request" representing the +// client's request for the ListCopyJobSummaries operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListCopyJobSummaries for more information on using the ListCopyJobSummaries +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// // Example sending a request using the ListCopyJobSummariesRequest method. 
+// req, resp := client.ListCopyJobSummariesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/backup-2018-11-15/ListCopyJobSummaries +func (c *Backup) ListCopyJobSummariesRequest(input *ListCopyJobSummariesInput) (req *request.Request, output *ListCopyJobSummariesOutput) { + op := &request.Operation{ + Name: opListCopyJobSummaries, + HTTPMethod: "GET", + HTTPPath: "/audit/copy-job-summaries", + Paginator: &request.Paginator{ + InputTokens: []string{"NextToken"}, + OutputTokens: []string{"NextToken"}, + LimitToken: "MaxResults", + TruncationToken: "", + }, + } + + if input == nil { + input = &ListCopyJobSummariesInput{} + } + + output = &ListCopyJobSummariesOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListCopyJobSummaries API operation for AWS Backup. +// +// This request obtains a list of copy jobs created or running within the the +// most recent 30 days. You can include parameters AccountID, State, ResourceType, +// MessageCategory, AggregationPeriod, MaxResults, or NextToken to filter results. +// +// This request returns a summary that contains Region, Account, State, RestourceType, +// MessageCategory, StartTime, EndTime, and Count of included jobs. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Backup's +// API operation ListCopyJobSummaries for usage and error information. +// +// Returned Error Types: +// +// - InvalidParameterValueException +// Indicates that something is wrong with a parameter's value. For example, +// the value is out of range. +// +// - ServiceUnavailableException +// The request failed due to a temporary failure of the server. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/backup-2018-11-15/ListCopyJobSummaries +func (c *Backup) ListCopyJobSummaries(input *ListCopyJobSummariesInput) (*ListCopyJobSummariesOutput, error) { + req, out := c.ListCopyJobSummariesRequest(input) + return out, req.Send() +} + +// ListCopyJobSummariesWithContext is the same as ListCopyJobSummaries with the addition of +// the ability to pass a context and additional request options. +// +// See ListCopyJobSummaries for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Backup) ListCopyJobSummariesWithContext(ctx aws.Context, input *ListCopyJobSummariesInput, opts ...request.Option) (*ListCopyJobSummariesOutput, error) { + req, out := c.ListCopyJobSummariesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListCopyJobSummariesPages iterates over the pages of a ListCopyJobSummaries operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListCopyJobSummaries method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListCopyJobSummaries operation. 
+// pageNum := 0 +// err := client.ListCopyJobSummariesPages(params, +// func(page *backup.ListCopyJobSummariesOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +func (c *Backup) ListCopyJobSummariesPages(input *ListCopyJobSummariesInput, fn func(*ListCopyJobSummariesOutput, bool) bool) error { + return c.ListCopyJobSummariesPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListCopyJobSummariesPagesWithContext same as ListCopyJobSummariesPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Backup) ListCopyJobSummariesPagesWithContext(ctx aws.Context, input *ListCopyJobSummariesInput, fn func(*ListCopyJobSummariesOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListCopyJobSummariesInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListCopyJobSummariesRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + for p.Next() { + if !fn(p.Page().(*ListCopyJobSummariesOutput), !p.HasNextPage()) { + break + } + } + + return p.Err() +} + const opListCopyJobs = "ListCopyJobs" // ListCopyJobsRequest generates a "aws/request.Request" representing the @@ -6028,36 +6318,36 @@ func (c *Backup) ListReportPlansPagesWithContext(ctx aws.Context, input *ListRep return p.Err() } -const opListRestoreJobs = "ListRestoreJobs" +const opListRestoreJobSummaries = "ListRestoreJobSummaries" -// ListRestoreJobsRequest generates a "aws/request.Request" representing the -// client's request for the ListRestoreJobs operation. The "output" return +// ListRestoreJobSummariesRequest generates a "aws/request.Request" representing the +// client's request for the ListRestoreJobSummaries operation. The "output" return // value will be populated with the request's response once the request completes // successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See ListRestoreJobs for more information on using the ListRestoreJobs +// See ListRestoreJobSummaries for more information on using the ListRestoreJobSummaries // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // -// // Example sending a request using the ListRestoreJobsRequest method. -// req, resp := client.ListRestoreJobsRequest(params) +// // Example sending a request using the ListRestoreJobSummariesRequest method. 
+// req, resp := client.ListRestoreJobSummariesRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/backup-2018-11-15/ListRestoreJobs -func (c *Backup) ListRestoreJobsRequest(input *ListRestoreJobsInput) (req *request.Request, output *ListRestoreJobsOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/backup-2018-11-15/ListRestoreJobSummaries +func (c *Backup) ListRestoreJobSummariesRequest(input *ListRestoreJobSummariesInput) (req *request.Request, output *ListRestoreJobSummariesOutput) { op := &request.Operation{ - Name: opListRestoreJobs, + Name: opListRestoreJobSummaries, HTTPMethod: "GET", - HTTPPath: "/restore-jobs/", + HTTPPath: "/audit/restore-job-summaries", Paginator: &request.Paginator{ InputTokens: []string{"NextToken"}, OutputTokens: []string{"NextToken"}, @@ -6067,99 +6357,97 @@ func (c *Backup) ListRestoreJobsRequest(input *ListRestoreJobsInput) (req *reque } if input == nil { - input = &ListRestoreJobsInput{} + input = &ListRestoreJobSummariesInput{} } - output = &ListRestoreJobsOutput{} + output = &ListRestoreJobSummariesOutput{} req = c.newRequest(op, input, output) return } -// ListRestoreJobs API operation for AWS Backup. +// ListRestoreJobSummaries API operation for AWS Backup. // -// Returns a list of jobs that Backup initiated to restore a saved resource, -// including details about the recovery process. +// This request obtains a summary of restore jobs created or running within +// the the most recent 30 days. You can include parameters AccountID, State, +// ResourceType, AggregationPeriod, MaxResults, or NextToken to filter results. +// +// This request returns a summary that contains Region, Account, State, RestourceType, +// MessageCategory, StartTime, EndTime, and Count of included jobs. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS Backup's -// API operation ListRestoreJobs for usage and error information. +// API operation ListRestoreJobSummaries for usage and error information. // // Returned Error Types: // -// - ResourceNotFoundException -// A resource that is required for the action doesn't exist. -// // - InvalidParameterValueException // Indicates that something is wrong with a parameter's value. For example, // the value is out of range. // -// - MissingParameterValueException -// Indicates that a required parameter is missing. -// // - ServiceUnavailableException // The request failed due to a temporary failure of the server. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/backup-2018-11-15/ListRestoreJobs -func (c *Backup) ListRestoreJobs(input *ListRestoreJobsInput) (*ListRestoreJobsOutput, error) { - req, out := c.ListRestoreJobsRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/backup-2018-11-15/ListRestoreJobSummaries +func (c *Backup) ListRestoreJobSummaries(input *ListRestoreJobSummariesInput) (*ListRestoreJobSummariesOutput, error) { + req, out := c.ListRestoreJobSummariesRequest(input) return out, req.Send() } -// ListRestoreJobsWithContext is the same as ListRestoreJobs with the addition of +// ListRestoreJobSummariesWithContext is the same as ListRestoreJobSummaries with the addition of // the ability to pass a context and additional request options. 
// -// See ListRestoreJobs for details on how to use this API operation. +// See ListRestoreJobSummaries for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *Backup) ListRestoreJobsWithContext(ctx aws.Context, input *ListRestoreJobsInput, opts ...request.Option) (*ListRestoreJobsOutput, error) { - req, out := c.ListRestoreJobsRequest(input) +func (c *Backup) ListRestoreJobSummariesWithContext(ctx aws.Context, input *ListRestoreJobSummariesInput, opts ...request.Option) (*ListRestoreJobSummariesOutput, error) { + req, out := c.ListRestoreJobSummariesRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -// ListRestoreJobsPages iterates over the pages of a ListRestoreJobs operation, +// ListRestoreJobSummariesPages iterates over the pages of a ListRestoreJobSummaries operation, // calling the "fn" function with the response data for each page. To stop // iterating, return false from the fn function. // -// See ListRestoreJobs method for more information on how to use this operation. +// See ListRestoreJobSummaries method for more information on how to use this operation. // // Note: This operation can generate multiple requests to a service. // -// // Example iterating over at most 3 pages of a ListRestoreJobs operation. +// // Example iterating over at most 3 pages of a ListRestoreJobSummaries operation. // pageNum := 0 -// err := client.ListRestoreJobsPages(params, -// func(page *backup.ListRestoreJobsOutput, lastPage bool) bool { +// err := client.ListRestoreJobSummariesPages(params, +// func(page *backup.ListRestoreJobSummariesOutput, lastPage bool) bool { // pageNum++ // fmt.Println(page) // return pageNum <= 3 // }) -func (c *Backup) ListRestoreJobsPages(input *ListRestoreJobsInput, fn func(*ListRestoreJobsOutput, bool) bool) error { - return c.ListRestoreJobsPagesWithContext(aws.BackgroundContext(), input, fn) +func (c *Backup) ListRestoreJobSummariesPages(input *ListRestoreJobSummariesInput, fn func(*ListRestoreJobSummariesOutput, bool) bool) error { + return c.ListRestoreJobSummariesPagesWithContext(aws.BackgroundContext(), input, fn) } -// ListRestoreJobsPagesWithContext same as ListRestoreJobsPages except +// ListRestoreJobSummariesPagesWithContext same as ListRestoreJobSummariesPages except // it takes a Context and allows setting request options on the pages. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. 
-func (c *Backup) ListRestoreJobsPagesWithContext(ctx aws.Context, input *ListRestoreJobsInput, fn func(*ListRestoreJobsOutput, bool) bool, opts ...request.Option) error { +func (c *Backup) ListRestoreJobSummariesPagesWithContext(ctx aws.Context, input *ListRestoreJobSummariesInput, fn func(*ListRestoreJobSummariesOutput, bool) bool, opts ...request.Option) error { p := request.Pagination{ NewRequest: func() (*request.Request, error) { - var inCpy *ListRestoreJobsInput + var inCpy *ListRestoreJobSummariesInput if input != nil { tmp := *input inCpy = &tmp } - req, _ := c.ListRestoreJobsRequest(inCpy) + req, _ := c.ListRestoreJobSummariesRequest(inCpy) req.SetContext(ctx) req.ApplyOptions(opts...) return req, nil @@ -6167,7 +6455,154 @@ func (c *Backup) ListRestoreJobsPagesWithContext(ctx aws.Context, input *ListRes } for p.Next() { - if !fn(p.Page().(*ListRestoreJobsOutput), !p.HasNextPage()) { + if !fn(p.Page().(*ListRestoreJobSummariesOutput), !p.HasNextPage()) { + break + } + } + + return p.Err() +} + +const opListRestoreJobs = "ListRestoreJobs" + +// ListRestoreJobsRequest generates a "aws/request.Request" representing the +// client's request for the ListRestoreJobs operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListRestoreJobs for more information on using the ListRestoreJobs +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// // Example sending a request using the ListRestoreJobsRequest method. +// req, resp := client.ListRestoreJobsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/backup-2018-11-15/ListRestoreJobs +func (c *Backup) ListRestoreJobsRequest(input *ListRestoreJobsInput) (req *request.Request, output *ListRestoreJobsOutput) { + op := &request.Operation{ + Name: opListRestoreJobs, + HTTPMethod: "GET", + HTTPPath: "/restore-jobs/", + Paginator: &request.Paginator{ + InputTokens: []string{"NextToken"}, + OutputTokens: []string{"NextToken"}, + LimitToken: "MaxResults", + TruncationToken: "", + }, + } + + if input == nil { + input = &ListRestoreJobsInput{} + } + + output = &ListRestoreJobsOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListRestoreJobs API operation for AWS Backup. +// +// Returns a list of jobs that Backup initiated to restore a saved resource, +// including details about the recovery process. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Backup's +// API operation ListRestoreJobs for usage and error information. +// +// Returned Error Types: +// +// - ResourceNotFoundException +// A resource that is required for the action doesn't exist. +// +// - InvalidParameterValueException +// Indicates that something is wrong with a parameter's value. For example, +// the value is out of range. +// +// - MissingParameterValueException +// Indicates that a required parameter is missing. 
+// +// - ServiceUnavailableException +// The request failed due to a temporary failure of the server. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/backup-2018-11-15/ListRestoreJobs +func (c *Backup) ListRestoreJobs(input *ListRestoreJobsInput) (*ListRestoreJobsOutput, error) { + req, out := c.ListRestoreJobsRequest(input) + return out, req.Send() +} + +// ListRestoreJobsWithContext is the same as ListRestoreJobs with the addition of +// the ability to pass a context and additional request options. +// +// See ListRestoreJobs for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Backup) ListRestoreJobsWithContext(ctx aws.Context, input *ListRestoreJobsInput, opts ...request.Option) (*ListRestoreJobsOutput, error) { + req, out := c.ListRestoreJobsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListRestoreJobsPages iterates over the pages of a ListRestoreJobs operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListRestoreJobs method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListRestoreJobs operation. +// pageNum := 0 +// err := client.ListRestoreJobsPages(params, +// func(page *backup.ListRestoreJobsOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +func (c *Backup) ListRestoreJobsPages(input *ListRestoreJobsInput, fn func(*ListRestoreJobsOutput, bool) bool) error { + return c.ListRestoreJobsPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListRestoreJobsPagesWithContext same as ListRestoreJobsPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Backup) ListRestoreJobsPagesWithContext(ctx aws.Context, input *ListRestoreJobsInput, fn func(*ListRestoreJobsOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListRestoreJobsInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListRestoreJobsRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + for p.Next() { + if !fn(p.Page().(*ListRestoreJobsOutput), !p.HasNextPage()) { break } } @@ -7992,6 +8427,122 @@ func (s *AlreadyExistsException) RequestID() string { return s.RespMetadata.RequestID } +// This is a summary of jobs created or running within the most recent 30 days. +// +// The returned summary may contain the following: Region, Account, State, RestourceType, +// MessageCategory, StartTime, EndTime, and Count of included jobs. +type BackupJobSummary struct { + _ struct{} `type:"structure"` + + // The account ID that owns the jobs within the summary. + AccountId *string `type:"string"` + + // The value as a number of jobs in a job summary. 
+ Count *int64 `type:"integer"` + + // The value of time in number format of a job end time. + // + // This value is the time in Unix format, Coordinated Universal Time (UTC), + // and accurate to milliseconds. For example, the value 1516925490.087 represents + // Friday, January 26, 2018 12:11:30.087 AM. + EndTime *time.Time `type:"timestamp"` + + // This parameter is the job count for the specified message category. + // + // Example strings include AccessDenied, Success, and InvalidParameters. See + // Monitoring (https://docs.aws.amazon.com/aws-backup/latest/devguide/monitoring.html) + // for a list of MessageCategory strings. + // + // The the value ANY returns count of all message categories. + // + // AGGREGATE_ALL aggregates job counts for all message categories and returns + // the sum. + MessageCategory *string `type:"string"` + + // The Amazon Web Services Regions within the job summary. + Region *string `type:"string"` + + // This value is the job count for the specified resource type. The request + // GetSupportedResourceTypes returns strings for supported resource types. + ResourceType *string `type:"string"` + + // The value of time in number format of a job start time. + // + // This value is the time in Unix format, Coordinated Universal Time (UTC), + // and accurate to milliseconds. For example, the value 1516925490.087 represents + // Friday, January 26, 2018 12:11:30.087 AM. + StartTime *time.Time `type:"timestamp"` + + // This value is job count for jobs with the specified state. + State *string `type:"string" enum:"BackupJobStatus"` +} + +// String returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s BackupJobSummary) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s BackupJobSummary) GoString() string { + return s.String() +} + +// SetAccountId sets the AccountId field's value. +func (s *BackupJobSummary) SetAccountId(v string) *BackupJobSummary { + s.AccountId = &v + return s +} + +// SetCount sets the Count field's value. +func (s *BackupJobSummary) SetCount(v int64) *BackupJobSummary { + s.Count = &v + return s +} + +// SetEndTime sets the EndTime field's value. +func (s *BackupJobSummary) SetEndTime(v time.Time) *BackupJobSummary { + s.EndTime = &v + return s +} + +// SetMessageCategory sets the MessageCategory field's value. +func (s *BackupJobSummary) SetMessageCategory(v string) *BackupJobSummary { + s.MessageCategory = &v + return s +} + +// SetRegion sets the Region field's value. +func (s *BackupJobSummary) SetRegion(v string) *BackupJobSummary { + s.Region = &v + return s +} + +// SetResourceType sets the ResourceType field's value. +func (s *BackupJobSummary) SetResourceType(v string) *BackupJobSummary { + s.ResourceType = &v + return s +} + +// SetStartTime sets the StartTime field's value. +func (s *BackupJobSummary) SetStartTime(v time.Time) *BackupJobSummary { + s.StartTime = &v + return s +} + +// SetState sets the State field's value. 
+func (s *BackupJobSummary) SetState(v string) *BackupJobSummary { + s.State = &v + return s +} + // Contains DeleteAt and MoveToColdStorageAt timestamps, which are used to specify // a lifecycle for a recovery point. // @@ -8648,6 +9199,18 @@ type CopyJob struct { // This is a boolean value indicating this is a parent (composite) copy job. IsParent *bool `type:"boolean"` + // This parameter is the job count for the specified message category. + // + // Example strings include AccessDenied, Success, and InvalidParameters. See + // Monitoring (https://docs.aws.amazon.com/aws-backup/latest/devguide/monitoring.html) + // for a list of MessageCategory strings. + // + // The the value ANY returns count of all message categories. + // + // AGGREGATE_ALL aggregates job counts for all message categories and returns + // the sum + MessageCategory *string `type:"string"` + // This is the number of child (nested) copy jobs. NumberOfChildJobs *int64 `type:"long"` @@ -8773,6 +9336,12 @@ func (s *CopyJob) SetIsParent(v bool) *CopyJob { return s } +// SetMessageCategory sets the MessageCategory field's value. +func (s *CopyJob) SetMessageCategory(v string) *CopyJob { + s.MessageCategory = &v + return s +} + // SetNumberOfChildJobs sets the NumberOfChildJobs field's value. func (s *CopyJob) SetNumberOfChildJobs(v int64) *CopyJob { s.NumberOfChildJobs = &v @@ -8827,31 +9396,55 @@ func (s *CopyJob) SetStatusMessage(v string) *CopyJob { return s } -type CreateBackupPlanInput struct { +// This is a summary of copy jobs created or running within the most recent +// 30 days. +// +// The returned summary may contain the following: Region, Account, State, RestourceType, +// MessageCategory, StartTime, EndTime, and Count of included jobs. +type CopyJobSummary struct { _ struct{} `type:"structure"` - // Specifies the body of a backup plan. Includes a BackupPlanName and one or - // more sets of Rules. + // The account ID that owns the jobs within the summary. + AccountId *string `type:"string"` + + // The value as a number of jobs in a job summary. + Count *int64 `type:"integer"` + + // The value of time in number format of a job end time. // - // BackupPlan is a required field - BackupPlan *PlanInput `type:"structure" required:"true"` + // This value is the time in Unix format, Coordinated Universal Time (UTC), + // and accurate to milliseconds. For example, the value 1516925490.087 represents + // Friday, January 26, 2018 12:11:30.087 AM. + EndTime *time.Time `type:"timestamp"` - // To help organize your resources, you can assign your own metadata to the - // resources that you create. Each tag is a key-value pair. The specified tags - // are assigned to all backups created with this plan. + // This parameter is the job count for the specified message category. // - // BackupPlanTags is a sensitive parameter and its value will be - // replaced with "sensitive" in string returned by CreateBackupPlanInput's - // String and GoString methods. - BackupPlanTags map[string]*string `type:"map" sensitive:"true"` + // Example strings include AccessDenied, Success, and InvalidParameters. See + // Monitoring (https://docs.aws.amazon.com/aws-backup/latest/devguide/monitoring.html) + // for a list of MessageCategory strings. + // + // The the value ANY returns count of all message categories. + // + // AGGREGATE_ALL aggregates job counts for all message categories and returns + // the sum. 
+ MessageCategory *string `type:"string"` - // Identifies the request and allows failed requests to be retried without the - // risk of running the operation twice. If the request includes a CreatorRequestId - // that matches an existing backup plan, that plan is returned. This parameter - // is optional. + // This is the Amazon Web Services Regions within the job summary. + Region *string `type:"string"` + + // This value is the job count for the specified resource type. The request + // GetSupportedResourceTypes returns strings for supported resource types + ResourceType *string `type:"string"` + + // The value of time in number format of a job start time. // - // If used, this parameter must contain 1 to 50 alphanumeric or '-_.' characters. - CreatorRequestId *string `type:"string"` + // This value is the time in Unix format, Coordinated Universal Time (UTC), + // and accurate to milliseconds. For example, the value 1516925490.087 represents + // Friday, January 26, 2018 12:11:30.087 AM. + StartTime *time.Time `type:"timestamp"` + + // This value is job count for jobs with the specified state. + State *string `type:"string" enum:"CopyJobStatus"` } // String returns the string representation. @@ -8859,7 +9452,7 @@ type CreateBackupPlanInput struct { // API parameter values that are decorated as "sensitive" in the API will not // be included in the string output. The member name will be present, but the // value will be replaced with "sensitive". -func (s CreateBackupPlanInput) String() string { +func (s CopyJobSummary) String() string { return awsutil.Prettify(s) } @@ -8868,13 +9461,106 @@ func (s CreateBackupPlanInput) String() string { // API parameter values that are decorated as "sensitive" in the API will not // be included in the string output. The member name will be present, but the // value will be replaced with "sensitive". -func (s CreateBackupPlanInput) GoString() string { +func (s CopyJobSummary) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *CreateBackupPlanInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CreateBackupPlanInput"} +// SetAccountId sets the AccountId field's value. +func (s *CopyJobSummary) SetAccountId(v string) *CopyJobSummary { + s.AccountId = &v + return s +} + +// SetCount sets the Count field's value. +func (s *CopyJobSummary) SetCount(v int64) *CopyJobSummary { + s.Count = &v + return s +} + +// SetEndTime sets the EndTime field's value. +func (s *CopyJobSummary) SetEndTime(v time.Time) *CopyJobSummary { + s.EndTime = &v + return s +} + +// SetMessageCategory sets the MessageCategory field's value. +func (s *CopyJobSummary) SetMessageCategory(v string) *CopyJobSummary { + s.MessageCategory = &v + return s +} + +// SetRegion sets the Region field's value. +func (s *CopyJobSummary) SetRegion(v string) *CopyJobSummary { + s.Region = &v + return s +} + +// SetResourceType sets the ResourceType field's value. +func (s *CopyJobSummary) SetResourceType(v string) *CopyJobSummary { + s.ResourceType = &v + return s +} + +// SetStartTime sets the StartTime field's value. +func (s *CopyJobSummary) SetStartTime(v time.Time) *CopyJobSummary { + s.StartTime = &v + return s +} + +// SetState sets the State field's value. +func (s *CopyJobSummary) SetState(v string) *CopyJobSummary { + s.State = &v + return s +} + +type CreateBackupPlanInput struct { + _ struct{} `type:"structure"` + + // Specifies the body of a backup plan. 
Includes a BackupPlanName and one or + // more sets of Rules. + // + // BackupPlan is a required field + BackupPlan *PlanInput `type:"structure" required:"true"` + + // To help organize your resources, you can assign your own metadata to the + // resources that you create. Each tag is a key-value pair. The specified tags + // are assigned to all backups created with this plan. + // + // BackupPlanTags is a sensitive parameter and its value will be + // replaced with "sensitive" in string returned by CreateBackupPlanInput's + // String and GoString methods. + BackupPlanTags map[string]*string `type:"map" sensitive:"true"` + + // Identifies the request and allows failed requests to be retried without the + // risk of running the operation twice. If the request includes a CreatorRequestId + // that matches an existing backup plan, that plan is returned. This parameter + // is optional. + // + // If used, this parameter must contain 1 to 50 alphanumeric or '-_.' characters. + CreatorRequestId *string `type:"string"` +} + +// String returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s CreateBackupPlanInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s CreateBackupPlanInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateBackupPlanInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateBackupPlanInput"} if s.BackupPlan == nil { invalidParams.Add(request.NewErrParamRequired("BackupPlan")) } @@ -10954,6 +11640,13 @@ type DescribeBackupJobOutput struct { // job. IsParent *bool `type:"boolean"` + // This is the job count for the specified message category. + // + // Example strings may include AccessDenied, Success, and InvalidParameters. + // See Monitoring (https://docs.aws.amazon.com/aws-backup/latest/devguide/monitoring.html) + // for a list of MessageCategory strings. + MessageCategory *string `type:"string"` + // This returns the number of child (nested) backup jobs. NumberOfChildJobs *int64 `type:"long"` @@ -11104,6 +11797,12 @@ func (s *DescribeBackupJobOutput) SetIsParent(v bool) *DescribeBackupJobOutput { return s } +// SetMessageCategory sets the MessageCategory field's value. +func (s *DescribeBackupJobOutput) SetMessageCategory(v string) *DescribeBackupJobOutput { + s.MessageCategory = &v + return s +} + // SetNumberOfChildJobs sets the NumberOfChildJobs field's value. func (s *DescribeBackupJobOutput) SetNumberOfChildJobs(v int64) *DescribeBackupJobOutput { s.NumberOfChildJobs = &v @@ -14407,6 +15106,18 @@ type Job struct { // This is a boolean value indicating this is a parent (composite) backup job. IsParent *bool `type:"boolean"` + // This parameter is the job count for the specified message category. + // + // Example strings include AccessDenied, Success, and InvalidParameters. See + // Monitoring (https://docs.aws.amazon.com/aws-backup/latest/devguide/monitoring.html) + // for a list of MessageCategory strings. + // + // The the value ANY returns count of all message categories. 
+ // + // AGGREGATE_ALL aggregates job counts for all message categories and returns + // the sum. + MessageCategory *string `type:"string"` + // This uniquely identifies a request to Backup to back up a resource. The return // will be the parent (composite) job ID. ParentJobId *string `type:"string"` @@ -14550,6 +15261,12 @@ func (s *Job) SetIsParent(v bool) *Job { return s } +// SetMessageCategory sets the MessageCategory field's value. +func (s *Job) SetMessageCategory(v string) *Job { + s.MessageCategory = &v + return s +} + // SetParentJobId sets the ParentJobId field's value. func (s *Job) SetParentJobId(v string) *Job { s.ParentJobId = &v @@ -14824,6 +15541,208 @@ func (s *LimitExceededException) RequestID() string { return s.RespMetadata.RequestID } +type ListBackupJobSummariesInput struct { + _ struct{} `type:"structure" nopayload:"true"` + + // Returns the job count for the specified account. + // + // If the request is sent from a member account or an account not part of Amazon + // Web Services Organizations, jobs within requestor's account will be returned. + // + // Root, admin, and delegated administrator accounts can use the value ANY to + // return job counts from every account in the organization. + // + // AGGREGATE_ALL aggregates job counts from all accounts within the authenticated + // organization, then returns the sum. + AccountId *string `location:"querystring" locationName:"AccountId" type:"string"` + + // This is the period that sets the boundaries for returned results. + // + // Acceptable values include + // + // * ONE_DAY for daily job count for the prior 14 days. + // + // * SEVEN_DAYS for the aggregated job count for the prior 7 days. + // + // * FOURTEEN_DAYS for aggregated job count for prior 14 days. + AggregationPeriod *string `location:"querystring" locationName:"AggregationPeriod" type:"string" enum:"AggregationPeriod"` + + // This parameter sets the maximum number of items to be returned. + // + // The value is an integer. Range of accepted values is from 1 to 500. + MaxResults *int64 `location:"querystring" locationName:"MaxResults" min:"1" type:"integer"` + + // This parameter returns the job count for the specified message category. + // + // Example accepted strings include AccessDenied, Success, and InvalidParameters. + // See Monitoring (https://docs.aws.amazon.com/aws-backup/latest/devguide/monitoring.html) + // for a list of accepted MessageCategory strings. + // + // The the value ANY returns count of all message categories. + // + // AGGREGATE_ALL aggregates job counts for all message categories and returns + // the sum. + MessageCategory *string `location:"querystring" locationName:"MessageCategory" type:"string"` + + // The next item following a partial list of returned resources. For example, + // if a request is made to return maxResults number of resources, NextToken + // allows you to return more items in your list starting at the location pointed + // to by the next token. + NextToken *string `location:"querystring" locationName:"NextToken" type:"string"` + + // Returns the job count for the specified resource type. Use request GetSupportedResourceTypes + // to obtain strings for supported resource types. + // + // The the value ANY returns count of all resource types. + // + // AGGREGATE_ALL aggregates job counts for all resource types and returns the + // sum. 
+ // + // The type of Amazon Web Services resource to be backed up; for example, an + // Amazon Elastic Block Store (Amazon EBS) volume or an Amazon Relational Database + // Service (Amazon RDS) database. + ResourceType *string `location:"querystring" locationName:"ResourceType" type:"string"` + + // This parameter returns the job count for jobs with the specified state. + // + // The the value ANY returns count of all states. + // + // AGGREGATE_ALL aggregates job counts for all states and returns the sum. + State *string `location:"querystring" locationName:"State" type:"string" enum:"BackupJobStatus"` +} + +// String returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s ListBackupJobSummariesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s ListBackupJobSummariesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListBackupJobSummariesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListBackupJobSummariesInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAccountId sets the AccountId field's value. +func (s *ListBackupJobSummariesInput) SetAccountId(v string) *ListBackupJobSummariesInput { + s.AccountId = &v + return s +} + +// SetAggregationPeriod sets the AggregationPeriod field's value. +func (s *ListBackupJobSummariesInput) SetAggregationPeriod(v string) *ListBackupJobSummariesInput { + s.AggregationPeriod = &v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *ListBackupJobSummariesInput) SetMaxResults(v int64) *ListBackupJobSummariesInput { + s.MaxResults = &v + return s +} + +// SetMessageCategory sets the MessageCategory field's value. +func (s *ListBackupJobSummariesInput) SetMessageCategory(v string) *ListBackupJobSummariesInput { + s.MessageCategory = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListBackupJobSummariesInput) SetNextToken(v string) *ListBackupJobSummariesInput { + s.NextToken = &v + return s +} + +// SetResourceType sets the ResourceType field's value. +func (s *ListBackupJobSummariesInput) SetResourceType(v string) *ListBackupJobSummariesInput { + s.ResourceType = &v + return s +} + +// SetState sets the State field's value. +func (s *ListBackupJobSummariesInput) SetState(v string) *ListBackupJobSummariesInput { + s.State = &v + return s +} + +type ListBackupJobSummariesOutput struct { + _ struct{} `type:"structure"` + + // This is the period that sets the boundaries for returned results. + // + // * ONE_DAY for daily job count for the prior 14 days. + // + // * SEVEN_DAYS for the aggregated job count for the prior 7 days. + // + // * FOURTEEN_DAYS for aggregated job count for prior 14 days. 
+ AggregationPeriod *string `type:"string"` + + // This request returns a summary that contains Region, Account, State, ResourceType, + // MessageCategory, StartTime, EndTime, and Count of included jobs. + BackupJobSummaries []*BackupJobSummary `type:"list"` + + // The next item following a partial list of returned resources. For example, + // if a request is made to return maxResults number of resources, NextToken + // allows you to return more items in your list starting at the location pointed + // to by the next token. + NextToken *string `type:"string"` +} + +// String returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s ListBackupJobSummariesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s ListBackupJobSummariesOutput) GoString() string { + return s.String() +} + +// SetAggregationPeriod sets the AggregationPeriod field's value. +func (s *ListBackupJobSummariesOutput) SetAggregationPeriod(v string) *ListBackupJobSummariesOutput { + s.AggregationPeriod = &v + return s +} + +// SetBackupJobSummaries sets the BackupJobSummaries field's value. +func (s *ListBackupJobSummariesOutput) SetBackupJobSummaries(v []*BackupJobSummary) *ListBackupJobSummariesOutput { + s.BackupJobSummaries = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListBackupJobSummariesOutput) SetNextToken(v string) *ListBackupJobSummariesOutput { + s.NextToken = &v + return s +} + type ListBackupJobsInput struct { _ struct{} `type:"structure" nopayload:"true"` @@ -14854,6 +15773,13 @@ type ListBackupJobsInput struct { // Returns only backup jobs that were created before the specified date. ByCreatedBefore *time.Time `location:"querystring" locationName:"createdBefore" type:"timestamp"` + // This returns a list of backup jobs for the specified message category. + // + // Example strings may include AccessDenied, Success, and InvalidParameters. + // See Monitoring (https://docs.aws.amazon.com/aws-backup/latest/devguide/monitoring.html) + // for a list of MessageCategory strings. + ByMessageCategory *string `location:"querystring" locationName:"messageCategory" type:"string"` + // This is a filter to list child (nested) jobs based on parent job ID. ByParentJobId *string `location:"querystring" locationName:"parentJobId" type:"string"` @@ -14968,6 +15894,12 @@ func (s *ListBackupJobsInput) SetByCreatedBefore(v time.Time) *ListBackupJobsInp return s } +// SetByMessageCategory sets the ByMessageCategory field's value. +func (s *ListBackupJobsInput) SetByMessageCategory(v string) *ListBackupJobsInput { + s.ByMessageCategory = &v + return s +} + // SetByParentJobId sets the ByParentJobId field's value. func (s *ListBackupJobsInput) SetByParentJobId(v string) *ListBackupJobsInput { s.ByParentJobId = &v @@ -15598,14 +16530,214 @@ func (s ListBackupVaultsOutput) GoString() string { return s.String() } -// SetBackupVaultList sets the BackupVaultList field's value. 
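// Example (illustrative sketch, not part of the generated service code): counting
// backup jobs with the new ListBackupJobSummaries API. It assumes credentials and
// a Region are configured in the environment, and that BackupJobSummary exposes
// the documented summary fields (State, Count, and so on).
//
//	sess := session.Must(session.NewSession())
//	svc := backup.New(sess)
//
//	// Daily counts of failed backup jobs over the prior 14 days,
//	// aggregated across all message categories.
//	out, err := svc.ListBackupJobSummaries(&backup.ListBackupJobSummariesInput{
//		AggregationPeriod: aws.String(backup.AggregationPeriodOneDay),
//		State:             aws.String(backup.BackupJobStatusFailed),
//		MessageCategory:   aws.String("AGGREGATE_ALL"),
//	})
//	if err != nil {
//		return err
//	}
//	for _, summary := range out.BackupJobSummaries {
//		fmt.Println(aws.StringValue(summary.State), aws.Int64Value(summary.Count))
//	}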
-func (s *ListBackupVaultsOutput) SetBackupVaultList(v []*VaultListMember) *ListBackupVaultsOutput { - s.BackupVaultList = v +// SetBackupVaultList sets the BackupVaultList field's value. +func (s *ListBackupVaultsOutput) SetBackupVaultList(v []*VaultListMember) *ListBackupVaultsOutput { + s.BackupVaultList = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListBackupVaultsOutput) SetNextToken(v string) *ListBackupVaultsOutput { + s.NextToken = &v + return s +} + +type ListCopyJobSummariesInput struct { + _ struct{} `type:"structure" nopayload:"true"` + + // Returns the job count for the specified account. + // + // If the request is sent from a member account or an account not part of Amazon + // Web Services Organizations, jobs within requestor's account will be returned. + // + // Root, admin, and delegated administrator accounts can use the value ANY to + // return job counts from every account in the organization. + // + // AGGREGATE_ALL aggregates job counts from all accounts within the authenticated + // organization, then returns the sum. + AccountId *string `location:"querystring" locationName:"AccountId" type:"string"` + + // This is the period that sets the boundaries for returned results. + // + // * ONE_DAY for daily job count for the prior 14 days. + // + // * SEVEN_DAYS for the aggregated job count for the prior 7 days. + // + // * FOURTEEN_DAYS for aggregated job count for prior 14 days. + AggregationPeriod *string `location:"querystring" locationName:"AggregationPeriod" type:"string" enum:"AggregationPeriod"` + + // This parameter sets the maximum number of items to be returned. + // + // The value is an integer. Range of accepted values is from 1 to 500. + MaxResults *int64 `location:"querystring" locationName:"MaxResults" min:"1" type:"integer"` + + // This parameter returns the job count for the specified message category. + // + // Example accepted strings include AccessDenied, Success, and InvalidParameters. + // See Monitoring (https://docs.aws.amazon.com/aws-backup/latest/devguide/monitoring.html) + // for a list of accepted MessageCategory strings. + // + // The the value ANY returns count of all message categories. + // + // AGGREGATE_ALL aggregates job counts for all message categories and returns + // the sum. + MessageCategory *string `location:"querystring" locationName:"MessageCategory" type:"string"` + + // The next item following a partial list of returned resources. For example, + // if a request is made to return maxResults number of resources, NextToken + // allows you to return more items in your list starting at the location pointed + // to by the next token. + NextToken *string `location:"querystring" locationName:"NextToken" type:"string"` + + // Returns the job count for the specified resource type. Use request GetSupportedResourceTypes + // to obtain strings for supported resource types. + // + // The the value ANY returns count of all resource types. + // + // AGGREGATE_ALL aggregates job counts for all resource types and returns the + // sum. + // + // The type of Amazon Web Services resource to be backed up; for example, an + // Amazon Elastic Block Store (Amazon EBS) volume or an Amazon Relational Database + // Service (Amazon RDS) database. + ResourceType *string `location:"querystring" locationName:"ResourceType" type:"string"` + + // This parameter returns the job count for jobs with the specified state. + // + // The the value ANY returns count of all states. 
+ // + // AGGREGATE_ALL aggregates job counts for all states and returns the sum. + State *string `location:"querystring" locationName:"State" type:"string" enum:"CopyJobStatus"` +} + +// String returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s ListCopyJobSummariesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s ListCopyJobSummariesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListCopyJobSummariesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListCopyJobSummariesInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAccountId sets the AccountId field's value. +func (s *ListCopyJobSummariesInput) SetAccountId(v string) *ListCopyJobSummariesInput { + s.AccountId = &v + return s +} + +// SetAggregationPeriod sets the AggregationPeriod field's value. +func (s *ListCopyJobSummariesInput) SetAggregationPeriod(v string) *ListCopyJobSummariesInput { + s.AggregationPeriod = &v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *ListCopyJobSummariesInput) SetMaxResults(v int64) *ListCopyJobSummariesInput { + s.MaxResults = &v + return s +} + +// SetMessageCategory sets the MessageCategory field's value. +func (s *ListCopyJobSummariesInput) SetMessageCategory(v string) *ListCopyJobSummariesInput { + s.MessageCategory = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListCopyJobSummariesInput) SetNextToken(v string) *ListCopyJobSummariesInput { + s.NextToken = &v + return s +} + +// SetResourceType sets the ResourceType field's value. +func (s *ListCopyJobSummariesInput) SetResourceType(v string) *ListCopyJobSummariesInput { + s.ResourceType = &v + return s +} + +// SetState sets the State field's value. +func (s *ListCopyJobSummariesInput) SetState(v string) *ListCopyJobSummariesInput { + s.State = &v + return s +} + +type ListCopyJobSummariesOutput struct { + _ struct{} `type:"structure"` + + // This is the period that sets the boundaries for returned results. + // + // * ONE_DAY for daily job count for the prior 14 days. + // + // * SEVEN_DAYS for the aggregated job count for the prior 7 days. + // + // * FOURTEEN_DAYS for aggregated job count for prior 14 days. + AggregationPeriod *string `type:"string"` + + // This return shows a summary that contains Region, Account, State, ResourceType, + // MessageCategory, StartTime, EndTime, and Count of included jobs. + CopyJobSummaries []*CopyJobSummary `type:"list"` + + // The next item following a partial list of returned resources. For example, + // if a request is made to return maxResults number of resources, NextToken + // allows you to return more items in your list starting at the location pointed + // to by the next token. + NextToken *string `type:"string"` +} + +// String returns the string representation. 
+// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s ListCopyJobSummariesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s ListCopyJobSummariesOutput) GoString() string { + return s.String() +} + +// SetAggregationPeriod sets the AggregationPeriod field's value. +func (s *ListCopyJobSummariesOutput) SetAggregationPeriod(v string) *ListCopyJobSummariesOutput { + s.AggregationPeriod = &v + return s +} + +// SetCopyJobSummaries sets the CopyJobSummaries field's value. +func (s *ListCopyJobSummariesOutput) SetCopyJobSummaries(v []*CopyJobSummary) *ListCopyJobSummariesOutput { + s.CopyJobSummaries = v return s } // SetNextToken sets the NextToken field's value. -func (s *ListBackupVaultsOutput) SetNextToken(v string) *ListBackupVaultsOutput { +func (s *ListCopyJobSummariesOutput) SetNextToken(v string) *ListCopyJobSummariesOutput { s.NextToken = &v return s } @@ -15635,6 +16767,18 @@ type ListCopyJobsInput struct { // to copy from; for example, arn:aws:backup:us-east-1:123456789012:vault:aBackupVault. ByDestinationVaultArn *string `location:"querystring" locationName:"destinationVaultArn" type:"string"` + // This parameter returns the job count for the specified message category. + // + // Example accepted strings include AccessDenied, Success, and InvalidParameters. + // See Monitoring (https://docs.aws.amazon.com/aws-backup/latest/devguide/monitoring.html) + // for a list of accepted MessageCategory strings. + // + // The the value ANY returns count of all message categories. + // + // AGGREGATE_ALL aggregates job counts for all message categories and returns + // the sum. + ByMessageCategory *string `location:"querystring" locationName:"messageCategory" type:"string"` + // This is a filter to list child (nested) jobs based on parent job ID. ByParentJobId *string `location:"querystring" locationName:"parentJobId" type:"string"` @@ -15749,6 +16893,12 @@ func (s *ListCopyJobsInput) SetByDestinationVaultArn(v string) *ListCopyJobsInpu return s } +// SetByMessageCategory sets the ByMessageCategory field's value. +func (s *ListCopyJobsInput) SetByMessageCategory(v string) *ListCopyJobsInput { + s.ByMessageCategory = &v + return s +} + // SetByParentJobId sets the ByParentJobId field's value. func (s *ListCopyJobsInput) SetByParentJobId(v string) *ListCopyJobsInput { s.ByParentJobId = &v @@ -16917,6 +18067,190 @@ func (s *ListReportPlansOutput) SetReportPlans(v []*ReportPlan) *ListReportPlans return s } +type ListRestoreJobSummariesInput struct { + _ struct{} `type:"structure" nopayload:"true"` + + // Returns the job count for the specified account. + // + // If the request is sent from a member account or an account not part of Amazon + // Web Services Organizations, jobs within requestor's account will be returned. + // + // Root, admin, and delegated administrator accounts can use the value ANY to + // return job counts from every account in the organization. + // + // AGGREGATE_ALL aggregates job counts from all accounts within the authenticated + // organization, then returns the sum. 
+	AccountId *string `location:"querystring" locationName:"AccountId" type:"string"`
+
+	// This is the period that sets the boundaries for returned results.
+	//
+	// Acceptable values include
+	//
+	//    * ONE_DAY for daily job count for the prior 14 days.
+	//
+	//    * SEVEN_DAYS for the aggregated job count for the prior 7 days.
+	//
+	//    * FOURTEEN_DAYS for aggregated job count for prior 14 days.
+	AggregationPeriod *string `location:"querystring" locationName:"AggregationPeriod" type:"string" enum:"AggregationPeriod"`
+
+	// This parameter sets the maximum number of items to be returned.
+	//
+	// The value is an integer. Range of accepted values is from 1 to 500.
+	MaxResults *int64 `location:"querystring" locationName:"MaxResults" min:"1" type:"integer"`
+
+	// The next item following a partial list of returned resources. For example,
+	// if a request is made to return maxResults number of resources, NextToken
+	// allows you to return more items in your list starting at the location pointed
+	// to by the next token.
+	NextToken *string `location:"querystring" locationName:"NextToken" type:"string"`
+
+	// Returns the job count for the specified resource type. Use request GetSupportedResourceTypes
+	// to obtain strings for supported resource types.
+	//
+	// The value ANY returns the count of all resource types.
+	//
+	// AGGREGATE_ALL aggregates job counts for all resource types and returns the
+	// sum.
+	//
+	// The type of Amazon Web Services resource to be backed up; for example, an
+	// Amazon Elastic Block Store (Amazon EBS) volume or an Amazon Relational Database
+	// Service (Amazon RDS) database.
+	ResourceType *string `location:"querystring" locationName:"ResourceType" type:"string"`
+
+	// This parameter returns the job count for jobs with the specified state.
+	//
+	// The value ANY returns the count of all states.
+	//
+	// AGGREGATE_ALL aggregates job counts for all states and returns the sum.
+	State *string `location:"querystring" locationName:"State" type:"string" enum:"RestoreJobState"`
+}
+
+// String returns the string representation.
+//
+// API parameter values that are decorated as "sensitive" in the API will not
+// be included in the string output. The member name will be present, but the
+// value will be replaced with "sensitive".
+func (s ListRestoreJobSummariesInput) String() string {
+	return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation.
+//
+// API parameter values that are decorated as "sensitive" in the API will not
+// be included in the string output. The member name will be present, but the
+// value will be replaced with "sensitive".
+func (s ListRestoreJobSummariesInput) GoString() string {
+	return s.String()
+}
+
+// Validate inspects the fields of the type to determine if they are valid.
+func (s *ListRestoreJobSummariesInput) Validate() error {
+	invalidParams := request.ErrInvalidParams{Context: "ListRestoreJobSummariesInput"}
+	if s.MaxResults != nil && *s.MaxResults < 1 {
+		invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1))
+	}
+
+	if invalidParams.Len() > 0 {
+		return invalidParams
+	}
+	return nil
+}
+
+// SetAccountId sets the AccountId field's value.
+func (s *ListRestoreJobSummariesInput) SetAccountId(v string) *ListRestoreJobSummariesInput {
+	s.AccountId = &v
+	return s
+}
+
+// SetAggregationPeriod sets the AggregationPeriod field's value.
+func (s *ListRestoreJobSummariesInput) SetAggregationPeriod(v string) *ListRestoreJobSummariesInput { + s.AggregationPeriod = &v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *ListRestoreJobSummariesInput) SetMaxResults(v int64) *ListRestoreJobSummariesInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListRestoreJobSummariesInput) SetNextToken(v string) *ListRestoreJobSummariesInput { + s.NextToken = &v + return s +} + +// SetResourceType sets the ResourceType field's value. +func (s *ListRestoreJobSummariesInput) SetResourceType(v string) *ListRestoreJobSummariesInput { + s.ResourceType = &v + return s +} + +// SetState sets the State field's value. +func (s *ListRestoreJobSummariesInput) SetState(v string) *ListRestoreJobSummariesInput { + s.State = &v + return s +} + +type ListRestoreJobSummariesOutput struct { + _ struct{} `type:"structure"` + + // This is the period that sets the boundaries for returned results. + // + // * ONE_DAY for daily job count for the prior 14 days. + // + // * SEVEN_DAYS for the aggregated job count for the prior 7 days. + // + // * FOURTEEN_DAYS for aggregated job count for prior 14 days. + AggregationPeriod *string `type:"string"` + + // The next item following a partial list of returned resources. For example, + // if a request is made to return maxResults number of resources, NextToken + // allows you to return more items in your list starting at the location pointed + // to by the next token. + NextToken *string `type:"string"` + + // This return contains a summary that contains Region, Account, State, ResourceType, + // MessageCategory, StartTime, EndTime, and Count of included jobs. + RestoreJobSummaries []*RestoreJobSummary `type:"list"` +} + +// String returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s ListRestoreJobSummariesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s ListRestoreJobSummariesOutput) GoString() string { + return s.String() +} + +// SetAggregationPeriod sets the AggregationPeriod field's value. +func (s *ListRestoreJobSummariesOutput) SetAggregationPeriod(v string) *ListRestoreJobSummariesOutput { + s.AggregationPeriod = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListRestoreJobSummariesOutput) SetNextToken(v string) *ListRestoreJobSummariesOutput { + s.NextToken = &v + return s +} + +// SetRestoreJobSummaries sets the RestoreJobSummaries field's value. +func (s *ListRestoreJobSummariesOutput) SetRestoreJobSummaries(v []*RestoreJobSummary) *ListRestoreJobSummariesOutput { + s.RestoreJobSummaries = v + return s +} + type ListRestoreJobsInput struct { _ struct{} `type:"structure" nopayload:"true"` @@ -19099,6 +20433,105 @@ func (s *ResourceNotFoundException) RequestID() string { return s.RespMetadata.RequestID } +// This is a summary of restore jobs created or running within the most recent +// 30 days. 
+//
+// The returned summary may contain the following: Region, Account, State, ResourceType,
+// MessageCategory, StartTime, EndTime, and Count of included jobs.
+type RestoreJobSummary struct {
+	_ struct{} `type:"structure"`
+
+	// The account ID that owns the jobs within the summary.
+	AccountId *string `type:"string"`
+
+	// The number of jobs in the job summary.
+	Count *int64 `type:"integer"`
+
+	// The value of time in number format of a job end time.
+	//
+	// This value is the time in Unix format, Coordinated Universal Time (UTC),
+	// and accurate to milliseconds. For example, the value 1516925490.087 represents
+	// Friday, January 26, 2018 12:11:30.087 AM.
+	EndTime *time.Time `type:"timestamp"`
+
+	// The Amazon Web Services Region within the job summary.
+	Region *string `type:"string"`
+
+	// This value is the job count for the specified resource type. The request
+	// GetSupportedResourceTypes returns strings for supported resource types.
+	ResourceType *string `type:"string"`
+
+	// The value of time in number format of a job start time.
+	//
+	// This value is the time in Unix format, Coordinated Universal Time (UTC),
+	// and accurate to milliseconds. For example, the value 1516925490.087 represents
+	// Friday, January 26, 2018 12:11:30.087 AM.
+	StartTime *time.Time `type:"timestamp"`
+
+	// This value is the job count for jobs with the specified state.
+	State *string `type:"string" enum:"RestoreJobState"`
+}
+
+// String returns the string representation.
+//
+// API parameter values that are decorated as "sensitive" in the API will not
+// be included in the string output. The member name will be present, but the
+// value will be replaced with "sensitive".
+func (s RestoreJobSummary) String() string {
+	return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation.
+//
+// API parameter values that are decorated as "sensitive" in the API will not
+// be included in the string output. The member name will be present, but the
+// value will be replaced with "sensitive".
+func (s RestoreJobSummary) GoString() string {
+	return s.String()
+}
+
+// SetAccountId sets the AccountId field's value.
+func (s *RestoreJobSummary) SetAccountId(v string) *RestoreJobSummary {
+	s.AccountId = &v
+	return s
+}
+
+// SetCount sets the Count field's value.
+func (s *RestoreJobSummary) SetCount(v int64) *RestoreJobSummary {
+	s.Count = &v
+	return s
+}
+
+// SetEndTime sets the EndTime field's value.
+func (s *RestoreJobSummary) SetEndTime(v time.Time) *RestoreJobSummary {
+	s.EndTime = &v
+	return s
+}
+
+// SetRegion sets the Region field's value.
+func (s *RestoreJobSummary) SetRegion(v string) *RestoreJobSummary {
+	s.Region = &v
+	return s
+}
+
+// SetResourceType sets the ResourceType field's value.
+func (s *RestoreJobSummary) SetResourceType(v string) *RestoreJobSummary {
+	s.ResourceType = &v
+	return s
+}
+
+// SetStartTime sets the StartTime field's value.
+func (s *RestoreJobSummary) SetStartTime(v time.Time) *RestoreJobSummary {
+	s.StartTime = &v
+	return s
+}
+
+// SetState sets the State field's value.
+func (s *RestoreJobSummary) SetState(v string) *RestoreJobSummary {
+	s.State = &v
+	return s
+}
+
+
 // Contains metadata about a restore job.
type RestoreJobsListMember struct { _ struct{} `type:"structure"` @@ -21763,6 +23196,78 @@ func (s *VaultListMember) SetNumberOfRecoveryPoints(v int64) *VaultListMember { return s } +const ( + // AggregationPeriodOneDay is a AggregationPeriod enum value + AggregationPeriodOneDay = "ONE_DAY" + + // AggregationPeriodSevenDays is a AggregationPeriod enum value + AggregationPeriodSevenDays = "SEVEN_DAYS" + + // AggregationPeriodFourteenDays is a AggregationPeriod enum value + AggregationPeriodFourteenDays = "FOURTEEN_DAYS" +) + +// AggregationPeriod_Values returns all elements of the AggregationPeriod enum +func AggregationPeriod_Values() []string { + return []string{ + AggregationPeriodOneDay, + AggregationPeriodSevenDays, + AggregationPeriodFourteenDays, + } +} + +const ( + // BackupJobStatusCreated is a BackupJobStatus enum value + BackupJobStatusCreated = "CREATED" + + // BackupJobStatusPending is a BackupJobStatus enum value + BackupJobStatusPending = "PENDING" + + // BackupJobStatusRunning is a BackupJobStatus enum value + BackupJobStatusRunning = "RUNNING" + + // BackupJobStatusAborting is a BackupJobStatus enum value + BackupJobStatusAborting = "ABORTING" + + // BackupJobStatusAborted is a BackupJobStatus enum value + BackupJobStatusAborted = "ABORTED" + + // BackupJobStatusCompleted is a BackupJobStatus enum value + BackupJobStatusCompleted = "COMPLETED" + + // BackupJobStatusFailed is a BackupJobStatus enum value + BackupJobStatusFailed = "FAILED" + + // BackupJobStatusExpired is a BackupJobStatus enum value + BackupJobStatusExpired = "EXPIRED" + + // BackupJobStatusPartial is a BackupJobStatus enum value + BackupJobStatusPartial = "PARTIAL" + + // BackupJobStatusAggregateAll is a BackupJobStatus enum value + BackupJobStatusAggregateAll = "AGGREGATE_ALL" + + // BackupJobStatusAny is a BackupJobStatus enum value + BackupJobStatusAny = "ANY" +) + +// BackupJobStatus_Values returns all elements of the BackupJobStatus enum +func BackupJobStatus_Values() []string { + return []string{ + BackupJobStatusCreated, + BackupJobStatusPending, + BackupJobStatusRunning, + BackupJobStatusAborting, + BackupJobStatusAborted, + BackupJobStatusCompleted, + BackupJobStatusFailed, + BackupJobStatusExpired, + BackupJobStatusPartial, + BackupJobStatusAggregateAll, + BackupJobStatusAny, + } +} + const ( // ConditionTypeStringequals is a ConditionType enum value ConditionTypeStringequals = "STRINGEQUALS" @@ -21803,6 +23308,58 @@ func CopyJobState_Values() []string { } } +const ( + // CopyJobStatusCreated is a CopyJobStatus enum value + CopyJobStatusCreated = "CREATED" + + // CopyJobStatusRunning is a CopyJobStatus enum value + CopyJobStatusRunning = "RUNNING" + + // CopyJobStatusAborting is a CopyJobStatus enum value + CopyJobStatusAborting = "ABORTING" + + // CopyJobStatusAborted is a CopyJobStatus enum value + CopyJobStatusAborted = "ABORTED" + + // CopyJobStatusCompleting is a CopyJobStatus enum value + CopyJobStatusCompleting = "COMPLETING" + + // CopyJobStatusCompleted is a CopyJobStatus enum value + CopyJobStatusCompleted = "COMPLETED" + + // CopyJobStatusFailing is a CopyJobStatus enum value + CopyJobStatusFailing = "FAILING" + + // CopyJobStatusFailed is a CopyJobStatus enum value + CopyJobStatusFailed = "FAILED" + + // CopyJobStatusPartial is a CopyJobStatus enum value + CopyJobStatusPartial = "PARTIAL" + + // CopyJobStatusAggregateAll is a CopyJobStatus enum value + CopyJobStatusAggregateAll = "AGGREGATE_ALL" + + // CopyJobStatusAny is a CopyJobStatus enum value + CopyJobStatusAny = "ANY" +) + +// 
CopyJobStatus_Values returns all elements of the CopyJobStatus enum +func CopyJobStatus_Values() []string { + return []string{ + CopyJobStatusCreated, + CopyJobStatusRunning, + CopyJobStatusAborting, + CopyJobStatusAborted, + CopyJobStatusCompleting, + CopyJobStatusCompleted, + CopyJobStatusFailing, + CopyJobStatusFailed, + CopyJobStatusPartial, + CopyJobStatusAggregateAll, + CopyJobStatusAny, + } +} + const ( // JobStateCreated is a JobState enum value JobStateCreated = "CREATED" @@ -21895,6 +23452,46 @@ func RecoveryPointStatus_Values() []string { } } +const ( + // RestoreJobStateCreated is a RestoreJobState enum value + RestoreJobStateCreated = "CREATED" + + // RestoreJobStatePending is a RestoreJobState enum value + RestoreJobStatePending = "PENDING" + + // RestoreJobStateRunning is a RestoreJobState enum value + RestoreJobStateRunning = "RUNNING" + + // RestoreJobStateAborted is a RestoreJobState enum value + RestoreJobStateAborted = "ABORTED" + + // RestoreJobStateCompleted is a RestoreJobState enum value + RestoreJobStateCompleted = "COMPLETED" + + // RestoreJobStateFailed is a RestoreJobState enum value + RestoreJobStateFailed = "FAILED" + + // RestoreJobStateAggregateAll is a RestoreJobState enum value + RestoreJobStateAggregateAll = "AGGREGATE_ALL" + + // RestoreJobStateAny is a RestoreJobState enum value + RestoreJobStateAny = "ANY" +) + +// RestoreJobState_Values returns all elements of the RestoreJobState enum +func RestoreJobState_Values() []string { + return []string{ + RestoreJobStateCreated, + RestoreJobStatePending, + RestoreJobStateRunning, + RestoreJobStateAborted, + RestoreJobStateCompleted, + RestoreJobStateFailed, + RestoreJobStateAggregateAll, + RestoreJobStateAny, + } +} + const ( // RestoreJobStatusPending is a RestoreJobStatus enum value RestoreJobStatusPending = "PENDING" diff --git a/service/backup/backupiface/interface.go b/service/backup/backupiface/interface.go index 4acbf3c56c4..764a6a3796f 100644 --- a/service/backup/backupiface/interface.go +++ b/service/backup/backupiface/interface.go @@ -220,6 +220,13 @@ type BackupAPI interface { GetSupportedResourceTypesWithContext(aws.Context, *backup.GetSupportedResourceTypesInput, ...request.Option) (*backup.GetSupportedResourceTypesOutput, error) GetSupportedResourceTypesRequest(*backup.GetSupportedResourceTypesInput) (*request.Request, *backup.GetSupportedResourceTypesOutput) + ListBackupJobSummaries(*backup.ListBackupJobSummariesInput) (*backup.ListBackupJobSummariesOutput, error) + ListBackupJobSummariesWithContext(aws.Context, *backup.ListBackupJobSummariesInput, ...request.Option) (*backup.ListBackupJobSummariesOutput, error) + ListBackupJobSummariesRequest(*backup.ListBackupJobSummariesInput) (*request.Request, *backup.ListBackupJobSummariesOutput) + + ListBackupJobSummariesPages(*backup.ListBackupJobSummariesInput, func(*backup.ListBackupJobSummariesOutput, bool) bool) error + ListBackupJobSummariesPagesWithContext(aws.Context, *backup.ListBackupJobSummariesInput, func(*backup.ListBackupJobSummariesOutput, bool) bool, ...request.Option) error + ListBackupJobs(*backup.ListBackupJobsInput) (*backup.ListBackupJobsOutput, error) ListBackupJobsWithContext(aws.Context, *backup.ListBackupJobsInput, ...request.Option) (*backup.ListBackupJobsOutput, error) ListBackupJobsRequest(*backup.ListBackupJobsInput) (*request.Request, *backup.ListBackupJobsOutput) @@ -262,6 +269,13 @@ type BackupAPI interface { ListBackupVaultsPages(*backup.ListBackupVaultsInput, func(*backup.ListBackupVaultsOutput, bool) bool) error 
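// Example (illustrative sketch): summing restore job counts with the new
// ListRestoreJobSummariesPages paginator and the AggregationPeriod and
// RestoreJobState enum values above. Assumes a configured *backup.Backup client
// (svc), as in the earlier sketch.
//
//	var completed int64
//	err := svc.ListRestoreJobSummariesPages(&backup.ListRestoreJobSummariesInput{
//		AggregationPeriod: aws.String(backup.AggregationPeriodFourteenDays),
//		State:             aws.String(backup.RestoreJobStateCompleted),
//	}, func(page *backup.ListRestoreJobSummariesOutput, lastPage bool) bool {
//		for _, summary := range page.RestoreJobSummaries {
//			completed += aws.Int64Value(summary.Count)
//		}
//		return true // keep paging until NextToken is exhausted
//	})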
ListBackupVaultsPagesWithContext(aws.Context, *backup.ListBackupVaultsInput, func(*backup.ListBackupVaultsOutput, bool) bool, ...request.Option) error + ListCopyJobSummaries(*backup.ListCopyJobSummariesInput) (*backup.ListCopyJobSummariesOutput, error) + ListCopyJobSummariesWithContext(aws.Context, *backup.ListCopyJobSummariesInput, ...request.Option) (*backup.ListCopyJobSummariesOutput, error) + ListCopyJobSummariesRequest(*backup.ListCopyJobSummariesInput) (*request.Request, *backup.ListCopyJobSummariesOutput) + + ListCopyJobSummariesPages(*backup.ListCopyJobSummariesInput, func(*backup.ListCopyJobSummariesOutput, bool) bool) error + ListCopyJobSummariesPagesWithContext(aws.Context, *backup.ListCopyJobSummariesInput, func(*backup.ListCopyJobSummariesOutput, bool) bool, ...request.Option) error + ListCopyJobs(*backup.ListCopyJobsInput) (*backup.ListCopyJobsOutput, error) ListCopyJobsWithContext(aws.Context, *backup.ListCopyJobsInput, ...request.Option) (*backup.ListCopyJobsOutput, error) ListCopyJobsRequest(*backup.ListCopyJobsInput) (*request.Request, *backup.ListCopyJobsOutput) @@ -332,6 +346,13 @@ type BackupAPI interface { ListReportPlansPages(*backup.ListReportPlansInput, func(*backup.ListReportPlansOutput, bool) bool) error ListReportPlansPagesWithContext(aws.Context, *backup.ListReportPlansInput, func(*backup.ListReportPlansOutput, bool) bool, ...request.Option) error + ListRestoreJobSummaries(*backup.ListRestoreJobSummariesInput) (*backup.ListRestoreJobSummariesOutput, error) + ListRestoreJobSummariesWithContext(aws.Context, *backup.ListRestoreJobSummariesInput, ...request.Option) (*backup.ListRestoreJobSummariesOutput, error) + ListRestoreJobSummariesRequest(*backup.ListRestoreJobSummariesInput) (*request.Request, *backup.ListRestoreJobSummariesOutput) + + ListRestoreJobSummariesPages(*backup.ListRestoreJobSummariesInput, func(*backup.ListRestoreJobSummariesOutput, bool) bool) error + ListRestoreJobSummariesPagesWithContext(aws.Context, *backup.ListRestoreJobSummariesInput, func(*backup.ListRestoreJobSummariesOutput, bool) bool, ...request.Option) error + ListRestoreJobs(*backup.ListRestoreJobsInput) (*backup.ListRestoreJobsOutput, error) ListRestoreJobsWithContext(aws.Context, *backup.ListRestoreJobsInput, ...request.Option) (*backup.ListRestoreJobsOutput, error) ListRestoreJobsRequest(*backup.ListRestoreJobsInput) (*request.Request, *backup.ListRestoreJobsOutput) diff --git a/service/cleanrooms/api.go b/service/cleanrooms/api.go index a73e08a64aa..5bd9e0d90a5 100644 --- a/service/cleanrooms/api.go +++ b/service/cleanrooms/api.go @@ -7714,6 +7714,13 @@ type CreateCollaborationInput struct { // CreatorMemberAbilities is a required field CreatorMemberAbilities []*string `locationName:"creatorMemberAbilities" type:"list" required:"true" enum:"MemberAbility"` + // The collaboration creator's payment responsibilities set by the collaboration + // creator. + // + // If the collaboration creator hasn't specified anyone as the member paying + // for query compute costs, then the member who can query is the default payer. + CreatorPaymentConfiguration *PaymentConfiguration `locationName:"creatorPaymentConfiguration" type:"structure"` + // The settings for client-side encryption with Cryptographic Computing for // Clean Rooms. 
DataEncryptionMetadata *DataEncryptionMetadata `locationName:"dataEncryptionMetadata" type:"structure"` @@ -7794,6 +7801,11 @@ func (s *CreateCollaborationInput) Validate() error { if s.QueryLogStatus == nil { invalidParams.Add(request.NewErrParamRequired("QueryLogStatus")) } + if s.CreatorPaymentConfiguration != nil { + if err := s.CreatorPaymentConfiguration.Validate(); err != nil { + invalidParams.AddNested("CreatorPaymentConfiguration", err.(request.ErrInvalidParams)) + } + } if s.DataEncryptionMetadata != nil { if err := s.DataEncryptionMetadata.Validate(); err != nil { invalidParams.AddNested("DataEncryptionMetadata", err.(request.ErrInvalidParams)) @@ -7828,6 +7840,12 @@ func (s *CreateCollaborationInput) SetCreatorMemberAbilities(v []*string) *Creat return s } +// SetCreatorPaymentConfiguration sets the CreatorPaymentConfiguration field's value. +func (s *CreateCollaborationInput) SetCreatorPaymentConfiguration(v *PaymentConfiguration) *CreateCollaborationInput { + s.CreatorPaymentConfiguration = v + return s +} + // SetDataEncryptionMetadata sets the DataEncryptionMetadata field's value. func (s *CreateCollaborationInput) SetDataEncryptionMetadata(v *DataEncryptionMetadata) *CreateCollaborationInput { s.DataEncryptionMetadata = v @@ -8336,8 +8354,16 @@ type CreateMembershipInput struct { // who can receive results. DefaultResultConfiguration *MembershipProtectedQueryResultConfiguration `locationName:"defaultResultConfiguration" type:"structure"` + // The payment responsibilities accepted by the collaboration member. + // + // Not required if the collaboration member has the member ability to run queries. + // + // Required if the collaboration member doesn't have the member ability to run + // queries but is configured as a payer by the collaboration creator. + PaymentConfiguration *MembershipPaymentConfiguration `locationName:"paymentConfiguration" type:"structure"` + // An indicator as to whether query logging has been enabled or disabled for - // the collaboration. + // the membership. // // QueryLogStatus is a required field QueryLogStatus *string `locationName:"queryLogStatus" type:"string" required:"true" enum:"MembershipQueryLogStatus"` @@ -8384,6 +8410,11 @@ func (s *CreateMembershipInput) Validate() error { invalidParams.AddNested("DefaultResultConfiguration", err.(request.ErrInvalidParams)) } } + if s.PaymentConfiguration != nil { + if err := s.PaymentConfiguration.Validate(); err != nil { + invalidParams.AddNested("PaymentConfiguration", err.(request.ErrInvalidParams)) + } + } if invalidParams.Len() > 0 { return invalidParams @@ -8403,6 +8434,12 @@ func (s *CreateMembershipInput) SetDefaultResultConfiguration(v *MembershipProte return s } +// SetPaymentConfiguration sets the PaymentConfiguration field's value. +func (s *CreateMembershipInput) SetPaymentConfiguration(v *MembershipPaymentConfiguration) *CreateMembershipInput { + s.PaymentConfiguration = v + return s +} + // SetQueryLogStatus sets the QueryLogStatus field's value. func (s *CreateMembershipInput) SetQueryLogStatus(v string) *CreateMembershipInput { s.QueryLogStatus = &v @@ -8452,27 +8489,27 @@ func (s *CreateMembershipOutput) SetMembership(v *Membership) *CreateMembershipO type DataEncryptionMetadata struct { _ struct{} `type:"structure"` - // Indicates whether encrypted tables can contain cleartext data (true) or are - // to cryptographically process every column (false). + // Indicates whether encrypted tables can contain cleartext data (TRUE) or are + // to cryptographically process every column (FALSE). 
// // AllowCleartext is a required field AllowCleartext *bool `locationName:"allowCleartext" type:"boolean" required:"true"` - // Indicates whether Fingerprint columns can contain duplicate entries (true) - // or are to contain only non-repeated values (false). + // Indicates whether Fingerprint columns can contain duplicate entries (TRUE) + // or are to contain only non-repeated values (FALSE). // // AllowDuplicates is a required field AllowDuplicates *bool `locationName:"allowDuplicates" type:"boolean" required:"true"` // Indicates whether Fingerprint columns can be joined on any other Fingerprint - // column with a different name (true) or can only be joined on Fingerprint - // columns of the same name (false). + // column with a different name (TRUE) or can only be joined on Fingerprint + // columns of the same name (FALSE). // // AllowJoinsOnColumnsWithDifferentNames is a required field AllowJoinsOnColumnsWithDifferentNames *bool `locationName:"allowJoinsOnColumnsWithDifferentNames" type:"boolean" required:"true"` // Indicates whether NULL values are to be copied as NULL to encrypted tables - // (true) or cryptographically processed (false). + // (TRUE) or cryptographically processed (FALSE). // // PreserveNulls is a required field PreserveNulls *bool `locationName:"preserveNulls" type:"boolean" required:"true"` @@ -9773,7 +9810,7 @@ type GetProtectedQueryInput struct { // The identifier for a protected query instance. // // ProtectedQueryIdentifier is a required field - ProtectedQueryIdentifier *string `location:"uri" locationName:"protectedQueryIdentifier" min:"1" type:"string" required:"true"` + ProtectedQueryIdentifier *string `location:"uri" locationName:"protectedQueryIdentifier" min:"36" type:"string" required:"true"` } // String returns the string representation. @@ -9806,8 +9843,8 @@ func (s *GetProtectedQueryInput) Validate() error { if s.ProtectedQueryIdentifier == nil { invalidParams.Add(request.NewErrParamRequired("ProtectedQueryIdentifier")) } - if s.ProtectedQueryIdentifier != nil && len(*s.ProtectedQueryIdentifier) < 1 { - invalidParams.Add(request.NewErrParamMinLen("ProtectedQueryIdentifier", 1)) + if s.ProtectedQueryIdentifier != nil && len(*s.ProtectedQueryIdentifier) < 36 { + invalidParams.Add(request.NewErrParamMinLen("ProtectedQueryIdentifier", 36)) } if invalidParams.Len() > 0 { @@ -11326,6 +11363,13 @@ type MemberSpecification struct { // // MemberAbilities is a required field MemberAbilities []*string `locationName:"memberAbilities" type:"list" required:"true" enum:"MemberAbility"` + + // The collaboration member's payment responsibilities set by the collaboration + // creator. + // + // If the collaboration creator hasn't specified anyone as the member paying + // for query compute costs, then the member who can query is the default payer. + PaymentConfiguration *PaymentConfiguration `locationName:"paymentConfiguration" type:"structure"` } // String returns the string representation. 
@@ -11364,6 +11408,11 @@ func (s *MemberSpecification) Validate() error { if s.MemberAbilities == nil { invalidParams.Add(request.NewErrParamRequired("MemberAbilities")) } + if s.PaymentConfiguration != nil { + if err := s.PaymentConfiguration.Validate(); err != nil { + invalidParams.AddNested("PaymentConfiguration", err.(request.ErrInvalidParams)) + } + } if invalidParams.Len() > 0 { return invalidParams @@ -11389,6 +11438,12 @@ func (s *MemberSpecification) SetMemberAbilities(v []*string) *MemberSpecificati return s } +// SetPaymentConfiguration sets the PaymentConfiguration field's value. +func (s *MemberSpecification) SetPaymentConfiguration(v *PaymentConfiguration) *MemberSpecification { + s.PaymentConfiguration = v + return s +} + // The member object listed by the request. type MemberSummary struct { _ struct{} `type:"structure"` @@ -11420,8 +11475,13 @@ type MemberSummary struct { // The unique ID for the member's associated membership, if present. MembershipId *string `locationName:"membershipId" min:"36" type:"string"` - // The status of the member. Valid values are `INVITED`, `ACTIVE`, `LEFT`, and - // `REMOVED`. + // The collaboration member's payment responsibilities set by the collaboration + // creator. + // + // PaymentConfiguration is a required field + PaymentConfiguration *PaymentConfiguration `locationName:"paymentConfiguration" type:"structure" required:"true"` + + // The status of the member. // // Status is a required field Status *string `locationName:"status" type:"string" required:"true" enum:"MemberStatus"` @@ -11486,6 +11546,12 @@ func (s *MemberSummary) SetMembershipId(v string) *MemberSummary { return s } +// SetPaymentConfiguration sets the PaymentConfiguration field's value. +func (s *MemberSummary) SetPaymentConfiguration(v *PaymentConfiguration) *MemberSummary { + s.PaymentConfiguration = v + return s +} + // SetStatus sets the Status field's value. func (s *MemberSummary) SetStatus(v string) *MemberSummary { s.Status = &v @@ -11552,13 +11618,18 @@ type Membership struct { // MemberAbilities is a required field MemberAbilities []*string `locationName:"memberAbilities" type:"list" required:"true" enum:"MemberAbility"` + // The payment responsibilities accepted by the collaboration member. + // + // PaymentConfiguration is a required field + PaymentConfiguration *MembershipPaymentConfiguration `locationName:"paymentConfiguration" type:"structure" required:"true"` + // An indicator as to whether query logging has been enabled or disabled for - // the collaboration. + // the membership. // // QueryLogStatus is a required field QueryLogStatus *string `locationName:"queryLogStatus" type:"string" required:"true" enum:"MembershipQueryLogStatus"` - // The status of the membership. Valid values are `ACTIVE`, `REMOVED`, and `COLLABORATION_DELETED`. + // The status of the membership. // // Status is a required field Status *string `locationName:"status" type:"string" required:"true" enum:"MembershipStatus"` @@ -11647,6 +11718,12 @@ func (s *Membership) SetMemberAbilities(v []*string) *Membership { return s } +// SetPaymentConfiguration sets the PaymentConfiguration field's value. +func (s *Membership) SetPaymentConfiguration(v *MembershipPaymentConfiguration) *Membership { + s.PaymentConfiguration = v + return s +} + // SetQueryLogStatus sets the QueryLogStatus field's value. 
func (s *Membership) SetQueryLogStatus(v string) *Membership { s.QueryLogStatus = &v @@ -11665,6 +11742,60 @@ func (s *Membership) SetUpdateTime(v time.Time) *Membership { return s } +// An object representing the payment responsibilities accepted by the collaboration +// member. +type MembershipPaymentConfiguration struct { + _ struct{} `type:"structure"` + + // The payment responsibilities accepted by the collaboration member for query + // compute costs. + // + // QueryCompute is a required field + QueryCompute *MembershipQueryComputePaymentConfig `locationName:"queryCompute" type:"structure" required:"true"` +} + +// String returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s MembershipPaymentConfiguration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s MembershipPaymentConfiguration) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *MembershipPaymentConfiguration) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "MembershipPaymentConfiguration"} + if s.QueryCompute == nil { + invalidParams.Add(request.NewErrParamRequired("QueryCompute")) + } + if s.QueryCompute != nil { + if err := s.QueryCompute.Validate(); err != nil { + invalidParams.AddNested("QueryCompute", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetQueryCompute sets the QueryCompute field's value. +func (s *MembershipPaymentConfiguration) SetQueryCompute(v *MembershipQueryComputePaymentConfig) *MembershipPaymentConfiguration { + s.QueryCompute = v + return s +} + // Contains configurations for protected query results. type MembershipProtectedQueryOutputConfiguration struct { _ struct{} `type:"structure"` @@ -11778,6 +11909,66 @@ func (s *MembershipProtectedQueryResultConfiguration) SetRoleArn(v string) *Memb return s } +// An object representing the payment responsibilities accepted by the collaboration +// member for query compute costs. +type MembershipQueryComputePaymentConfig struct { + _ struct{} `type:"structure"` + + // Indicates whether the collaboration member has accepted to pay for query + // compute costs (TRUE) or has not accepted to pay for query compute costs (FALSE). + // + // If the collaboration creator has not specified anyone to pay for query compute + // costs, then the member who can query is the default payer. + // + // An error message is returned for the following reasons: + // + // * If you set the value to FALSE but you are responsible to pay for query + // compute costs. + // + // * If you set the value to TRUE but you are not responsible to pay for + // query compute costs. + // + // IsResponsible is a required field + IsResponsible *bool `locationName:"isResponsible" type:"boolean" required:"true"` +} + +// String returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". 
+func (s MembershipQueryComputePaymentConfig) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s MembershipQueryComputePaymentConfig) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *MembershipQueryComputePaymentConfig) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "MembershipQueryComputePaymentConfig"} + if s.IsResponsible == nil { + invalidParams.Add(request.NewErrParamRequired("IsResponsible")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetIsResponsible sets the IsResponsible field's value. +func (s *MembershipQueryComputePaymentConfig) SetIsResponsible(v bool) *MembershipQueryComputePaymentConfig { + s.IsResponsible = &v + return s +} + // The membership object listed by the request. type MembershipSummary struct { _ struct{} `type:"structure"` @@ -11828,7 +12019,12 @@ type MembershipSummary struct { // MemberAbilities is a required field MemberAbilities []*string `locationName:"memberAbilities" type:"list" required:"true" enum:"MemberAbility"` - // The status of the membership. Valid values are `ACTIVE`, `REMOVED`, and `COLLABORATION_DELETED`. + // The payment responsibilities accepted by the collaboration member. + // + // PaymentConfiguration is a required field + PaymentConfiguration *MembershipPaymentConfiguration `locationName:"paymentConfiguration" type:"structure" required:"true"` + + // The status of the membership. // // Status is a required field Status *string `locationName:"status" type:"string" required:"true" enum:"MembershipStatus"` @@ -11911,6 +12107,12 @@ func (s *MembershipSummary) SetMemberAbilities(v []*string) *MembershipSummary { return s } +// SetPaymentConfiguration sets the PaymentConfiguration field's value. +func (s *MembershipSummary) SetPaymentConfiguration(v *MembershipPaymentConfiguration) *MembershipSummary { + s.PaymentConfiguration = v + return s +} + // SetStatus sets the Status field's value. func (s *MembershipSummary) SetStatus(v string) *MembershipSummary { s.Status = &v @@ -11923,6 +12125,60 @@ func (s *MembershipSummary) SetUpdateTime(v time.Time) *MembershipSummary { return s } +// An object representing the collaboration member's payment responsibilities +// set by the collaboration creator. +type PaymentConfiguration struct { + _ struct{} `type:"structure"` + + // The collaboration member's payment responsibilities set by the collaboration + // creator for query compute costs. + // + // QueryCompute is a required field + QueryCompute *QueryComputePaymentConfig `locationName:"queryCompute" type:"structure" required:"true"` +} + +// String returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s PaymentConfiguration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". 
+func (s PaymentConfiguration) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *PaymentConfiguration) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PaymentConfiguration"} + if s.QueryCompute == nil { + invalidParams.Add(request.NewErrParamRequired("QueryCompute")) + } + if s.QueryCompute != nil { + if err := s.QueryCompute.Validate(); err != nil { + invalidParams.AddNested("QueryCompute", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetQueryCompute sets the QueryCompute field's value. +func (s *PaymentConfiguration) SetQueryCompute(v *QueryComputePaymentConfig) *PaymentConfiguration { + s.QueryCompute = v + return s +} + // The parameters for an Clean Rooms protected query. type ProtectedQuery struct { _ struct{} `type:"structure"` @@ -12574,6 +12830,65 @@ func (s *ProtectedQuerySummary) SetStatus(v string) *ProtectedQuerySummary { return s } +// An object representing the collaboration member's payment responsibilities +// set by the collaboration creator for query compute costs. +type QueryComputePaymentConfig struct { + _ struct{} `type:"structure"` + + // Indicates whether the collaboration creator has configured the collaboration + // member to pay for query compute costs (TRUE) or has not configured the collaboration + // member to pay for query compute costs (FALSE). + // + // Exactly one member can be configured to pay for query compute costs. An error + // is returned if the collaboration creator sets a TRUE value for more than + // one member in the collaboration. + // + // If the collaboration creator hasn't specified anyone as the member paying + // for query compute costs, then the member who can query is the default payer. + // An error is returned if the collaboration creator sets a FALSE value for + // the member who can query. + // + // IsResponsible is a required field + IsResponsible *bool `locationName:"isResponsible" type:"boolean" required:"true"` +} + +// String returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s QueryComputePaymentConfig) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s QueryComputePaymentConfig) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *QueryComputePaymentConfig) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "QueryComputePaymentConfig"} + if s.IsResponsible == nil { + invalidParams.Add(request.NewErrParamRequired("IsResponsible")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetIsResponsible sets the IsResponsible field's value. +func (s *QueryComputePaymentConfig) SetIsResponsible(v bool) *QueryComputePaymentConfig { + s.IsResponsible = &v + return s +} + // Request references a resource which does not exist. 
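// Example (illustrative sketch): designating the collaboration creator as the
// payer for query compute costs when creating a Clean Rooms collaboration.
// Other required collaboration fields (name, members, creator abilities, query
// log status, and so on) are elided for brevity; svc is assumed to be a
// configured *cleanrooms.CleanRooms client.
//
//	input := &cleanrooms.CreateCollaborationInput{
//		// ... name, description, members, creator abilities, query log status ...
//		CreatorPaymentConfiguration: &cleanrooms.PaymentConfiguration{
//			QueryCompute: &cleanrooms.QueryComputePaymentConfig{
//				IsResponsible: aws.Bool(true),
//			},
//		},
//	}
//	_, err := svc.CreateCollaboration(input)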
type ResourceNotFoundException struct { _ struct{} `type:"structure"` @@ -13986,7 +14301,7 @@ type UpdateMembershipInput struct { MembershipIdentifier *string `location:"uri" locationName:"membershipIdentifier" min:"36" type:"string" required:"true"` // An indicator as to whether query logging has been enabled or disabled for - // the collaboration. + // the membership. QueryLogStatus *string `locationName:"queryLogStatus" type:"string" enum:"MembershipQueryLogStatus"` } @@ -14091,7 +14406,7 @@ type UpdateProtectedQueryInput struct { // The identifier for a protected query instance. // // ProtectedQueryIdentifier is a required field - ProtectedQueryIdentifier *string `location:"uri" locationName:"protectedQueryIdentifier" min:"1" type:"string" required:"true"` + ProtectedQueryIdentifier *string `location:"uri" locationName:"protectedQueryIdentifier" min:"36" type:"string" required:"true"` // The target status of a query. Used to update the execution status of a currently // running query. @@ -14130,8 +14445,8 @@ func (s *UpdateProtectedQueryInput) Validate() error { if s.ProtectedQueryIdentifier == nil { invalidParams.Add(request.NewErrParamRequired("ProtectedQueryIdentifier")) } - if s.ProtectedQueryIdentifier != nil && len(*s.ProtectedQueryIdentifier) < 1 { - invalidParams.Add(request.NewErrParamMinLen("ProtectedQueryIdentifier", 1)) + if s.ProtectedQueryIdentifier != nil && len(*s.ProtectedQueryIdentifier) < 36 { + invalidParams.Add(request.NewErrParamMinLen("ProtectedQueryIdentifier", 36)) } if s.TargetStatus == nil { invalidParams.Add(request.NewErrParamRequired("TargetStatus")) diff --git a/service/connect/api.go b/service/connect/api.go index 478777f3cad..d86debd9381 100644 --- a/service/connect/api.go +++ b/service/connect/api.go @@ -55963,6 +55963,52 @@ func (s *SecurityProfilesSearchFilter) SetTagFilter(v *ControlPlaneTagFilter) *S return s } +// A value for a segment attribute. This is structured as a map where the key +// is valueString and the value is a string. +type SegmentAttributeValue struct { + _ struct{} `type:"structure"` + + // The value of a segment attribute. + ValueString *string `min:"1" type:"string"` +} + +// String returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s SegmentAttributeValue) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s SegmentAttributeValue) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *SegmentAttributeValue) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "SegmentAttributeValue"} + if s.ValueString != nil && len(*s.ValueString) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ValueString", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetValueString sets the ValueString field's value. +func (s *SegmentAttributeValue) SetValueString(v string) *SegmentAttributeValue { + s.ValueString = &v + return s +} + // Information about the send notification action. 
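
The following is an illustrative sketch, not part of the generated file: it builds the map expected by the new SegmentAttributes field that StartChatContactInput gains below, using the SegmentAttributeValue type added above. The import path and helper name are assumptions; the "connect:Subtype" key and "connect:Guide" value come from the field documentation.

package example

import "github.com/aws/aws-sdk-go/service/connect"

// guideChatSegmentAttributes returns segment attributes that mark a chat
// contact with the connect:Guide channel subtype. The result can be passed to
// (*StartChatContactInput).SetSegmentAttributes.
func guideChatSegmentAttributes() map[string]*connect.SegmentAttributeValue {
	return map[string]*connect.SegmentAttributeValue{
		"connect:Subtype": (&connect.SegmentAttributeValue{}).SetValueString("connect:Guide"),
	}
}

As noted in the SegmentAttributes documentation below, the interactive message content types must also be listed in SupportedMessagingContentTypes when this subtype is set.
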
type SendNotificationActionDefinition struct { _ struct{} `type:"structure"` @@ -56401,6 +56447,20 @@ type StartChatContactInput struct { // You cannot provide data for both RelatedContactId and PersistentChat. RelatedContactId *string `min:"1" type:"string"` + // A set of system defined key-value pairs stored on individual contact segments + // using an attribute map. The attributes are standard Amazon Connect attributes. + // They can be accessed in flows. + // + // Attribute keys can include only alphanumeric, -, and _. + // + // This field can be used to show channel subtype, such as connect:Guide. + // + // The types application/vnd.amazonaws.connect.message.interactive and application/vnd.amazonaws.connect.message.interactive.response + // must be present in the SupportedMessagingContentTypes field of this API in + // order to set SegmentAttributes as {"connect:Subtype": {"valueString" : "connect:Guide" + // }}. + SegmentAttributes map[string]*SegmentAttributeValue `type:"map"` + // The supported chat message content types. Supported types are text/plain, // text/markdown, application/json, application/vnd.amazonaws.connect.message.interactive, // and application/vnd.amazonaws.connect.message.interactive.response. @@ -56470,6 +56530,16 @@ func (s *StartChatContactInput) Validate() error { invalidParams.AddNested("PersistentChat", err.(request.ErrInvalidParams)) } } + if s.SegmentAttributes != nil { + for i, v := range s.SegmentAttributes { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "SegmentAttributes", i), err.(request.ErrInvalidParams)) + } + } + } if invalidParams.Len() > 0 { return invalidParams @@ -56531,6 +56601,12 @@ func (s *StartChatContactInput) SetRelatedContactId(v string) *StartChatContactI return s } +// SetSegmentAttributes sets the SegmentAttributes field's value. +func (s *StartChatContactInput) SetSegmentAttributes(v map[string]*SegmentAttributeValue) *StartChatContactInput { + s.SegmentAttributes = v + return s +} + // SetSupportedMessagingContentTypes sets the SupportedMessagingContentTypes field's value. func (s *StartChatContactInput) SetSupportedMessagingContentTypes(v []*string) *StartChatContactInput { s.SupportedMessagingContentTypes = v diff --git a/service/glue/api.go b/service/glue/api.go index 31095548ccf..ecff23b795e 100644 --- a/service/glue/api.go +++ b/service/glue/api.go @@ -1089,6 +1089,84 @@ func (c *Glue) BatchGetPartitionWithContext(ctx aws.Context, input *BatchGetPart return out, req.Send() } +const opBatchGetTableOptimizer = "BatchGetTableOptimizer" + +// BatchGetTableOptimizerRequest generates a "aws/request.Request" representing the +// client's request for the BatchGetTableOptimizer operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See BatchGetTableOptimizer for more information on using the BatchGetTableOptimizer +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// // Example sending a request using the BatchGetTableOptimizerRequest method. 
+// req, resp := client.BatchGetTableOptimizerRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/BatchGetTableOptimizer +func (c *Glue) BatchGetTableOptimizerRequest(input *BatchGetTableOptimizerInput) (req *request.Request, output *BatchGetTableOptimizerOutput) { + op := &request.Operation{ + Name: opBatchGetTableOptimizer, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &BatchGetTableOptimizerInput{} + } + + output = &BatchGetTableOptimizerOutput{} + req = c.newRequest(op, input, output) + return +} + +// BatchGetTableOptimizer API operation for AWS Glue. +// +// Returns the configuration for the specified table optimizers. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Glue's +// API operation BatchGetTableOptimizer for usage and error information. +// +// Returned Error Types: +// - InternalServiceException +// An internal service error occurred. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/BatchGetTableOptimizer +func (c *Glue) BatchGetTableOptimizer(input *BatchGetTableOptimizerInput) (*BatchGetTableOptimizerOutput, error) { + req, out := c.BatchGetTableOptimizerRequest(input) + return out, req.Send() +} + +// BatchGetTableOptimizerWithContext is the same as BatchGetTableOptimizer with the addition of +// the ability to pass a context and additional request options. +// +// See BatchGetTableOptimizer for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Glue) BatchGetTableOptimizerWithContext(ctx aws.Context, input *BatchGetTableOptimizerInput, opts ...request.Option) (*BatchGetTableOptimizerOutput, error) { + req, out := c.BatchGetTableOptimizerRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opBatchGetTriggers = "BatchGetTriggers" // BatchGetTriggersRequest generates a "aws/request.Request" representing the @@ -3641,6 +3719,99 @@ func (c *Glue) CreateTableWithContext(ctx aws.Context, input *CreateTableInput, return out, req.Send() } +const opCreateTableOptimizer = "CreateTableOptimizer" + +// CreateTableOptimizerRequest generates a "aws/request.Request" representing the +// client's request for the CreateTableOptimizer operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateTableOptimizer for more information on using the CreateTableOptimizer +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// // Example sending a request using the CreateTableOptimizerRequest method. 
+// req, resp := client.CreateTableOptimizerRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/CreateTableOptimizer +func (c *Glue) CreateTableOptimizerRequest(input *CreateTableOptimizerInput) (req *request.Request, output *CreateTableOptimizerOutput) { + op := &request.Operation{ + Name: opCreateTableOptimizer, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateTableOptimizerInput{} + } + + output = &CreateTableOptimizerOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Swap(jsonrpc.UnmarshalHandler.Name, protocol.UnmarshalDiscardBodyHandler) + return +} + +// CreateTableOptimizer API operation for AWS Glue. +// +// Creates a new table optimizer for a specific function. compaction is the +// only currently supported optimizer type. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Glue's +// API operation CreateTableOptimizer for usage and error information. +// +// Returned Error Types: +// +// - EntityNotFoundException +// A specified entity does not exist +// +// - InvalidInputException +// The input provided was not valid. +// +// - AccessDeniedException +// Access to a resource was denied. +// +// - AlreadyExistsException +// A resource to be created or added already exists. +// +// - InternalServiceException +// An internal service error occurred. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/CreateTableOptimizer +func (c *Glue) CreateTableOptimizer(input *CreateTableOptimizerInput) (*CreateTableOptimizerOutput, error) { + req, out := c.CreateTableOptimizerRequest(input) + return out, req.Send() +} + +// CreateTableOptimizerWithContext is the same as CreateTableOptimizer with the addition of +// the ability to pass a context and additional request options. +// +// See CreateTableOptimizer for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Glue) CreateTableOptimizerWithContext(ctx aws.Context, input *CreateTableOptimizerInput, opts ...request.Option) (*CreateTableOptimizerOutput, error) { + req, out := c.CreateTableOptimizerRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opCreateTrigger = "CreateTrigger" // CreateTriggerRequest generates a "aws/request.Request" representing the @@ -5864,6 +6035,96 @@ func (c *Glue) DeleteTableWithContext(ctx aws.Context, input *DeleteTableInput, return out, req.Send() } +const opDeleteTableOptimizer = "DeleteTableOptimizer" + +// DeleteTableOptimizerRequest generates a "aws/request.Request" representing the +// client's request for the DeleteTableOptimizer operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. 
+// +// See DeleteTableOptimizer for more information on using the DeleteTableOptimizer +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// // Example sending a request using the DeleteTableOptimizerRequest method. +// req, resp := client.DeleteTableOptimizerRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/DeleteTableOptimizer +func (c *Glue) DeleteTableOptimizerRequest(input *DeleteTableOptimizerInput) (req *request.Request, output *DeleteTableOptimizerOutput) { + op := &request.Operation{ + Name: opDeleteTableOptimizer, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteTableOptimizerInput{} + } + + output = &DeleteTableOptimizerOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Swap(jsonrpc.UnmarshalHandler.Name, protocol.UnmarshalDiscardBodyHandler) + return +} + +// DeleteTableOptimizer API operation for AWS Glue. +// +// Deletes an optimizer and all associated metadata for a table. The optimization +// will no longer be performed on the table. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Glue's +// API operation DeleteTableOptimizer for usage and error information. +// +// Returned Error Types: +// +// - EntityNotFoundException +// A specified entity does not exist +// +// - InvalidInputException +// The input provided was not valid. +// +// - AccessDeniedException +// Access to a resource was denied. +// +// - InternalServiceException +// An internal service error occurred. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/DeleteTableOptimizer +func (c *Glue) DeleteTableOptimizer(input *DeleteTableOptimizerInput) (*DeleteTableOptimizerOutput, error) { + req, out := c.DeleteTableOptimizerRequest(input) + return out, req.Send() +} + +// DeleteTableOptimizerWithContext is the same as DeleteTableOptimizer with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteTableOptimizer for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Glue) DeleteTableOptimizerWithContext(ctx aws.Context, input *DeleteTableOptimizerInput, opts ...request.Option) (*DeleteTableOptimizerOutput, error) { + req, out := c.DeleteTableOptimizerRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opDeleteTableVersion = "DeleteTableVersion" // DeleteTableVersionRequest generates a "aws/request.Request" representing the @@ -11536,6 +11797,94 @@ func (c *Glue) GetTableWithContext(ctx aws.Context, input *GetTableInput, opts . return out, req.Send() } +const opGetTableOptimizer = "GetTableOptimizer" + +// GetTableOptimizerRequest generates a "aws/request.Request" representing the +// client's request for the GetTableOptimizer operation. 
The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetTableOptimizer for more information on using the GetTableOptimizer +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// // Example sending a request using the GetTableOptimizerRequest method. +// req, resp := client.GetTableOptimizerRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/GetTableOptimizer +func (c *Glue) GetTableOptimizerRequest(input *GetTableOptimizerInput) (req *request.Request, output *GetTableOptimizerOutput) { + op := &request.Operation{ + Name: opGetTableOptimizer, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetTableOptimizerInput{} + } + + output = &GetTableOptimizerOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetTableOptimizer API operation for AWS Glue. +// +// Returns the configuration of all optimizers associated with a specified table. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Glue's +// API operation GetTableOptimizer for usage and error information. +// +// Returned Error Types: +// +// - EntityNotFoundException +// A specified entity does not exist +// +// - InvalidInputException +// The input provided was not valid. +// +// - AccessDeniedException +// Access to a resource was denied. +// +// - InternalServiceException +// An internal service error occurred. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/GetTableOptimizer +func (c *Glue) GetTableOptimizer(input *GetTableOptimizerInput) (*GetTableOptimizerOutput, error) { + req, out := c.GetTableOptimizerRequest(input) + return out, req.Send() +} + +// GetTableOptimizerWithContext is the same as GetTableOptimizer with the addition of +// the ability to pass a context and additional request options. +// +// See GetTableOptimizer for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Glue) GetTableOptimizerWithContext(ctx aws.Context, input *GetTableOptimizerInput, opts ...request.Option) (*GetTableOptimizerOutput, error) { + req, out := c.GetTableOptimizerRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + const opGetTableVersion = "GetTableVersion" // GetTableVersionRequest generates a "aws/request.Request" representing the @@ -15572,6 +15921,151 @@ func (c *Glue) ListStatementsWithContext(ctx aws.Context, input *ListStatementsI return out, req.Send() } +const opListTableOptimizerRuns = "ListTableOptimizerRuns" + +// ListTableOptimizerRunsRequest generates a "aws/request.Request" representing the +// client's request for the ListTableOptimizerRuns operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListTableOptimizerRuns for more information on using the ListTableOptimizerRuns +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// // Example sending a request using the ListTableOptimizerRunsRequest method. +// req, resp := client.ListTableOptimizerRunsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/ListTableOptimizerRuns +func (c *Glue) ListTableOptimizerRunsRequest(input *ListTableOptimizerRunsInput) (req *request.Request, output *ListTableOptimizerRunsOutput) { + op := &request.Operation{ + Name: opListTableOptimizerRuns, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"NextToken"}, + OutputTokens: []string{"NextToken"}, + LimitToken: "MaxResults", + TruncationToken: "", + }, + } + + if input == nil { + input = &ListTableOptimizerRunsInput{} + } + + output = &ListTableOptimizerRunsOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListTableOptimizerRuns API operation for AWS Glue. +// +// Lists the history of previous optimizer runs for a specific table. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Glue's +// API operation ListTableOptimizerRuns for usage and error information. +// +// Returned Error Types: +// +// - EntityNotFoundException +// A specified entity does not exist +// +// - AccessDeniedException +// Access to a resource was denied. +// +// - InvalidInputException +// The input provided was not valid. +// +// - InternalServiceException +// An internal service error occurred. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/ListTableOptimizerRuns +func (c *Glue) ListTableOptimizerRuns(input *ListTableOptimizerRunsInput) (*ListTableOptimizerRunsOutput, error) { + req, out := c.ListTableOptimizerRunsRequest(input) + return out, req.Send() +} + +// ListTableOptimizerRunsWithContext is the same as ListTableOptimizerRuns with the addition of +// the ability to pass a context and additional request options. +// +// See ListTableOptimizerRuns for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. 
See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Glue) ListTableOptimizerRunsWithContext(ctx aws.Context, input *ListTableOptimizerRunsInput, opts ...request.Option) (*ListTableOptimizerRunsOutput, error) { + req, out := c.ListTableOptimizerRunsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListTableOptimizerRunsPages iterates over the pages of a ListTableOptimizerRuns operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListTableOptimizerRuns method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListTableOptimizerRuns operation. +// pageNum := 0 +// err := client.ListTableOptimizerRunsPages(params, +// func(page *glue.ListTableOptimizerRunsOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +func (c *Glue) ListTableOptimizerRunsPages(input *ListTableOptimizerRunsInput, fn func(*ListTableOptimizerRunsOutput, bool) bool) error { + return c.ListTableOptimizerRunsPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListTableOptimizerRunsPagesWithContext same as ListTableOptimizerRunsPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Glue) ListTableOptimizerRunsPagesWithContext(ctx aws.Context, input *ListTableOptimizerRunsInput, fn func(*ListTableOptimizerRunsOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListTableOptimizerRunsInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListTableOptimizerRunsRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + for p.Next() { + if !fn(p.Page().(*ListTableOptimizerRunsOutput), !p.HasNextPage()) { + break + } + } + + return p.Err() +} + const opListTriggers = "ListTriggers" // ListTriggersRequest generates a "aws/request.Request" representing the @@ -17279,6 +17773,8 @@ func (c *Glue) StartDataQualityRuleRecommendationRunRequest(input *StartDataQual // with recommendations for a potential ruleset. You can then triage the ruleset // and modify the generated ruleset to your liking. // +// Recommendation runs are automatically deleted after 90 days. +// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. @@ -20464,6 +20960,95 @@ func (c *Glue) UpdateTableWithContext(ctx aws.Context, input *UpdateTableInput, return out, req.Send() } +const opUpdateTableOptimizer = "UpdateTableOptimizer" + +// UpdateTableOptimizerRequest generates a "aws/request.Request" representing the +// client's request for the UpdateTableOptimizer operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. 
+// the "output" return value is not valid until after Send returns without error. +// +// See UpdateTableOptimizer for more information on using the UpdateTableOptimizer +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// // Example sending a request using the UpdateTableOptimizerRequest method. +// req, resp := client.UpdateTableOptimizerRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/UpdateTableOptimizer +func (c *Glue) UpdateTableOptimizerRequest(input *UpdateTableOptimizerInput) (req *request.Request, output *UpdateTableOptimizerOutput) { + op := &request.Operation{ + Name: opUpdateTableOptimizer, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateTableOptimizerInput{} + } + + output = &UpdateTableOptimizerOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Swap(jsonrpc.UnmarshalHandler.Name, protocol.UnmarshalDiscardBodyHandler) + return +} + +// UpdateTableOptimizer API operation for AWS Glue. +// +// Updates the configuration for an existing table optimizer. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Glue's +// API operation UpdateTableOptimizer for usage and error information. +// +// Returned Error Types: +// +// - EntityNotFoundException +// A specified entity does not exist +// +// - InvalidInputException +// The input provided was not valid. +// +// - AccessDeniedException +// Access to a resource was denied. +// +// - InternalServiceException +// An internal service error occurred. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/UpdateTableOptimizer +func (c *Glue) UpdateTableOptimizer(input *UpdateTableOptimizerInput) (*UpdateTableOptimizerOutput, error) { + req, out := c.UpdateTableOptimizerRequest(input) + return out, req.Send() +} + +// UpdateTableOptimizerWithContext is the same as UpdateTableOptimizer with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateTableOptimizer for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Glue) UpdateTableOptimizerWithContext(ctx aws.Context, input *UpdateTableOptimizerInput, opts ...request.Option) (*UpdateTableOptimizerOutput, error) { + req, out := c.UpdateTableOptimizerRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opUpdateTrigger = "UpdateTrigger" // UpdateTriggerRequest generates a "aws/request.Request" representing the @@ -23301,6 +23886,250 @@ func (s *BatchGetPartitionOutput) SetUnprocessedKeys(v []*PartitionValueList) *B return s } +// Represents a table optimizer to retrieve in the BatchGetTableOptimizer operation. +type BatchGetTableOptimizerEntry struct { + _ struct{} `type:"structure"` + + // The Catalog ID of the table. 
+ CatalogId *string `locationName:"catalogId" min:"1" type:"string"` + + // The name of the database in the catalog in which the table resides. + DatabaseName *string `locationName:"databaseName" min:"1" type:"string"` + + // The name of the table. + TableName *string `locationName:"tableName" min:"1" type:"string"` + + // The type of table optimizer. + Type *string `locationName:"type" type:"string" enum:"TableOptimizerType"` +} + +// String returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s BatchGetTableOptimizerEntry) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s BatchGetTableOptimizerEntry) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *BatchGetTableOptimizerEntry) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "BatchGetTableOptimizerEntry"} + if s.CatalogId != nil && len(*s.CatalogId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("CatalogId", 1)) + } + if s.DatabaseName != nil && len(*s.DatabaseName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("DatabaseName", 1)) + } + if s.TableName != nil && len(*s.TableName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TableName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCatalogId sets the CatalogId field's value. +func (s *BatchGetTableOptimizerEntry) SetCatalogId(v string) *BatchGetTableOptimizerEntry { + s.CatalogId = &v + return s +} + +// SetDatabaseName sets the DatabaseName field's value. +func (s *BatchGetTableOptimizerEntry) SetDatabaseName(v string) *BatchGetTableOptimizerEntry { + s.DatabaseName = &v + return s +} + +// SetTableName sets the TableName field's value. +func (s *BatchGetTableOptimizerEntry) SetTableName(v string) *BatchGetTableOptimizerEntry { + s.TableName = &v + return s +} + +// SetType sets the Type field's value. +func (s *BatchGetTableOptimizerEntry) SetType(v string) *BatchGetTableOptimizerEntry { + s.Type = &v + return s +} + +// Contains details on one of the errors in the error list returned by the BatchGetTableOptimizer +// operation. +type BatchGetTableOptimizerError struct { + _ struct{} `type:"structure"` + + // The Catalog ID of the table. + CatalogId *string `locationName:"catalogId" min:"1" type:"string"` + + // The name of the database in the catalog in which the table resides. + DatabaseName *string `locationName:"databaseName" min:"1" type:"string"` + + // An ErrorDetail object containing code and message details about the error. + Error *ErrorDetail `locationName:"error" type:"structure"` + + // The name of the table. + TableName *string `locationName:"tableName" min:"1" type:"string"` + + // The type of table optimizer. + Type *string `locationName:"type" type:"string" enum:"TableOptimizerType"` +} + +// String returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". 
+func (s BatchGetTableOptimizerError) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s BatchGetTableOptimizerError) GoString() string { + return s.String() +} + +// SetCatalogId sets the CatalogId field's value. +func (s *BatchGetTableOptimizerError) SetCatalogId(v string) *BatchGetTableOptimizerError { + s.CatalogId = &v + return s +} + +// SetDatabaseName sets the DatabaseName field's value. +func (s *BatchGetTableOptimizerError) SetDatabaseName(v string) *BatchGetTableOptimizerError { + s.DatabaseName = &v + return s +} + +// SetError sets the Error field's value. +func (s *BatchGetTableOptimizerError) SetError(v *ErrorDetail) *BatchGetTableOptimizerError { + s.Error = v + return s +} + +// SetTableName sets the TableName field's value. +func (s *BatchGetTableOptimizerError) SetTableName(v string) *BatchGetTableOptimizerError { + s.TableName = &v + return s +} + +// SetType sets the Type field's value. +func (s *BatchGetTableOptimizerError) SetType(v string) *BatchGetTableOptimizerError { + s.Type = &v + return s +} + +type BatchGetTableOptimizerInput struct { + _ struct{} `type:"structure"` + + // A list of BatchGetTableOptimizerEntry objects specifying the table optimizers + // to retrieve. + // + // Entries is a required field + Entries []*BatchGetTableOptimizerEntry `type:"list" required:"true"` +} + +// String returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s BatchGetTableOptimizerInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s BatchGetTableOptimizerInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *BatchGetTableOptimizerInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "BatchGetTableOptimizerInput"} + if s.Entries == nil { + invalidParams.Add(request.NewErrParamRequired("Entries")) + } + if s.Entries != nil { + for i, v := range s.Entries { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Entries", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetEntries sets the Entries field's value. +func (s *BatchGetTableOptimizerInput) SetEntries(v []*BatchGetTableOptimizerEntry) *BatchGetTableOptimizerInput { + s.Entries = v + return s +} + +type BatchGetTableOptimizerOutput struct { + _ struct{} `type:"structure"` + + // A list of errors from the operation. + Failures []*BatchGetTableOptimizerError `type:"list"` + + // A list of BatchTableOptimizer objects. + TableOptimizers []*BatchTableOptimizer `type:"list"` +} + +// String returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. 
The member name will be present, but the
+// value will be replaced with "sensitive".
+func (s BatchGetTableOptimizerOutput) String() string {
+	return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation.
+//
+// API parameter values that are decorated as "sensitive" in the API will not
+// be included in the string output. The member name will be present, but the
+// value will be replaced with "sensitive".
+func (s BatchGetTableOptimizerOutput) GoString() string {
+	return s.String()
+}
+
+// SetFailures sets the Failures field's value.
+func (s *BatchGetTableOptimizerOutput) SetFailures(v []*BatchGetTableOptimizerError) *BatchGetTableOptimizerOutput {
+	s.Failures = v
+	return s
+}
+
+// SetTableOptimizers sets the TableOptimizers field's value.
+func (s *BatchGetTableOptimizerOutput) SetTableOptimizers(v []*BatchTableOptimizer) *BatchGetTableOptimizerOutput {
+	s.TableOptimizers = v
+	return s
+}
+
 type BatchGetTriggersInput struct {
 	_ struct{} `type:"structure"`
 
@@ -23686,6 +24515,67 @@ func (s *BatchStopJobRunSuccessfulSubmission) SetJobRunId(v string) *BatchStopJo
 	return s
 }
 
+// Contains details for one of the table optimizers returned by the BatchGetTableOptimizer
+// operation.
+type BatchTableOptimizer struct {
+	_ struct{} `type:"structure"`
+
+	// The Catalog ID of the table.
+	CatalogId *string `locationName:"catalogId" min:"1" type:"string"`
+
+	// The name of the database in the catalog in which the table resides.
+	DatabaseName *string `locationName:"databaseName" min:"1" type:"string"`
+
+	// The name of the table.
+	TableName *string `locationName:"tableName" min:"1" type:"string"`
+
+	// A TableOptimizer object that contains details on the configuration and last
+	// run of a table optimizer.
+	TableOptimizer *TableOptimizer `locationName:"tableOptimizer" type:"structure"`
+}
+
+// String returns the string representation.
+//
+// API parameter values that are decorated as "sensitive" in the API will not
+// be included in the string output. The member name will be present, but the
+// value will be replaced with "sensitive".
+func (s BatchTableOptimizer) String() string {
+	return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation.
+//
+// API parameter values that are decorated as "sensitive" in the API will not
+// be included in the string output. The member name will be present, but the
+// value will be replaced with "sensitive".
+func (s BatchTableOptimizer) GoString() string {
+	return s.String()
+}
+
+// SetCatalogId sets the CatalogId field's value.
+func (s *BatchTableOptimizer) SetCatalogId(v string) *BatchTableOptimizer {
+	s.CatalogId = &v
+	return s
+}
+
+// SetDatabaseName sets the DatabaseName field's value.
+func (s *BatchTableOptimizer) SetDatabaseName(v string) *BatchTableOptimizer {
+	s.DatabaseName = &v
+	return s
+}
+
+// SetTableName sets the TableName field's value.
+func (s *BatchTableOptimizer) SetTableName(v string) *BatchTableOptimizer {
+	s.TableName = &v
+	return s
+}
+
+// SetTableOptimizer sets the TableOptimizer field's value.
+func (s *BatchTableOptimizer) SetTableOptimizer(v *TableOptimizer) *BatchTableOptimizer {
+	s.TableOptimizer = v
+	return s
+}
+
 // Contains information about a batch update partition error.
type BatchUpdatePartitionFailureEntry struct { _ struct{} `type:"structure"` @@ -33224,6 +34114,145 @@ func (s *CreateTableInput) SetTransactionId(v string) *CreateTableInput { return s } +type CreateTableOptimizerInput struct { + _ struct{} `type:"structure"` + + // The Catalog ID of the table. + // + // CatalogId is a required field + CatalogId *string `min:"1" type:"string" required:"true"` + + // The name of the database in the catalog in which the table resides. + // + // DatabaseName is a required field + DatabaseName *string `min:"1" type:"string" required:"true"` + + // The name of the table. + // + // TableName is a required field + TableName *string `min:"1" type:"string" required:"true"` + + // A TableOptimizerConfiguration object representing the configuration of a + // table optimizer. + // + // TableOptimizerConfiguration is a required field + TableOptimizerConfiguration *TableOptimizerConfiguration `type:"structure" required:"true"` + + // The type of table optimizer. Currently, the only valid value is compaction. + // + // Type is a required field + Type *string `type:"string" required:"true" enum:"TableOptimizerType"` +} + +// String returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s CreateTableOptimizerInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s CreateTableOptimizerInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateTableOptimizerInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateTableOptimizerInput"} + if s.CatalogId == nil { + invalidParams.Add(request.NewErrParamRequired("CatalogId")) + } + if s.CatalogId != nil && len(*s.CatalogId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("CatalogId", 1)) + } + if s.DatabaseName == nil { + invalidParams.Add(request.NewErrParamRequired("DatabaseName")) + } + if s.DatabaseName != nil && len(*s.DatabaseName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("DatabaseName", 1)) + } + if s.TableName == nil { + invalidParams.Add(request.NewErrParamRequired("TableName")) + } + if s.TableName != nil && len(*s.TableName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TableName", 1)) + } + if s.TableOptimizerConfiguration == nil { + invalidParams.Add(request.NewErrParamRequired("TableOptimizerConfiguration")) + } + if s.Type == nil { + invalidParams.Add(request.NewErrParamRequired("Type")) + } + if s.TableOptimizerConfiguration != nil { + if err := s.TableOptimizerConfiguration.Validate(); err != nil { + invalidParams.AddNested("TableOptimizerConfiguration", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCatalogId sets the CatalogId field's value. +func (s *CreateTableOptimizerInput) SetCatalogId(v string) *CreateTableOptimizerInput { + s.CatalogId = &v + return s +} + +// SetDatabaseName sets the DatabaseName field's value. 
+func (s *CreateTableOptimizerInput) SetDatabaseName(v string) *CreateTableOptimizerInput { + s.DatabaseName = &v + return s +} + +// SetTableName sets the TableName field's value. +func (s *CreateTableOptimizerInput) SetTableName(v string) *CreateTableOptimizerInput { + s.TableName = &v + return s +} + +// SetTableOptimizerConfiguration sets the TableOptimizerConfiguration field's value. +func (s *CreateTableOptimizerInput) SetTableOptimizerConfiguration(v *TableOptimizerConfiguration) *CreateTableOptimizerInput { + s.TableOptimizerConfiguration = v + return s +} + +// SetType sets the Type field's value. +func (s *CreateTableOptimizerInput) SetType(v string) *CreateTableOptimizerInput { + s.Type = &v + return s +} + +type CreateTableOptimizerOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s CreateTableOptimizerOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s CreateTableOptimizerOutput) GoString() string { + return s.String() +} + type CreateTableOutput struct { _ struct{} `type:"structure"` } @@ -37747,6 +38776,125 @@ func (s *DeleteTableInput) SetTransactionId(v string) *DeleteTableInput { return s } +type DeleteTableOptimizerInput struct { + _ struct{} `type:"structure"` + + // The Catalog ID of the table. + // + // CatalogId is a required field + CatalogId *string `min:"1" type:"string" required:"true"` + + // The name of the database in the catalog in which the table resides. + // + // DatabaseName is a required field + DatabaseName *string `min:"1" type:"string" required:"true"` + + // The name of the table. + // + // TableName is a required field + TableName *string `min:"1" type:"string" required:"true"` + + // The type of table optimizer. + // + // Type is a required field + Type *string `type:"string" required:"true" enum:"TableOptimizerType"` +} + +// String returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s DeleteTableOptimizerInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s DeleteTableOptimizerInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *DeleteTableOptimizerInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteTableOptimizerInput"} + if s.CatalogId == nil { + invalidParams.Add(request.NewErrParamRequired("CatalogId")) + } + if s.CatalogId != nil && len(*s.CatalogId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("CatalogId", 1)) + } + if s.DatabaseName == nil { + invalidParams.Add(request.NewErrParamRequired("DatabaseName")) + } + if s.DatabaseName != nil && len(*s.DatabaseName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("DatabaseName", 1)) + } + if s.TableName == nil { + invalidParams.Add(request.NewErrParamRequired("TableName")) + } + if s.TableName != nil && len(*s.TableName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TableName", 1)) + } + if s.Type == nil { + invalidParams.Add(request.NewErrParamRequired("Type")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCatalogId sets the CatalogId field's value. +func (s *DeleteTableOptimizerInput) SetCatalogId(v string) *DeleteTableOptimizerInput { + s.CatalogId = &v + return s +} + +// SetDatabaseName sets the DatabaseName field's value. +func (s *DeleteTableOptimizerInput) SetDatabaseName(v string) *DeleteTableOptimizerInput { + s.DatabaseName = &v + return s +} + +// SetTableName sets the TableName field's value. +func (s *DeleteTableOptimizerInput) SetTableName(v string) *DeleteTableOptimizerInput { + s.TableName = &v + return s +} + +// SetType sets the Type field's value. +func (s *DeleteTableOptimizerInput) SetType(v string) *DeleteTableOptimizerInput { + s.Type = &v + return s +} + +type DeleteTableOptimizerOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s DeleteTableOptimizerOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s DeleteTableOptimizerOutput) GoString() string { + return s.String() +} + type DeleteTableOutput struct { _ struct{} `type:"structure"` } @@ -47407,6 +48555,161 @@ func (s *GetTableInput) SetTransactionId(v string) *GetTableInput { return s } +type GetTableOptimizerInput struct { + _ struct{} `type:"structure"` + + // The Catalog ID of the table. + // + // CatalogId is a required field + CatalogId *string `min:"1" type:"string" required:"true"` + + // The name of the database in the catalog in which the table resides. + // + // DatabaseName is a required field + DatabaseName *string `min:"1" type:"string" required:"true"` + + // The name of the table. + // + // TableName is a required field + TableName *string `min:"1" type:"string" required:"true"` + + // The type of table optimizer. + // + // Type is a required field + Type *string `type:"string" required:"true" enum:"TableOptimizerType"` +} + +// String returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". 
+func (s GetTableOptimizerInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s GetTableOptimizerInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetTableOptimizerInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetTableOptimizerInput"} + if s.CatalogId == nil { + invalidParams.Add(request.NewErrParamRequired("CatalogId")) + } + if s.CatalogId != nil && len(*s.CatalogId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("CatalogId", 1)) + } + if s.DatabaseName == nil { + invalidParams.Add(request.NewErrParamRequired("DatabaseName")) + } + if s.DatabaseName != nil && len(*s.DatabaseName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("DatabaseName", 1)) + } + if s.TableName == nil { + invalidParams.Add(request.NewErrParamRequired("TableName")) + } + if s.TableName != nil && len(*s.TableName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TableName", 1)) + } + if s.Type == nil { + invalidParams.Add(request.NewErrParamRequired("Type")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCatalogId sets the CatalogId field's value. +func (s *GetTableOptimizerInput) SetCatalogId(v string) *GetTableOptimizerInput { + s.CatalogId = &v + return s +} + +// SetDatabaseName sets the DatabaseName field's value. +func (s *GetTableOptimizerInput) SetDatabaseName(v string) *GetTableOptimizerInput { + s.DatabaseName = &v + return s +} + +// SetTableName sets the TableName field's value. +func (s *GetTableOptimizerInput) SetTableName(v string) *GetTableOptimizerInput { + s.TableName = &v + return s +} + +// SetType sets the Type field's value. +func (s *GetTableOptimizerInput) SetType(v string) *GetTableOptimizerInput { + s.Type = &v + return s +} + +type GetTableOptimizerOutput struct { + _ struct{} `type:"structure"` + + // The Catalog ID of the table. + CatalogId *string `min:"1" type:"string"` + + // The name of the database in the catalog in which the table resides. + DatabaseName *string `min:"1" type:"string"` + + // The name of the table. + TableName *string `min:"1" type:"string"` + + // The optimizer associated with the specified table. + TableOptimizer *TableOptimizer `type:"structure"` +} + +// String returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s GetTableOptimizerOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s GetTableOptimizerOutput) GoString() string { + return s.String() +} + +// SetCatalogId sets the CatalogId field's value. +func (s *GetTableOptimizerOutput) SetCatalogId(v string) *GetTableOptimizerOutput { + s.CatalogId = &v + return s +} + +// SetDatabaseName sets the DatabaseName field's value. 
+func (s *GetTableOptimizerOutput) SetDatabaseName(v string) *GetTableOptimizerOutput { + s.DatabaseName = &v + return s +} + +// SetTableName sets the TableName field's value. +func (s *GetTableOptimizerOutput) SetTableName(v string) *GetTableOptimizerOutput { + s.TableName = &v + return s +} + +// SetTableOptimizer sets the TableOptimizer field's value. +func (s *GetTableOptimizerOutput) SetTableOptimizer(v *TableOptimizer) *GetTableOptimizerOutput { + s.TableOptimizer = v + return s +} + type GetTableOutput struct { _ struct{} `type:"structure"` @@ -55194,6 +56497,189 @@ func (s *ListStatementsOutput) SetStatements(v []*Statement) *ListStatementsOutp return s } +type ListTableOptimizerRunsInput struct { + _ struct{} `type:"structure"` + + // The Catalog ID of the table. + // + // CatalogId is a required field + CatalogId *string `min:"1" type:"string" required:"true"` + + // The name of the database in the catalog in which the table resides. + // + // DatabaseName is a required field + DatabaseName *string `min:"1" type:"string" required:"true"` + + // The maximum number of optimizer runs to return on each call. + MaxResults *int64 `type:"integer"` + + // A continuation token, if this is a continuation call. + NextToken *string `type:"string"` + + // The name of the table. + // + // TableName is a required field + TableName *string `min:"1" type:"string" required:"true"` + + // The type of table optimizer. Currently, the only valid value is compaction. + // + // Type is a required field + Type *string `type:"string" required:"true" enum:"TableOptimizerType"` +} + +// String returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s ListTableOptimizerRunsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s ListTableOptimizerRunsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListTableOptimizerRunsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListTableOptimizerRunsInput"} + if s.CatalogId == nil { + invalidParams.Add(request.NewErrParamRequired("CatalogId")) + } + if s.CatalogId != nil && len(*s.CatalogId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("CatalogId", 1)) + } + if s.DatabaseName == nil { + invalidParams.Add(request.NewErrParamRequired("DatabaseName")) + } + if s.DatabaseName != nil && len(*s.DatabaseName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("DatabaseName", 1)) + } + if s.TableName == nil { + invalidParams.Add(request.NewErrParamRequired("TableName")) + } + if s.TableName != nil && len(*s.TableName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TableName", 1)) + } + if s.Type == nil { + invalidParams.Add(request.NewErrParamRequired("Type")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCatalogId sets the CatalogId field's value. +func (s *ListTableOptimizerRunsInput) SetCatalogId(v string) *ListTableOptimizerRunsInput { + s.CatalogId = &v + return s +} + +// SetDatabaseName sets the DatabaseName field's value. 
+func (s *ListTableOptimizerRunsInput) SetDatabaseName(v string) *ListTableOptimizerRunsInput { + s.DatabaseName = &v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *ListTableOptimizerRunsInput) SetMaxResults(v int64) *ListTableOptimizerRunsInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListTableOptimizerRunsInput) SetNextToken(v string) *ListTableOptimizerRunsInput { + s.NextToken = &v + return s +} + +// SetTableName sets the TableName field's value. +func (s *ListTableOptimizerRunsInput) SetTableName(v string) *ListTableOptimizerRunsInput { + s.TableName = &v + return s +} + +// SetType sets the Type field's value. +func (s *ListTableOptimizerRunsInput) SetType(v string) *ListTableOptimizerRunsInput { + s.Type = &v + return s +} + +type ListTableOptimizerRunsOutput struct { + _ struct{} `type:"structure"` + + // The Catalog ID of the table. + CatalogId *string `min:"1" type:"string"` + + // The name of the database in the catalog in which the table resides. + DatabaseName *string `min:"1" type:"string"` + + // A continuation token for paginating the returned list of optimizer runs, + // returned if the current segment of the list is not the last. + NextToken *string `type:"string"` + + // The name of the table. + TableName *string `min:"1" type:"string"` + + // A list of the optimizer runs associated with a table. + TableOptimizerRuns []*TableOptimizerRun `type:"list"` +} + +// String returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s ListTableOptimizerRunsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s ListTableOptimizerRunsOutput) GoString() string { + return s.String() +} + +// SetCatalogId sets the CatalogId field's value. +func (s *ListTableOptimizerRunsOutput) SetCatalogId(v string) *ListTableOptimizerRunsOutput { + s.CatalogId = &v + return s +} + +// SetDatabaseName sets the DatabaseName field's value. +func (s *ListTableOptimizerRunsOutput) SetDatabaseName(v string) *ListTableOptimizerRunsOutput { + s.DatabaseName = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListTableOptimizerRunsOutput) SetNextToken(v string) *ListTableOptimizerRunsOutput { + s.NextToken = &v + return s +} + +// SetTableName sets the TableName field's value. +func (s *ListTableOptimizerRunsOutput) SetTableName(v string) *ListTableOptimizerRunsOutput { + s.TableName = &v + return s +} + +// SetTableOptimizerRuns sets the TableOptimizerRuns field's value. +func (s *ListTableOptimizerRunsOutput) SetTableOptimizerRuns(v []*TableOptimizerRun) *ListTableOptimizerRunsOutput { + s.TableOptimizerRuns = v + return s +} + type ListTriggersInput struct { _ struct{} `type:"structure"` @@ -60636,6 +62122,65 @@ func (s *ResumeWorkflowRunOutput) SetRunId(v string) *ResumeWorkflowRunOutput { return s } +// Metrics for the optimizer run. +type RunMetrics struct { + _ struct{} `type:"structure"` + + // The duration of the job in hours. 
+ JobDurationInHour *string `type:"string"` + + // The number of bytes removed by the compaction job run. + NumberOfBytesCompacted *string `type:"string"` + + // The number of DPU hours consumed by the job. + NumberOfDpus *string `type:"string"` + + // The number of files removed by the compaction job run. + NumberOfFilesCompacted *string `type:"string"` +} + +// String returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s RunMetrics) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s RunMetrics) GoString() string { + return s.String() +} + +// SetJobDurationInHour sets the JobDurationInHour field's value. +func (s *RunMetrics) SetJobDurationInHour(v string) *RunMetrics { + s.JobDurationInHour = &v + return s +} + +// SetNumberOfBytesCompacted sets the NumberOfBytesCompacted field's value. +func (s *RunMetrics) SetNumberOfBytesCompacted(v string) *RunMetrics { + s.NumberOfBytesCompacted = &v + return s +} + +// SetNumberOfDpus sets the NumberOfDpus field's value. +func (s *RunMetrics) SetNumberOfDpus(v string) *RunMetrics { + s.NumberOfDpus = &v + return s +} + +// SetNumberOfFilesCompacted sets the NumberOfFilesCompacted field's value. +func (s *RunMetrics) SetNumberOfFilesCompacted(v string) *RunMetrics { + s.NumberOfFilesCompacted = &v + return s +} + type RunStatementInput struct { _ struct{} `type:"structure"` @@ -68706,6 +70251,182 @@ func (s *TableInput) SetViewOriginalText(v string) *TableInput { return s } +// Contains details about an optimizer associated with a table. +type TableOptimizer struct { + _ struct{} `type:"structure"` + + // A TableOptimizerConfiguration object that was specified when creating or + // updating a table optimizer. + Configuration *TableOptimizerConfiguration `locationName:"configuration" type:"structure"` + + // A TableOptimizerRun object representing the last run of the table optimizer. + LastRun *TableOptimizerRun `locationName:"lastRun" type:"structure"` + + // The type of table optimizer. Currently, the only valid value is compaction. + Type *string `locationName:"type" type:"string" enum:"TableOptimizerType"` +} + +// String returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s TableOptimizer) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s TableOptimizer) GoString() string { + return s.String() +} + +// SetConfiguration sets the Configuration field's value. +func (s *TableOptimizer) SetConfiguration(v *TableOptimizerConfiguration) *TableOptimizer { + s.Configuration = v + return s +} + +// SetLastRun sets the LastRun field's value. 
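+
+// A short paging sketch for the new ListTableOptimizerRuns API and the
+// RunMetrics shape above, reusing the placeholder identifiers and the
+// *glue.Glue client (svc) from the GetTableOptimizer sketch.
+//
+//	input := &glue.ListTableOptimizerRunsInput{
+//		CatalogId:    aws.String("123456789012"), // placeholder
+//		DatabaseName: aws.String("my_database"),
+//		TableName:    aws.String("my_iceberg_table"),
+//		Type:         aws.String(glue.TableOptimizerTypeCompaction),
+//	}
+//	err := svc.ListTableOptimizerRunsPages(input,
+//		func(page *glue.ListTableOptimizerRunsOutput, lastPage bool) bool {
+//			for _, run := range page.TableOptimizerRuns {
+//				if run.Metrics != nil {
+//					// Metric values are returned by the service as strings.
+//					fmt.Println(aws.StringValue(run.Metrics.NumberOfFilesCompacted))
+//				}
+//			}
+//			return true // keep requesting pages until the list is exhausted
+//		})
+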
+func (s *TableOptimizer) SetLastRun(v *TableOptimizerRun) *TableOptimizer {
+	s.LastRun = v
+	return s
+}
+
+// SetType sets the Type field's value.
+func (s *TableOptimizer) SetType(v string) *TableOptimizer {
+	s.Type = &v
+	return s
+}
+
+// Contains details on the configuration of a table optimizer. You pass this
+// configuration when creating or updating a table optimizer.
+type TableOptimizerConfiguration struct {
+	_ struct{} `type:"structure"`
+
+	// Whether table optimization is enabled.
+	Enabled *bool `locationName:"enabled" type:"boolean"`
+
+	// A role passed by the caller which gives the service permission to update
+	// the resources associated with the optimizer on the caller's behalf.
+	RoleArn *string `locationName:"roleArn" min:"20" type:"string"`
+}
+
+// String returns the string representation.
+//
+// API parameter values that are decorated as "sensitive" in the API will not
+// be included in the string output. The member name will be present, but the
+// value will be replaced with "sensitive".
+func (s TableOptimizerConfiguration) String() string {
+	return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation.
+//
+// API parameter values that are decorated as "sensitive" in the API will not
+// be included in the string output. The member name will be present, but the
+// value will be replaced with "sensitive".
+func (s TableOptimizerConfiguration) GoString() string {
+	return s.String()
+}
+
+// Validate inspects the fields of the type to determine if they are valid.
+func (s *TableOptimizerConfiguration) Validate() error {
+	invalidParams := request.ErrInvalidParams{Context: "TableOptimizerConfiguration"}
+	if s.RoleArn != nil && len(*s.RoleArn) < 20 {
+		invalidParams.Add(request.NewErrParamMinLen("RoleArn", 20))
+	}
+
+	if invalidParams.Len() > 0 {
+		return invalidParams
+	}
+	return nil
+}
+
+// SetEnabled sets the Enabled field's value.
+func (s *TableOptimizerConfiguration) SetEnabled(v bool) *TableOptimizerConfiguration {
+	s.Enabled = &v
+	return s
+}
+
+// SetRoleArn sets the RoleArn field's value.
+func (s *TableOptimizerConfiguration) SetRoleArn(v string) *TableOptimizerConfiguration {
+	s.RoleArn = &v
+	return s
+}
+
+// Contains details for a table optimizer run.
+type TableOptimizerRun struct {
+	_ struct{} `type:"structure"`
+
+	// Represents the epoch timestamp at which the compaction job ended.
+	EndTimestamp *time.Time `locationName:"endTimestamp" type:"timestamp"`
+
+	// An error that occurred during the optimizer run.
+	Error *string `locationName:"error" type:"string"`
+
+	// An event type representing the status of the table optimizer run.
+	EventType *string `locationName:"eventType" type:"string" enum:"TableOptimizerEventType"`
+
+	// A RunMetrics object containing metrics for the optimizer run.
+	Metrics *RunMetrics `locationName:"metrics" type:"structure"`
+
+	// Represents the epoch timestamp at which the compaction job was started within
+	// Lake Formation.
+	StartTimestamp *time.Time `locationName:"startTimestamp" type:"timestamp"`
+}
+
+// String returns the string representation.
+//
+// API parameter values that are decorated as "sensitive" in the API will not
+// be included in the string output. The member name will be present, but the
+// value will be replaced with "sensitive".
+func (s TableOptimizerRun) String() string {
+	return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation.
+// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s TableOptimizerRun) GoString() string { + return s.String() +} + +// SetEndTimestamp sets the EndTimestamp field's value. +func (s *TableOptimizerRun) SetEndTimestamp(v time.Time) *TableOptimizerRun { + s.EndTimestamp = &v + return s +} + +// SetError sets the Error field's value. +func (s *TableOptimizerRun) SetError(v string) *TableOptimizerRun { + s.Error = &v + return s +} + +// SetEventType sets the EventType field's value. +func (s *TableOptimizerRun) SetEventType(v string) *TableOptimizerRun { + s.EventType = &v + return s +} + +// SetMetrics sets the Metrics field's value. +func (s *TableOptimizerRun) SetMetrics(v *RunMetrics) *TableOptimizerRun { + s.Metrics = v + return s +} + +// SetStartTimestamp sets the StartTimestamp field's value. +func (s *TableOptimizerRun) SetStartTimestamp(v time.Time) *TableOptimizerRun { + s.StartTimestamp = &v + return s +} + // Specifies a version of a table. type TableVersion struct { _ struct{} `type:"structure"` @@ -72804,6 +74525,145 @@ func (s *UpdateTableInput) SetVersionId(v string) *UpdateTableInput { return s } +type UpdateTableOptimizerInput struct { + _ struct{} `type:"structure"` + + // The Catalog ID of the table. + // + // CatalogId is a required field + CatalogId *string `min:"1" type:"string" required:"true"` + + // The name of the database in the catalog in which the table resides. + // + // DatabaseName is a required field + DatabaseName *string `min:"1" type:"string" required:"true"` + + // The name of the table. + // + // TableName is a required field + TableName *string `min:"1" type:"string" required:"true"` + + // A TableOptimizerConfiguration object representing the configuration of a + // table optimizer. + // + // TableOptimizerConfiguration is a required field + TableOptimizerConfiguration *TableOptimizerConfiguration `type:"structure" required:"true"` + + // The type of table optimizer. Currently, the only valid value is compaction. + // + // Type is a required field + Type *string `type:"string" required:"true" enum:"TableOptimizerType"` +} + +// String returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s UpdateTableOptimizerInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s UpdateTableOptimizerInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *UpdateTableOptimizerInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateTableOptimizerInput"} + if s.CatalogId == nil { + invalidParams.Add(request.NewErrParamRequired("CatalogId")) + } + if s.CatalogId != nil && len(*s.CatalogId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("CatalogId", 1)) + } + if s.DatabaseName == nil { + invalidParams.Add(request.NewErrParamRequired("DatabaseName")) + } + if s.DatabaseName != nil && len(*s.DatabaseName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("DatabaseName", 1)) + } + if s.TableName == nil { + invalidParams.Add(request.NewErrParamRequired("TableName")) + } + if s.TableName != nil && len(*s.TableName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TableName", 1)) + } + if s.TableOptimizerConfiguration == nil { + invalidParams.Add(request.NewErrParamRequired("TableOptimizerConfiguration")) + } + if s.Type == nil { + invalidParams.Add(request.NewErrParamRequired("Type")) + } + if s.TableOptimizerConfiguration != nil { + if err := s.TableOptimizerConfiguration.Validate(); err != nil { + invalidParams.AddNested("TableOptimizerConfiguration", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCatalogId sets the CatalogId field's value. +func (s *UpdateTableOptimizerInput) SetCatalogId(v string) *UpdateTableOptimizerInput { + s.CatalogId = &v + return s +} + +// SetDatabaseName sets the DatabaseName field's value. +func (s *UpdateTableOptimizerInput) SetDatabaseName(v string) *UpdateTableOptimizerInput { + s.DatabaseName = &v + return s +} + +// SetTableName sets the TableName field's value. +func (s *UpdateTableOptimizerInput) SetTableName(v string) *UpdateTableOptimizerInput { + s.TableName = &v + return s +} + +// SetTableOptimizerConfiguration sets the TableOptimizerConfiguration field's value. +func (s *UpdateTableOptimizerInput) SetTableOptimizerConfiguration(v *TableOptimizerConfiguration) *UpdateTableOptimizerInput { + s.TableOptimizerConfiguration = v + return s +} + +// SetType sets the Type field's value. +func (s *UpdateTableOptimizerInput) SetType(v string) *UpdateTableOptimizerInput { + s.Type = &v + return s +} + +type UpdateTableOptimizerOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s UpdateTableOptimizerOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". 
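+
+// A hedged sketch of enabling compaction through UpdateTableOptimizer,
+// reusing the placeholder identifiers from the earlier sketches; the role ARN
+// is a placeholder for a role the service can assume to update resources on
+// your behalf, as described for TableOptimizerConfiguration above.
+//
+//	_, err := svc.UpdateTableOptimizer(&glue.UpdateTableOptimizerInput{
+//		CatalogId:    aws.String("123456789012"), // placeholder
+//		DatabaseName: aws.String("my_database"),
+//		TableName:    aws.String("my_iceberg_table"),
+//		Type:         aws.String(glue.TableOptimizerTypeCompaction),
+//		TableOptimizerConfiguration: &glue.TableOptimizerConfiguration{
+//			Enabled: aws.Bool(true),
+//			RoleArn: aws.String("arn:aws:iam::123456789012:role/GlueCompactionRole"), // placeholder
+//		},
+//	})
+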
+func (s UpdateTableOptimizerOutput) GoString() string { + return s.String() +} + type UpdateTableOutput struct { _ struct{} `type:"structure"` } @@ -76085,6 +77945,42 @@ func StatementState_Values() []string { } } +const ( + // TableOptimizerEventTypeStarting is a TableOptimizerEventType enum value + TableOptimizerEventTypeStarting = "starting" + + // TableOptimizerEventTypeCompleted is a TableOptimizerEventType enum value + TableOptimizerEventTypeCompleted = "completed" + + // TableOptimizerEventTypeFailed is a TableOptimizerEventType enum value + TableOptimizerEventTypeFailed = "failed" + + // TableOptimizerEventTypeInProgress is a TableOptimizerEventType enum value + TableOptimizerEventTypeInProgress = "in_progress" +) + +// TableOptimizerEventType_Values returns all elements of the TableOptimizerEventType enum +func TableOptimizerEventType_Values() []string { + return []string{ + TableOptimizerEventTypeStarting, + TableOptimizerEventTypeCompleted, + TableOptimizerEventTypeFailed, + TableOptimizerEventTypeInProgress, + } +} + +const ( + // TableOptimizerTypeCompaction is a TableOptimizerType enum value + TableOptimizerTypeCompaction = "compaction" +) + +// TableOptimizerType_Values returns all elements of the TableOptimizerType enum +func TableOptimizerType_Values() []string { + return []string{ + TableOptimizerTypeCompaction, + } +} + const ( // TargetFormatJson is a TargetFormat enum value TargetFormatJson = "json" diff --git a/service/glue/glueiface/interface.go b/service/glue/glueiface/interface.go index edafbd629cd..99aa7bfc192 100644 --- a/service/glue/glueiface/interface.go +++ b/service/glue/glueiface/interface.go @@ -108,6 +108,10 @@ type GlueAPI interface { BatchGetPartitionWithContext(aws.Context, *glue.BatchGetPartitionInput, ...request.Option) (*glue.BatchGetPartitionOutput, error) BatchGetPartitionRequest(*glue.BatchGetPartitionInput) (*request.Request, *glue.BatchGetPartitionOutput) + BatchGetTableOptimizer(*glue.BatchGetTableOptimizerInput) (*glue.BatchGetTableOptimizerOutput, error) + BatchGetTableOptimizerWithContext(aws.Context, *glue.BatchGetTableOptimizerInput, ...request.Option) (*glue.BatchGetTableOptimizerOutput, error) + BatchGetTableOptimizerRequest(*glue.BatchGetTableOptimizerInput) (*request.Request, *glue.BatchGetTableOptimizerOutput) + BatchGetTriggers(*glue.BatchGetTriggersInput) (*glue.BatchGetTriggersOutput, error) BatchGetTriggersWithContext(aws.Context, *glue.BatchGetTriggersInput, ...request.Option) (*glue.BatchGetTriggersOutput, error) BatchGetTriggersRequest(*glue.BatchGetTriggersInput) (*request.Request, *glue.BatchGetTriggersOutput) @@ -216,6 +220,10 @@ type GlueAPI interface { CreateTableWithContext(aws.Context, *glue.CreateTableInput, ...request.Option) (*glue.CreateTableOutput, error) CreateTableRequest(*glue.CreateTableInput) (*request.Request, *glue.CreateTableOutput) + CreateTableOptimizer(*glue.CreateTableOptimizerInput) (*glue.CreateTableOptimizerOutput, error) + CreateTableOptimizerWithContext(aws.Context, *glue.CreateTableOptimizerInput, ...request.Option) (*glue.CreateTableOptimizerOutput, error) + CreateTableOptimizerRequest(*glue.CreateTableOptimizerInput) (*request.Request, *glue.CreateTableOptimizerOutput) + CreateTrigger(*glue.CreateTriggerInput) (*glue.CreateTriggerOutput, error) CreateTriggerWithContext(aws.Context, *glue.CreateTriggerInput, ...request.Option) (*glue.CreateTriggerOutput, error) CreateTriggerRequest(*glue.CreateTriggerInput) (*request.Request, *glue.CreateTriggerOutput) @@ -312,6 +320,10 @@ type GlueAPI interface 
{ DeleteTableWithContext(aws.Context, *glue.DeleteTableInput, ...request.Option) (*glue.DeleteTableOutput, error) DeleteTableRequest(*glue.DeleteTableInput) (*request.Request, *glue.DeleteTableOutput) + DeleteTableOptimizer(*glue.DeleteTableOptimizerInput) (*glue.DeleteTableOptimizerOutput, error) + DeleteTableOptimizerWithContext(aws.Context, *glue.DeleteTableOptimizerInput, ...request.Option) (*glue.DeleteTableOptimizerOutput, error) + DeleteTableOptimizerRequest(*glue.DeleteTableOptimizerInput) (*request.Request, *glue.DeleteTableOptimizerOutput) + DeleteTableVersion(*glue.DeleteTableVersionInput) (*glue.DeleteTableVersionOutput, error) DeleteTableVersionWithContext(aws.Context, *glue.DeleteTableVersionInput, ...request.Option) (*glue.DeleteTableVersionOutput, error) DeleteTableVersionRequest(*glue.DeleteTableVersionInput) (*request.Request, *glue.DeleteTableVersionOutput) @@ -573,6 +585,10 @@ type GlueAPI interface { GetTableWithContext(aws.Context, *glue.GetTableInput, ...request.Option) (*glue.GetTableOutput, error) GetTableRequest(*glue.GetTableInput) (*request.Request, *glue.GetTableOutput) + GetTableOptimizer(*glue.GetTableOptimizerInput) (*glue.GetTableOptimizerOutput, error) + GetTableOptimizerWithContext(aws.Context, *glue.GetTableOptimizerInput, ...request.Option) (*glue.GetTableOptimizerOutput, error) + GetTableOptimizerRequest(*glue.GetTableOptimizerInput) (*request.Request, *glue.GetTableOptimizerOutput) + GetTableVersion(*glue.GetTableVersionInput) (*glue.GetTableVersionOutput, error) GetTableVersionWithContext(aws.Context, *glue.GetTableVersionInput, ...request.Option) (*glue.GetTableVersionOutput, error) GetTableVersionRequest(*glue.GetTableVersionInput) (*request.Request, *glue.GetTableVersionOutput) @@ -761,6 +777,13 @@ type GlueAPI interface { ListStatementsWithContext(aws.Context, *glue.ListStatementsInput, ...request.Option) (*glue.ListStatementsOutput, error) ListStatementsRequest(*glue.ListStatementsInput) (*request.Request, *glue.ListStatementsOutput) + ListTableOptimizerRuns(*glue.ListTableOptimizerRunsInput) (*glue.ListTableOptimizerRunsOutput, error) + ListTableOptimizerRunsWithContext(aws.Context, *glue.ListTableOptimizerRunsInput, ...request.Option) (*glue.ListTableOptimizerRunsOutput, error) + ListTableOptimizerRunsRequest(*glue.ListTableOptimizerRunsInput) (*request.Request, *glue.ListTableOptimizerRunsOutput) + + ListTableOptimizerRunsPages(*glue.ListTableOptimizerRunsInput, func(*glue.ListTableOptimizerRunsOutput, bool) bool) error + ListTableOptimizerRunsPagesWithContext(aws.Context, *glue.ListTableOptimizerRunsInput, func(*glue.ListTableOptimizerRunsOutput, bool) bool, ...request.Option) error + ListTriggers(*glue.ListTriggersInput) (*glue.ListTriggersOutput, error) ListTriggersWithContext(aws.Context, *glue.ListTriggersInput, ...request.Option) (*glue.ListTriggersOutput, error) ListTriggersRequest(*glue.ListTriggersInput) (*request.Request, *glue.ListTriggersOutput) @@ -970,6 +993,10 @@ type GlueAPI interface { UpdateTableWithContext(aws.Context, *glue.UpdateTableInput, ...request.Option) (*glue.UpdateTableOutput, error) UpdateTableRequest(*glue.UpdateTableInput) (*request.Request, *glue.UpdateTableOutput) + UpdateTableOptimizer(*glue.UpdateTableOptimizerInput) (*glue.UpdateTableOptimizerOutput, error) + UpdateTableOptimizerWithContext(aws.Context, *glue.UpdateTableOptimizerInput, ...request.Option) (*glue.UpdateTableOptimizerOutput, error) + UpdateTableOptimizerRequest(*glue.UpdateTableOptimizerInput) (*request.Request, 
*glue.UpdateTableOptimizerOutput) + UpdateTrigger(*glue.UpdateTriggerInput) (*glue.UpdateTriggerOutput, error) UpdateTriggerWithContext(aws.Context, *glue.UpdateTriggerInput, ...request.Option) (*glue.UpdateTriggerOutput, error) UpdateTriggerRequest(*glue.UpdateTriggerInput) (*request.Request, *glue.UpdateTriggerOutput) diff --git a/service/iot/api.go b/service/iot/api.go index ec893f741c5..0b41061f7fa 100644 --- a/service/iot/api.go +++ b/service/iot/api.go @@ -4098,6 +4098,9 @@ func (c *IoT) CreateThingGroupRequest(input *CreateThingGroupInput) (req *reques // This is a control plane operation. See Authorization (https://docs.aws.amazon.com/iot/latest/developerguide/iot-authorization.html) // for information about authorizing control plane actions. // +// If the ThingGroup that you create has the exact same attributes as an existing +// ThingGroup, you will get a 200 success response. +// // Requires permission to access the CreateThingGroup (https://docs.aws.amazon.com/service-authorization/latest/reference/list_awsiot.html#awsiot-actions-as-permissions) // action. // @@ -29758,6 +29761,9 @@ type Behavior struct { // when IoT Device Defender detects that a device is behaving anomalously. Criteria *BehaviorCriteria `locationName:"criteria" type:"structure"` + // Value indicates exporting metrics related to the behavior when it is true. + ExportMetric *bool `locationName:"exportMetric" type:"boolean"` + // What is measured by the behavior. Metric *string `locationName:"metric" type:"string"` @@ -29826,6 +29832,12 @@ func (s *Behavior) SetCriteria(v *BehaviorCriteria) *Behavior { return s } +// SetExportMetric sets the ExportMetric field's value. +func (s *Behavior) SetExportMetric(v bool) *Behavior { + s.ExportMetric = &v + return s +} + // SetMetric sets the Metric field's value. func (s *Behavior) SetMetric(v string) *Behavior { s.Metric = &v @@ -36130,6 +36142,9 @@ type CreateSecurityProfileInput struct { // alert. Behaviors []*Behavior `locationName:"behaviors" type:"list"` + // Specifies the MQTT topic and role ARN required for metric export. + MetricsExportConfig *MetricsExportConfig `locationName:"metricsExportConfig" type:"structure"` + // A description of the security profile. SecurityProfileDescription *string `locationName:"securityProfileDescription" type:"string"` @@ -36199,6 +36214,11 @@ func (s *CreateSecurityProfileInput) Validate() error { } } } + if s.MetricsExportConfig != nil { + if err := s.MetricsExportConfig.Validate(); err != nil { + invalidParams.AddNested("MetricsExportConfig", err.(request.ErrInvalidParams)) + } + } if s.Tags != nil { for i, v := range s.Tags { if v == nil { @@ -36240,6 +36260,12 @@ func (s *CreateSecurityProfileInput) SetBehaviors(v []*Behavior) *CreateSecurity return s } +// SetMetricsExportConfig sets the MetricsExportConfig field's value. +func (s *CreateSecurityProfileInput) SetMetricsExportConfig(v *MetricsExportConfig) *CreateSecurityProfileInput { + s.MetricsExportConfig = v + return s +} + // SetSecurityProfileDescription sets the SecurityProfileDescription field's value. func (s *CreateSecurityProfileInput) SetSecurityProfileDescription(v string) *CreateSecurityProfileInput { s.SecurityProfileDescription = &v @@ -43157,6 +43183,9 @@ type DescribeSecurityProfileOutput struct { // The time the security profile was last modified. LastModifiedDate *time.Time `locationName:"lastModifiedDate" type:"timestamp"` + // Specifies the MQTT topic and role ARN required for metric export. 
+ MetricsExportConfig *MetricsExportConfig `locationName:"metricsExportConfig" type:"structure"` + // The ARN of the security profile. SecurityProfileArn *string `locationName:"securityProfileArn" type:"string"` @@ -43226,6 +43255,12 @@ func (s *DescribeSecurityProfileOutput) SetLastModifiedDate(v time.Time) *Descri return s } +// SetMetricsExportConfig sets the MetricsExportConfig field's value. +func (s *DescribeSecurityProfileOutput) SetMetricsExportConfig(v *MetricsExportConfig) *DescribeSecurityProfileOutput { + s.MetricsExportConfig = v + return s +} + // SetSecurityProfileArn sets the SecurityProfileArn field's value. func (s *DescribeSecurityProfileOutput) SetSecurityProfileArn(v string) *DescribeSecurityProfileOutput { s.SecurityProfileArn = &v @@ -58169,6 +58204,10 @@ func (s *MetricDimension) SetOperator(v string) *MetricDimension { type MetricToRetain struct { _ struct{} `type:"structure"` + // Value added in both Behavior and AdditionalMetricsToRetainV2 to indicate + // if Device Defender Detect should export the corresponding metrics. + ExportMetric *bool `locationName:"exportMetric" type:"boolean"` + // What is measured by the behavior. // // Metric is a required field @@ -58214,6 +58253,12 @@ func (s *MetricToRetain) Validate() error { return nil } +// SetExportMetric sets the ExportMetric field's value. +func (s *MetricToRetain) SetExportMetric(v bool) *MetricToRetain { + s.ExportMetric = &v + return s +} + // SetMetric sets the Metric field's value. func (s *MetricToRetain) SetMetric(v string) *MetricToRetain { s.Metric = &v @@ -58306,6 +58351,75 @@ func (s *MetricValue) SetStrings(v []*string) *MetricValue { return s } +// Set configurations for metrics export. +type MetricsExportConfig struct { + _ struct{} `type:"structure"` + + // The MQTT topic that Device Defender Detect should publish messages to for + // metrics export. + // + // MqttTopic is a required field + MqttTopic *string `locationName:"mqttTopic" min:"1" type:"string" required:"true"` + + // This role ARN has permission to publish MQTT messages, after which Device + // Defender Detect can assume the role and publish messages on your behalf. + // + // RoleArn is a required field + RoleArn *string `locationName:"roleArn" min:"20" type:"string" required:"true"` +} + +// String returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s MetricsExportConfig) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s MetricsExportConfig) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
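+
+// A minimal sketch of enabling Detect metrics export on an existing security
+// profile using the new MetricsExportConfig and ExportMetric attributes; the
+// profile name, behavior, MQTT topic, and role ARN are all placeholders, and
+// the Behavior and BehaviorCriteria fields shown are the existing iot types.
+//
+//	svc := iot.New(session.Must(session.NewSession()))
+//	_, err := svc.UpdateSecurityProfile(&iot.UpdateSecurityProfileInput{
+//		SecurityProfileName: aws.String("my-security-profile"), // placeholder
+//		MetricsExportConfig: &iot.MetricsExportConfig{
+//			MqttTopic: aws.String("$aws/rules/metricsExportRule"), // placeholder topic
+//			RoleArn:   aws.String("arn:aws:iam::123456789012:role/DeviceDefenderExportRole"), // placeholder
+//		},
+//		Behaviors: []*iot.Behavior{{
+//			Name:         aws.String("high-message-count"), // placeholder behavior
+//			Metric:       aws.String("aws:num-messages-sent"),
+//			ExportMetric: aws.Bool(true), // export this behavior's metric
+//			Criteria: &iot.BehaviorCriteria{
+//				ComparisonOperator: aws.String("greater-than"),
+//				Value:              &iot.MetricValue{Count: aws.Int64(100)},
+//				DurationSeconds:    aws.Int64(300),
+//			},
+//		}},
+//	})
+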
+func (s *MetricsExportConfig) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "MetricsExportConfig"} + if s.MqttTopic == nil { + invalidParams.Add(request.NewErrParamRequired("MqttTopic")) + } + if s.MqttTopic != nil && len(*s.MqttTopic) < 1 { + invalidParams.Add(request.NewErrParamMinLen("MqttTopic", 1)) + } + if s.RoleArn == nil { + invalidParams.Add(request.NewErrParamRequired("RoleArn")) + } + if s.RoleArn != nil && len(*s.RoleArn) < 20 { + invalidParams.Add(request.NewErrParamMinLen("RoleArn", 20)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMqttTopic sets the MqttTopic field's value. +func (s *MetricsExportConfig) SetMqttTopic(v string) *MetricsExportConfig { + s.MqttTopic = &v + return s +} + +// SetRoleArn sets the RoleArn field's value. +func (s *MetricsExportConfig) SetRoleArn(v string) *MetricsExportConfig { + s.RoleArn = &v + return s +} + // Describes which changes should be applied as part of a mitigation action. type MitigationAction struct { _ struct{} `type:"structure"` @@ -62474,7 +62588,8 @@ type SearchIndexInput struct { // The search index name. IndexName *string `locationName:"indexName" min:"1" type:"string"` - // The maximum number of results to return at one time. + // The maximum number of results to return at one time. The response might contain + // fewer results but will never contain more. MaxResults *int64 `locationName:"maxResults" min:"1" type:"integer"` // The token used to get the next set of results, or null if there are no additional @@ -65820,6 +65935,8 @@ type ThingGroupIndexingConfiguration struct { // Fleet Indexing service. This is an optional field. For more information, // see Managed fields (https://docs.aws.amazon.com/iot/latest/developerguide/managing-fleet-index.html#managed-field) // in the Amazon Web Services IoT Core Developer Guide. + // + // You can't modify managed fields by updating fleet indexing configuration. ManagedFields []*Field `locationName:"managedFields" type:"list"` // Thing group indexing mode. @@ -65995,7 +66112,11 @@ type ThingIndexingConfiguration struct { Filter *IndexingFilter `locationName:"filter" type:"structure"` // Contains fields that are indexed and whose types are already known by the - // Fleet Indexing service. + // Fleet Indexing service. This is an optional field. For more information, + // see Managed fields (https://docs.aws.amazon.com/iot/latest/developerguide/managing-fleet-index.html#managed-field) + // in the Amazon Web Services IoT Core Developer Guide. + // + // You can't modify managed fields by updating fleet indexing configuration. ManagedFields []*Field `locationName:"managedFields" type:"list"` // Named shadow indexing mode. Valid values are: @@ -70427,11 +70548,17 @@ type UpdateSecurityProfileInput struct { // are defined in the current invocation, an exception occurs. DeleteBehaviors *bool `locationName:"deleteBehaviors" type:"boolean"` + // Set the value as true to delete metrics export related configurations. + DeleteMetricsExportConfig *bool `locationName:"deleteMetricsExportConfig" type:"boolean"` + // The expected version of the security profile. A new version is generated // whenever the security profile is updated. If you specify a value that is // different from the actual version, a VersionConflictException is thrown. ExpectedVersion *int64 `location:"querystring" locationName:"expectedVersion" type:"long"` + // Specifies the MQTT topic and role ARN required for metric export. 
+ MetricsExportConfig *MetricsExportConfig `locationName:"metricsExportConfig" type:"structure"` + // A description of the security profile. SecurityProfileDescription *string `locationName:"securityProfileDescription" type:"string"` @@ -70498,6 +70625,11 @@ func (s *UpdateSecurityProfileInput) Validate() error { } } } + if s.MetricsExportConfig != nil { + if err := s.MetricsExportConfig.Validate(); err != nil { + invalidParams.AddNested("MetricsExportConfig", err.(request.ErrInvalidParams)) + } + } if invalidParams.Len() > 0 { return invalidParams @@ -70547,12 +70679,24 @@ func (s *UpdateSecurityProfileInput) SetDeleteBehaviors(v bool) *UpdateSecurityP return s } +// SetDeleteMetricsExportConfig sets the DeleteMetricsExportConfig field's value. +func (s *UpdateSecurityProfileInput) SetDeleteMetricsExportConfig(v bool) *UpdateSecurityProfileInput { + s.DeleteMetricsExportConfig = &v + return s +} + // SetExpectedVersion sets the ExpectedVersion field's value. func (s *UpdateSecurityProfileInput) SetExpectedVersion(v int64) *UpdateSecurityProfileInput { s.ExpectedVersion = &v return s } +// SetMetricsExportConfig sets the MetricsExportConfig field's value. +func (s *UpdateSecurityProfileInput) SetMetricsExportConfig(v *MetricsExportConfig) *UpdateSecurityProfileInput { + s.MetricsExportConfig = v + return s +} + // SetSecurityProfileDescription sets the SecurityProfileDescription field's value. func (s *UpdateSecurityProfileInput) SetSecurityProfileDescription(v string) *UpdateSecurityProfileInput { s.SecurityProfileDescription = &v @@ -70596,6 +70740,9 @@ type UpdateSecurityProfileOutput struct { // The time the security profile was last modified. LastModifiedDate *time.Time `locationName:"lastModifiedDate" type:"timestamp"` + // Specifies the MQTT topic and role ARN required for metric export. + MetricsExportConfig *MetricsExportConfig `locationName:"metricsExportConfig" type:"structure"` + // The ARN of the security profile that was updated. SecurityProfileArn *string `locationName:"securityProfileArn" type:"string"` @@ -70663,6 +70810,12 @@ func (s *UpdateSecurityProfileOutput) SetLastModifiedDate(v time.Time) *UpdateSe return s } +// SetMetricsExportConfig sets the MetricsExportConfig field's value. +func (s *UpdateSecurityProfileOutput) SetMetricsExportConfig(v *MetricsExportConfig) *UpdateSecurityProfileOutput { + s.MetricsExportConfig = v + return s +} + // SetSecurityProfileArn sets the SecurityProfileArn field's value. func (s *UpdateSecurityProfileOutput) SetSecurityProfileArn(v string) *UpdateSecurityProfileOutput { s.SecurityProfileArn = &v diff --git a/service/lambda/api.go b/service/lambda/api.go index 35a667e037c..79c1d25b94e 100644 --- a/service/lambda/api.go +++ b/service/lambda/api.go @@ -23014,6 +23014,9 @@ const ( // RuntimeProvidedAl2023 is a Runtime enum value RuntimeProvidedAl2023 = "provided.al2023" + + // RuntimePython312 is a Runtime enum value + RuntimePython312 = "python3.12" ) // Runtime_Values returns all elements of the Runtime enum @@ -23053,6 +23056,7 @@ func Runtime_Values() []string { RuntimePython311, RuntimeNodejs20X, RuntimeProvidedAl2023, + RuntimePython312, } } diff --git a/service/pipes/api.go b/service/pipes/api.go index b21f10b7643..02c5daf7eeb 100644 --- a/service/pipes/api.go +++ b/service/pipes/api.go @@ -931,14 +931,15 @@ func (c *Pipes) UpdatePipeRequest(input *UpdatePipeInput) (req *request.Request, // UpdatePipe API operation for Amazon EventBridge Pipes. // -// Update an existing pipe. 
When you call UpdatePipe, only the fields that are
-// included in the request are changed, the rest are unchanged. The exception
-// to this is if you modify any Amazon Web Services-service specific fields
-// in the SourceParameters, EnrichmentParameters, or TargetParameters objects.
-// The fields in these objects are updated atomically as one and override existing
-// values. This is by design and means that if you don't specify an optional
-// field in one of these Parameters objects, that field will be set to its system-default
-// value after the update.
+// Update an existing pipe. When you call UpdatePipe, EventBridge only updates
+// the fields you have specified in the request; the rest remain unchanged. The
+// exception to this is if you modify any Amazon Web Services-service specific
+// fields in the SourceParameters, EnrichmentParameters, or TargetParameters
+// objects. For example, DynamoDBStreamParameters or EventBridgeEventBusParameters.
+// EventBridge updates the fields in these objects atomically as one and overrides
+// existing values. This is by design and means that if you don't specify an
+// optional field in one of these Parameters objects, EventBridge sets that
+// field to its system-default value during the update.
 //
 // For more information about pipes, see Amazon EventBridge Pipes (https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-pipes.html)
 // in the Amazon EventBridge User Guide.
@@ -1623,6 +1624,90 @@ func (s *CapacityProviderStrategyItem) SetWeight(v int64) *CapacityProviderStrat
 	return s
 }
 
+// The Amazon CloudWatch Logs logging configuration settings for the pipe.
+type CloudwatchLogsLogDestination struct {
+	_ struct{} `type:"structure"`
+
+	// The Amazon Web Services Resource Name (ARN) for the CloudWatch log group
+	// to which EventBridge sends the log records.
+	LogGroupArn *string `min:"1" type:"string"`
+}
+
+// String returns the string representation.
+//
+// API parameter values that are decorated as "sensitive" in the API will not
+// be included in the string output. The member name will be present, but the
+// value will be replaced with "sensitive".
+func (s CloudwatchLogsLogDestination) String() string {
+	return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation.
+//
+// API parameter values that are decorated as "sensitive" in the API will not
+// be included in the string output. The member name will be present, but the
+// value will be replaced with "sensitive".
+func (s CloudwatchLogsLogDestination) GoString() string {
+	return s.String()
+}
+
+// SetLogGroupArn sets the LogGroupArn field's value.
+func (s *CloudwatchLogsLogDestination) SetLogGroupArn(v string) *CloudwatchLogsLogDestination {
+	s.LogGroupArn = &v
+	return s
+}
+
+// The Amazon CloudWatch Logs logging configuration settings for the pipe.
+type CloudwatchLogsLogDestinationParameters struct {
+	_ struct{} `type:"structure"`
+
+	// The Amazon Web Services Resource Name (ARN) for the CloudWatch log group
+	// to which EventBridge sends the log records.
+	//
+	// LogGroupArn is a required field
+	LogGroupArn *string `min:"1" type:"string" required:"true"`
+}
+
+// String returns the string representation.
+//
+// API parameter values that are decorated as "sensitive" in the API will not
+// be included in the string output. The member name will be present, but the
+// value will be replaced with "sensitive".
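+
+// A hedged illustration of the update semantics described above: top-level
+// fields you omit are left as they are, but a Parameters object you do pass
+// replaces the existing object wholesale, so re-specify every field in it
+// that you want to keep. The pipe name and role ARN are placeholders.
+//
+//	svc := pipes.New(session.Must(session.NewSession()))
+//	_, err := svc.UpdatePipe(&pipes.UpdatePipeInput{
+//		Name:    aws.String("my-pipe"),                                   // placeholder
+//		RoleArn: aws.String("arn:aws:iam::123456789012:role/MyPipeRole"), // placeholder
+//		// TargetParameters is overridden as a whole; optional fields omitted
+//		// here fall back to their system defaults after the update.
+//		TargetParameters: &pipes.PipeTargetParameters{
+//			LambdaFunctionParameters: &pipes.PipeTargetLambdaFunctionParameters{
+//				InvocationType: aws.String("FIRE_AND_FORGET"),
+//			},
+//		},
+//	})
+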
+func (s CloudwatchLogsLogDestinationParameters) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s CloudwatchLogsLogDestinationParameters) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CloudwatchLogsLogDestinationParameters) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CloudwatchLogsLogDestinationParameters"} + if s.LogGroupArn == nil { + invalidParams.Add(request.NewErrParamRequired("LogGroupArn")) + } + if s.LogGroupArn != nil && len(*s.LogGroupArn) < 1 { + invalidParams.Add(request.NewErrParamMinLen("LogGroupArn", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetLogGroupArn sets the LogGroupArn field's value. +func (s *CloudwatchLogsLogDestinationParameters) SetLogGroupArn(v string) *CloudwatchLogsLogDestinationParameters { + s.LogGroupArn = &v + return s +} + // An action you attempted resulted in an exception. type ConflictException struct { _ struct{} `type:"structure"` @@ -1716,6 +1801,9 @@ type CreatePipeInput struct { // The parameters required to set up enrichment on your pipe. EnrichmentParameters *PipeEnrichmentParameters `type:"structure"` + // The logging configuration settings for the pipe. + LogConfiguration *PipeLogConfigurationParameters `type:"structure"` + // The name of the pipe. // // Name is a required field @@ -1743,6 +1831,10 @@ type CreatePipeInput struct { Target *string `min:"1" type:"string" required:"true"` // The parameters required to set up a target for your pipe. + // + // For more information about pipe target parameters, including how to use dynamic + // path parameters, see Target parameters (https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-pipes-event-target.html) + // in the Amazon EventBridge User Guide. TargetParameters *PipeTargetParameters `type:"structure"` } @@ -1794,6 +1886,11 @@ func (s *CreatePipeInput) Validate() error { if s.Target != nil && len(*s.Target) < 1 { invalidParams.Add(request.NewErrParamMinLen("Target", 1)) } + if s.LogConfiguration != nil { + if err := s.LogConfiguration.Validate(); err != nil { + invalidParams.AddNested("LogConfiguration", err.(request.ErrInvalidParams)) + } + } if s.SourceParameters != nil { if err := s.SourceParameters.Validate(); err != nil { invalidParams.AddNested("SourceParameters", err.(request.ErrInvalidParams)) @@ -1835,6 +1932,12 @@ func (s *CreatePipeInput) SetEnrichmentParameters(v *PipeEnrichmentParameters) * return s } +// SetLogConfiguration sets the LogConfiguration field's value. +func (s *CreatePipeInput) SetLogConfiguration(v *PipeLogConfigurationParameters) *CreatePipeInput { + s.LogConfiguration = v + return s +} + // SetName sets the Name field's value. func (s *CreatePipeInput) SetName(v string) *CreatePipeInput { s.Name = &v @@ -1959,8 +2062,10 @@ func (s *CreatePipeOutput) SetName(v string) *CreatePipeOutput { type DeadLetterConfig struct { _ struct{} `type:"structure"` - // The ARN of the Amazon SQS queue specified as the target for the dead-letter - // queue. + // The ARN of the specified target for the dead-letter queue. + // + // For Amazon Kinesis stream and Amazon DynamoDB stream sources, specify either + // an Amazon SNS topic or Amazon SQS queue ARN. 
Arn *string `min:"1" type:"string"` } @@ -2208,6 +2313,9 @@ type DescribePipeOutput struct { // (YYYY-MM-DDThh:mm:ss.sTZD). LastModifiedTime *time.Time `type:"timestamp"` + // The logging configuration settings for the pipe. + LogConfiguration *PipeLogConfiguration `type:"structure"` + // The name of the pipe. Name *string `min:"1" type:"string"` @@ -2230,6 +2338,10 @@ type DescribePipeOutput struct { Target *string `min:"1" type:"string"` // The parameters required to set up a target for your pipe. + // + // For more information about pipe target parameters, including how to use dynamic + // path parameters, see Target parameters (https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-pipes-event-target.html) + // in the Amazon EventBridge User Guide. TargetParameters *PipeTargetParameters `type:"structure"` } @@ -2299,6 +2411,12 @@ func (s *DescribePipeOutput) SetLastModifiedTime(v time.Time) *DescribePipeOutpu return s } +// SetLogConfiguration sets the LogConfiguration field's value. +func (s *DescribePipeOutput) SetLogConfiguration(v *PipeLogConfiguration) *DescribePipeOutput { + s.LogConfiguration = v + return s +} + // SetName sets the Name field's value. func (s *DescribePipeOutput) SetName(v string) *DescribePipeOutput { s.Name = &v @@ -2965,8 +3083,12 @@ func (s *Filter) SetPattern(v string) *Filter { return s } -// The collection of event patterns used to filter events. For more information, -// see Events and Event Patterns (https://docs.aws.amazon.com/eventbridge/latest/userguide/eventbridge-and-event-patterns.html) +// The collection of event patterns used to filter events. +// +// To remove a filter, specify a FilterCriteria object with an empty array of +// Filter objects. +// +// For more information, see Events and Event Patterns (https://docs.aws.amazon.com/eventbridge/latest/userguide/eventbridge-and-event-patterns.html) // in the Amazon EventBridge User Guide. type FilterCriteria struct { _ struct{} `type:"structure"` @@ -2999,6 +3121,90 @@ func (s *FilterCriteria) SetFilters(v []*Filter) *FilterCriteria { return s } +// The Amazon Kinesis Data Firehose logging configuration settings for the pipe. +type FirehoseLogDestination struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the Kinesis Data Firehose delivery stream + // to which EventBridge delivers the pipe log records. + DeliveryStreamArn *string `min:"1" type:"string"` +} + +// String returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s FirehoseLogDestination) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s FirehoseLogDestination) GoString() string { + return s.String() +} + +// SetDeliveryStreamArn sets the DeliveryStreamArn field's value. +func (s *FirehoseLogDestination) SetDeliveryStreamArn(v string) *FirehoseLogDestination { + s.DeliveryStreamArn = &v + return s +} + +// The Amazon Kinesis Data Firehose logging configuration settings for the pipe. 
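+
+// A short sketch of reading the new log configuration returned by DescribePipe,
+// reusing the *pipes.Pipes client (svc) from the earlier sketch; the pipe name
+// is a placeholder, and only destinations that are actually configured come
+// back non-nil.
+//
+//	out, err := svc.DescribePipe(&pipes.DescribePipeInput{
+//		Name: aws.String("my-pipe"), // placeholder
+//	})
+//	if err == nil && out.LogConfiguration != nil {
+//		if d := out.LogConfiguration.CloudwatchLogsLogDestination; d != nil {
+//			fmt.Println("CloudWatch Logs group:", aws.StringValue(d.LogGroupArn))
+//		}
+//		if d := out.LogConfiguration.FirehoseLogDestination; d != nil {
+//			fmt.Println("Firehose delivery stream:", aws.StringValue(d.DeliveryStreamArn))
+//		}
+//	}
+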
+type FirehoseLogDestinationParameters struct { + _ struct{} `type:"structure"` + + // Specifies the Amazon Resource Name (ARN) of the Kinesis Data Firehose delivery + // stream to which EventBridge delivers the pipe log records. + // + // DeliveryStreamArn is a required field + DeliveryStreamArn *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s FirehoseLogDestinationParameters) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s FirehoseLogDestinationParameters) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *FirehoseLogDestinationParameters) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "FirehoseLogDestinationParameters"} + if s.DeliveryStreamArn == nil { + invalidParams.Add(request.NewErrParamRequired("DeliveryStreamArn")) + } + if s.DeliveryStreamArn != nil && len(*s.DeliveryStreamArn) < 1 { + invalidParams.Add(request.NewErrParamMinLen("DeliveryStreamArn", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDeliveryStreamArn sets the DeliveryStreamArn field's value. +func (s *FirehoseLogDestinationParameters) SetDeliveryStreamArn(v string) *FirehoseLogDestinationParameters { + s.DeliveryStreamArn = &v + return s +} + // This exception occurs due to unexpected causes. type InternalException struct { _ struct{} `type:"structure"` @@ -3721,6 +3927,8 @@ type PipeEnrichmentParameters struct { // event itself is passed to the enrichment. For more information, see The JavaScript // Object Notation (JSON) Data Interchange Format (http://www.rfc-editor.org/rfc/rfc7159.txt). // + // To remove an input template, specify an empty string. + // // InputTemplate is a sensitive parameter and its value will be // replaced with "sensitive" in string returned by PipeEnrichmentParameters's // String and GoString methods. @@ -3757,6 +3965,208 @@ func (s *PipeEnrichmentParameters) SetInputTemplate(v string) *PipeEnrichmentPar return s } +// The logging configuration settings for the pipe. +type PipeLogConfiguration struct { + _ struct{} `type:"structure"` + + // The Amazon CloudWatch Logs logging configuration settings for the pipe. + CloudwatchLogsLogDestination *CloudwatchLogsLogDestination `type:"structure"` + + // The Amazon Kinesis Data Firehose logging configuration settings for the pipe. + FirehoseLogDestination *FirehoseLogDestination `type:"structure"` + + // Whether the execution data (specifically, the payload, awsRequest, and awsResponse + // fields) is included in the log messages for this pipe. + // + // This applies to all log destinations for the pipe. + // + // For more information, see Including execution data in logs (https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-pipes-logs.html#eb-pipes-logs-execution-data) + // in the Amazon EventBridge User Guide. + IncludeExecutionData []*string `type:"list" enum:"IncludeExecutionDataOption"` + + // The level of logging detail to include. 
This applies to all log destinations
+	// for the pipe.
+	Level *string `type:"string" enum:"LogLevel"`
+
+	// The Amazon S3 logging configuration settings for the pipe.
+	S3LogDestination *S3LogDestination `type:"structure"`
+}
+
+// String returns the string representation.
+//
+// API parameter values that are decorated as "sensitive" in the API will not
+// be included in the string output. The member name will be present, but the
+// value will be replaced with "sensitive".
+func (s PipeLogConfiguration) String() string {
+	return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation.
+//
+// API parameter values that are decorated as "sensitive" in the API will not
+// be included in the string output. The member name will be present, but the
+// value will be replaced with "sensitive".
+func (s PipeLogConfiguration) GoString() string {
+	return s.String()
+}
+
+// SetCloudwatchLogsLogDestination sets the CloudwatchLogsLogDestination field's value.
+func (s *PipeLogConfiguration) SetCloudwatchLogsLogDestination(v *CloudwatchLogsLogDestination) *PipeLogConfiguration {
+	s.CloudwatchLogsLogDestination = v
+	return s
+}
+
+// SetFirehoseLogDestination sets the FirehoseLogDestination field's value.
+func (s *PipeLogConfiguration) SetFirehoseLogDestination(v *FirehoseLogDestination) *PipeLogConfiguration {
+	s.FirehoseLogDestination = v
+	return s
+}
+
+// SetIncludeExecutionData sets the IncludeExecutionData field's value.
+func (s *PipeLogConfiguration) SetIncludeExecutionData(v []*string) *PipeLogConfiguration {
+	s.IncludeExecutionData = v
+	return s
+}
+
+// SetLevel sets the Level field's value.
+func (s *PipeLogConfiguration) SetLevel(v string) *PipeLogConfiguration {
+	s.Level = &v
+	return s
+}
+
+// SetS3LogDestination sets the S3LogDestination field's value.
+func (s *PipeLogConfiguration) SetS3LogDestination(v *S3LogDestination) *PipeLogConfiguration {
+	s.S3LogDestination = v
+	return s
+}
+
+// Specifies the logging configuration settings for the pipe.
+//
+// When you call UpdatePipe, EventBridge updates the fields in the PipeLogConfigurationParameters
+// object atomically as one and overrides existing values. This is by design.
+// If you don't specify an optional field in any of the Amazon Web Services
+// service parameters objects (CloudwatchLogsLogDestinationParameters, FirehoseLogDestinationParameters,
+// or S3LogDestinationParameters), EventBridge sets that field to its system-default
+// value during the update.
+//
+// For example, suppose when you created the pipe you specified a Kinesis Data
+// Firehose stream log destination. You then update the pipe to add an Amazon
+// S3 log destination. In addition to specifying the S3LogDestinationParameters
+// for the new log destination, you must also specify the fields in the FirehoseLogDestinationParameters
+// object in order to retain the Kinesis Data Firehose stream log destination.
+//
+// For more information on generating pipe log records, see Log EventBridge
+// Pipes (https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-pipes-logs.html)
+// in the Amazon EventBridge User Guide.
+type PipeLogConfigurationParameters struct {
+	_ struct{} `type:"structure"`
+
+	// The Amazon CloudWatch Logs logging configuration settings for the pipe.
+	CloudwatchLogsLogDestination *CloudwatchLogsLogDestinationParameters `type:"structure"`
+
+	// The Amazon Kinesis Data Firehose logging configuration settings for the pipe.
+ FirehoseLogDestination *FirehoseLogDestinationParameters `type:"structure"` + + // Specify ON to include the execution data (specifically, the payload and awsRequest + // fields) in the log messages for this pipe. + // + // This applies to all log destinations for the pipe. + // + // For more information, see Including execution data in logs (https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-pipes-logs.html#eb-pipes-logs-execution-data) + // in the Amazon EventBridge User Guide. + // + // The default is OFF. + IncludeExecutionData []*string `type:"list" enum:"IncludeExecutionDataOption"` + + // The level of logging detail to include. This applies to all log destinations + // for the pipe. + // + // For more information, see Specifying EventBridge Pipes log level (https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-pipes-logs.html#eb-pipes-logs-level) + // in the Amazon EventBridge User Guide. + // + // Level is a required field + Level *string `type:"string" required:"true" enum:"LogLevel"` + + // The Amazon S3 logging configuration settings for the pipe. + S3LogDestination *S3LogDestinationParameters `type:"structure"` +} + +// String returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s PipeLogConfigurationParameters) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s PipeLogConfigurationParameters) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *PipeLogConfigurationParameters) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PipeLogConfigurationParameters"} + if s.Level == nil { + invalidParams.Add(request.NewErrParamRequired("Level")) + } + if s.CloudwatchLogsLogDestination != nil { + if err := s.CloudwatchLogsLogDestination.Validate(); err != nil { + invalidParams.AddNested("CloudwatchLogsLogDestination", err.(request.ErrInvalidParams)) + } + } + if s.FirehoseLogDestination != nil { + if err := s.FirehoseLogDestination.Validate(); err != nil { + invalidParams.AddNested("FirehoseLogDestination", err.(request.ErrInvalidParams)) + } + } + if s.S3LogDestination != nil { + if err := s.S3LogDestination.Validate(); err != nil { + invalidParams.AddNested("S3LogDestination", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCloudwatchLogsLogDestination sets the CloudwatchLogsLogDestination field's value. +func (s *PipeLogConfigurationParameters) SetCloudwatchLogsLogDestination(v *CloudwatchLogsLogDestinationParameters) *PipeLogConfigurationParameters { + s.CloudwatchLogsLogDestination = v + return s +} + +// SetFirehoseLogDestination sets the FirehoseLogDestination field's value. +func (s *PipeLogConfigurationParameters) SetFirehoseLogDestination(v *FirehoseLogDestinationParameters) *PipeLogConfigurationParameters { + s.FirehoseLogDestination = v + return s +} + +// SetIncludeExecutionData sets the IncludeExecutionData field's value. 
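+
+// A minimal sketch of turning on CloudWatch Logs delivery for a pipe through
+// the log configuration parameters above, assuming UpdatePipe accepts a
+// LogConfiguration of this type as described in the comment above; the log
+// group ARN is a placeholder and "INFO" stands in for whichever LogLevel
+// value you need. Level must be set whenever a log configuration is supplied.
+//
+//	_, err := svc.UpdatePipe(&pipes.UpdatePipeInput{
+//		Name:    aws.String("my-pipe"),                                   // placeholder
+//		RoleArn: aws.String("arn:aws:iam::123456789012:role/MyPipeRole"), // placeholder
+//		LogConfiguration: &pipes.PipeLogConfigurationParameters{
+//			Level: aws.String("INFO"),
+//			CloudwatchLogsLogDestination: &pipes.CloudwatchLogsLogDestinationParameters{
+//				LogGroupArn: aws.String("arn:aws:logs:us-east-1:123456789012:log-group:/aws/pipes/my-pipe"), // placeholder
+//			},
+//		},
+//	})
+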
+func (s *PipeLogConfigurationParameters) SetIncludeExecutionData(v []*string) *PipeLogConfigurationParameters { + s.IncludeExecutionData = v + return s +} + +// SetLevel sets the Level field's value. +func (s *PipeLogConfigurationParameters) SetLevel(v string) *PipeLogConfigurationParameters { + s.Level = &v + return s +} + +// SetS3LogDestination sets the S3LogDestination field's value. +func (s *PipeLogConfigurationParameters) SetS3LogDestination(v *S3LogDestinationParameters) *PipeLogConfigurationParameters { + s.S3LogDestination = v + return s +} + // The parameters for using an Active MQ broker as a source. type PipeSourceActiveMQBrokerParameters struct { _ struct{} `type:"structure"` @@ -4255,8 +4665,12 @@ type PipeSourceParameters struct { // The parameters for using a DynamoDB stream as a source. DynamoDBStreamParameters *PipeSourceDynamoDBStreamParameters `type:"structure"` - // The collection of event patterns used to filter events. For more information, - // see Events and Event Patterns (https://docs.aws.amazon.com/eventbridge/latest/userguide/eventbridge-and-event-patterns.html) + // The collection of event patterns used to filter events. + // + // To remove a filter, specify a FilterCriteria object with an empty array of + // Filter objects. + // + // For more information, see Events and Event Patterns (https://docs.aws.amazon.com/eventbridge/latest/userguide/eventbridge-and-event-patterns.html) // in the Amazon EventBridge User Guide. FilterCriteria *FilterCriteria `type:"structure"` @@ -5156,8 +5570,6 @@ type PipeTargetEventBridgeEventBusParameters struct { // The URL subdomain of the endpoint. For example, if the URL for Endpoint is // https://abcde.veo.endpoints.event.amazonaws.com, then the EndpointId is abcde.veo. // - // When using Java, you must include auth-crt on the class path. - // // EndpointId is a sensitive parameter and its value will be // replaced with "sensitive" in string returned by PipeTargetEventBridgeEventBusParameters's // String and GoString methods. @@ -5304,7 +5716,7 @@ func (s *PipeTargetHttpParameters) SetQueryStringParameters(v map[string]*string return s } -// The parameters for using a Kinesis stream as a source. +// The parameters for using a Kinesis stream as a target. type PipeTargetKinesisStreamParameters struct { _ struct{} `type:"structure"` @@ -5366,18 +5778,19 @@ func (s *PipeTargetKinesisStreamParameters) SetPartitionKey(v string) *PipeTarge type PipeTargetLambdaFunctionParameters struct { _ struct{} `type:"structure"` - // Choose from the following options. + // Specify whether to invoke the function synchronously or asynchronously. // - // * RequestResponse (default) - Invoke the function synchronously. Keep - // the connection open until the function returns a response or times out. - // The API response includes the function response and additional data. + // * REQUEST_RESPONSE (default) - Invoke synchronously. This corresponds + // to the RequestResponse option in the InvocationType parameter for the + // Lambda Invoke (https://docs.aws.amazon.com/lambda/latest/dg/API_Invoke.html#API_Invoke_RequestSyntax) + // API. // - // * Event - Invoke the function asynchronously. Send events that fail multiple - // times to the function's dead-letter queue (if it's configured). The API - // response only includes a status code. + // * FIRE_AND_FORGET - Invoke asynchronously. 
This corresponds to the Event + // option in the InvocationType parameter for the Lambda Invoke (https://docs.aws.amazon.com/lambda/latest/dg/API_Invoke.html#API_Invoke_RequestSyntax) + // API. // - // * DryRun - Validate parameter values and verify that the user or role - // has permission to invoke the function. + // For more information, see Invocation types (https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-pipes.html#pipes-invocation) + // in the Amazon EventBridge User Guide. InvocationType *string `type:"string" enum:"PipeTargetInvocationType"` } @@ -5406,6 +5819,10 @@ func (s *PipeTargetLambdaFunctionParameters) SetInvocationType(v string) *PipeTa } // The parameters required to set up a target for your pipe. +// +// For more information about pipe target parameters, including how to use dynamic +// path parameters, see Target parameters (https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-pipes-event-target.html) +// in the Amazon EventBridge User Guide. type PipeTargetParameters struct { _ struct{} `type:"structure"` @@ -5429,25 +5846,27 @@ type PipeTargetParameters struct { // itself is passed to the target. For more information, see The JavaScript // Object Notation (JSON) Data Interchange Format (http://www.rfc-editor.org/rfc/rfc7159.txt). // + // To remove an input template, specify an empty string. + // // InputTemplate is a sensitive parameter and its value will be // replaced with "sensitive" in string returned by PipeTargetParameters's // String and GoString methods. InputTemplate *string `type:"string" sensitive:"true"` - // The parameters for using a Kinesis stream as a source. + // The parameters for using a Kinesis stream as a target. KinesisStreamParameters *PipeTargetKinesisStreamParameters `type:"structure"` // The parameters for using a Lambda function as a target. LambdaFunctionParameters *PipeTargetLambdaFunctionParameters `type:"structure"` // These are custom parameters to be used when the target is a Amazon Redshift - // cluster to invoke the Amazon Redshift Data API ExecuteStatement. + // cluster to invoke the Amazon Redshift Data API BatchExecuteStatement. RedshiftDataParameters *PipeTargetRedshiftDataParameters `type:"structure"` // The parameters for using a SageMaker pipeline as a target. SageMakerPipelineParameters *PipeTargetSageMakerPipelineParameters `type:"structure"` - // The parameters for using a Amazon SQS stream as a source. + // The parameters for using a Amazon SQS stream as a target. SqsQueueParameters *PipeTargetSqsQueueParameters `type:"structure"` // The parameters for using a Step Functions state machine as a target. @@ -5590,7 +6009,7 @@ func (s *PipeTargetParameters) SetStepFunctionStateMachineParameters(v *PipeTarg } // These are custom parameters to be used when the target is a Amazon Redshift -// cluster to invoke the Amazon Redshift Data API ExecuteStatement. +// cluster to invoke the Amazon Redshift Data API BatchExecuteStatement. type PipeTargetRedshiftDataParameters struct { _ struct{} `type:"structure"` @@ -5611,7 +6030,7 @@ type PipeTargetRedshiftDataParameters struct { DbUser *string `min:"1" type:"string" sensitive:"true"` // The name or ARN of the secret that enables access to the database. Required - // when authenticating using SageMaker. + // when authenticating using Secrets Manager. SecretManagerArn *string `min:"1" type:"string"` // The SQL statement text to run. 
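The PipeLogConfigurationParameters documentation above, together with the LogConfiguration member added to UpdatePipeInput further down in this file, describes an overwrite-on-update behavior that is easy to get wrong. The following is a minimal, illustrative sketch of re-specifying an existing Kinesis Data Firehose log destination while adding an Amazon S3 destination; it is not part of this patch. The pipe name, role ARN, account ID, bucket, and delivery-stream ARN are placeholders, and the DeliveryStreamArn member of FirehoseLogDestinationParameters is assumed from the service model rather than shown in these hunks.

package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/pipes"
)

func main() {
	svc := pipes.New(session.Must(session.NewSession()))

	out, err := svc.UpdatePipe(&pipes.UpdatePipeInput{
		Name:    aws.String("my-pipe"),
		RoleArn: aws.String("arn:aws:iam::111122223333:role/my-pipe-role"),
		LogConfiguration: &pipes.PipeLogConfigurationParameters{
			// Level is the only required member of PipeLogConfigurationParameters.
			Level: aws.String(pipes.LogLevelInfo),
			// The new Amazon S3 destination being added.
			S3LogDestination: &pipes.S3LogDestinationParameters{
				BucketName:   aws.String("my-pipe-logs"),
				BucketOwner:  aws.String("111122223333"),
				OutputFormat: aws.String(pipes.S3OutputFormatJson),
			},
			// UpdatePipe overwrites the whole log configuration, so the existing
			// Firehose destination must be re-specified or it is dropped.
			// DeliveryStreamArn is assumed here; it is not shown in this patch.
			FirehoseLogDestination: &pipes.FirehoseLogDestinationParameters{
				DeliveryStreamArn: aws.String("arn:aws:firehose:us-east-1:111122223333:deliverystream/my-pipe-logs"),
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out)
}

The same overwrite rule applies to a CloudWatch Logs destination: to retain it, its CloudwatchLogsLogDestinationParameters must also be included in the update.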
@@ -5770,7 +6189,7 @@ func (s *PipeTargetSageMakerPipelineParameters) SetPipelineParameterList(v []*Sa return s } -// The parameters for using a Amazon SQS stream as a source. +// The parameters for using a Amazon SQS stream as a target. type PipeTargetSqsQueueParameters struct { _ struct{} `type:"structure"` @@ -5825,7 +6244,20 @@ func (s *PipeTargetSqsQueueParameters) SetMessageGroupId(v string) *PipeTargetSq type PipeTargetStateMachineParameters struct { _ struct{} `type:"structure"` - // Specify whether to wait for the state machine to finish or not. + // Specify whether to invoke the Step Functions state machine synchronously + // or asynchronously. + // + // * REQUEST_RESPONSE (default) - Invoke synchronously. For more information, + // see StartSyncExecution (https://docs.aws.amazon.com/step-functions/latest/apireference/API_StartSyncExecution.html) + // in the Step Functions API Reference. REQUEST_RESPONSE is not supported + // for STANDARD state machine workflows. + // + // * FIRE_AND_FORGET - Invoke asynchronously. For more information, see StartExecution + // (https://docs.aws.amazon.com/step-functions/latest/apireference/API_StartExecution.html) + // in the Step Functions API Reference. + // + // For more information, see Invocation types (https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-pipes.html#pipes-invocation) + // in the Amazon EventBridge User Guide. InvocationType *string `type:"string" enum:"PipeTargetInvocationType"` } @@ -5963,6 +6395,173 @@ func (s *PlacementStrategy) SetType(v string) *PlacementStrategy { return s } +// The Amazon S3 logging configuration settings for the pipe. +type S3LogDestination struct { + _ struct{} `type:"structure"` + + // The name of the Amazon S3 bucket to which EventBridge delivers the log records + // for the pipe. + BucketName *string `type:"string"` + + // The Amazon Web Services account that owns the Amazon S3 bucket to which EventBridge + // delivers the log records for the pipe. + BucketOwner *string `type:"string"` + + // The format EventBridge uses for the log records. + // + // * json: JSON + // + // * plain: Plain text + // + // * w3c: W3C extended logging file format (https://www.w3.org/TR/WD-logfile) + OutputFormat *string `type:"string" enum:"S3OutputFormat"` + + // The prefix text with which to begin Amazon S3 log object names. + // + // For more information, see Organizing objects using prefixes (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-prefixes.html) + // in the Amazon Simple Storage Service User Guide. + Prefix *string `type:"string"` +} + +// String returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s S3LogDestination) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s S3LogDestination) GoString() string { + return s.String() +} + +// SetBucketName sets the BucketName field's value. +func (s *S3LogDestination) SetBucketName(v string) *S3LogDestination { + s.BucketName = &v + return s +} + +// SetBucketOwner sets the BucketOwner field's value. 
+func (s *S3LogDestination) SetBucketOwner(v string) *S3LogDestination { + s.BucketOwner = &v + return s +} + +// SetOutputFormat sets the OutputFormat field's value. +func (s *S3LogDestination) SetOutputFormat(v string) *S3LogDestination { + s.OutputFormat = &v + return s +} + +// SetPrefix sets the Prefix field's value. +func (s *S3LogDestination) SetPrefix(v string) *S3LogDestination { + s.Prefix = &v + return s +} + +// The Amazon S3 logging configuration settings for the pipe. +type S3LogDestinationParameters struct { + _ struct{} `type:"structure"` + + // Specifies the name of the Amazon S3 bucket to which EventBridge delivers + // the log records for the pipe. + // + // BucketName is a required field + BucketName *string `min:"3" type:"string" required:"true"` + + // Specifies the Amazon Web Services account that owns the Amazon S3 bucket + // to which EventBridge delivers the log records for the pipe. + // + // BucketOwner is a required field + BucketOwner *string `type:"string" required:"true"` + + // How EventBridge should format the log records. + // + // * json: JSON + // + // * plain: Plain text + // + // * w3c: W3C extended logging file format (https://www.w3.org/TR/WD-logfile) + OutputFormat *string `type:"string" enum:"S3OutputFormat"` + + // Specifies any prefix text with which to begin Amazon S3 log object names. + // + // You can use prefixes to organize the data that you store in Amazon S3 buckets. + // A prefix is a string of characters at the beginning of the object key name. + // A prefix can be any length, subject to the maximum length of the object key + // name (1,024 bytes). For more information, see Organizing objects using prefixes + // (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-prefixes.html) + // in the Amazon Simple Storage Service User Guide. + Prefix *string `type:"string"` +} + +// String returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s S3LogDestinationParameters) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s S3LogDestinationParameters) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *S3LogDestinationParameters) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "S3LogDestinationParameters"} + if s.BucketName == nil { + invalidParams.Add(request.NewErrParamRequired("BucketName")) + } + if s.BucketName != nil && len(*s.BucketName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("BucketName", 3)) + } + if s.BucketOwner == nil { + invalidParams.Add(request.NewErrParamRequired("BucketOwner")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucketName sets the BucketName field's value. +func (s *S3LogDestinationParameters) SetBucketName(v string) *S3LogDestinationParameters { + s.BucketName = &v + return s +} + +// SetBucketOwner sets the BucketOwner field's value. 
+func (s *S3LogDestinationParameters) SetBucketOwner(v string) *S3LogDestinationParameters { + s.BucketOwner = &v + return s +} + +// SetOutputFormat sets the OutputFormat field's value. +func (s *S3LogDestinationParameters) SetOutputFormat(v string) *S3LogDestinationParameters { + s.OutputFormat = &v + return s +} + +// SetPrefix sets the Prefix field's value. +func (s *S3LogDestinationParameters) SetPrefix(v string) *S3LogDestinationParameters { + s.Prefix = &v + return s +} + // Name/Value pair of a parameter to start execution of a SageMaker Model Building // Pipeline. type SageMakerPipelineParameter struct { @@ -6835,6 +7434,9 @@ type UpdatePipeInput struct { // The parameters required to set up enrichment on your pipe. EnrichmentParameters *PipeEnrichmentParameters `type:"structure"` + // The logging configuration settings for the pipe. + LogConfiguration *PipeLogConfigurationParameters `type:"structure"` + // The name of the pipe. // // Name is a required field @@ -6852,6 +7454,10 @@ type UpdatePipeInput struct { Target *string `min:"1" type:"string"` // The parameters required to set up a target for your pipe. + // + // For more information about pipe target parameters, including how to use dynamic + // path parameters, see Target parameters (https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-pipes-event-target.html) + // in the Amazon EventBridge User Guide. TargetParameters *PipeTargetParameters `type:"structure"` } @@ -6891,6 +7497,11 @@ func (s *UpdatePipeInput) Validate() error { if s.Target != nil && len(*s.Target) < 1 { invalidParams.Add(request.NewErrParamMinLen("Target", 1)) } + if s.LogConfiguration != nil { + if err := s.LogConfiguration.Validate(); err != nil { + invalidParams.AddNested("LogConfiguration", err.(request.ErrInvalidParams)) + } + } if s.SourceParameters != nil { if err := s.SourceParameters.Validate(); err != nil { invalidParams.AddNested("SourceParameters", err.(request.ErrInvalidParams)) @@ -6932,6 +7543,12 @@ func (s *UpdatePipeInput) SetEnrichmentParameters(v *PipeEnrichmentParameters) * return s } +// SetLogConfiguration sets the LogConfiguration field's value. +func (s *UpdatePipeInput) SetLogConfiguration(v *PipeLogConfigurationParameters) *UpdatePipeInput { + s.LogConfiguration = v + return s +} + // SetName sets the Name field's value. func (s *UpdatePipeInput) SetName(v string) *UpdatePipeInput { s.Name = &v @@ -7432,8 +8049,12 @@ type UpdatePipeSourceParameters struct { // The parameters for using a DynamoDB stream as a source. DynamoDBStreamParameters *UpdatePipeSourceDynamoDBStreamParameters `type:"structure"` - // The collection of event patterns used to filter events. For more information, - // see Events and Event Patterns (https://docs.aws.amazon.com/eventbridge/latest/userguide/eventbridge-and-event-patterns.html) + // The collection of event patterns used to filter events. + // + // To remove a filter, specify a FilterCriteria object with an empty array of + // Filter objects. + // + // For more information, see Events and Event Patterns (https://docs.aws.amazon.com/eventbridge/latest/userguide/eventbridge-and-event-patterns.html) // in the Amazon EventBridge User Guide. 
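+	//
+	// A minimal sketch, assuming svc is an existing *Pipes client (the pipe name and
+	// role ARN are placeholders, and the Filters member name is taken from the service
+	// model rather than from this hunk), of clearing an existing filter during an update:
+	//
+	//	svc.UpdatePipe(&pipes.UpdatePipeInput{
+	//	    Name:    aws.String("my-pipe"),
+	//	    RoleArn: aws.String("arn:aws:iam::111122223333:role/my-pipe-role"),
+	//	    SourceParameters: &pipes.UpdatePipeSourceParameters{
+	//	        FilterCriteria: &pipes.FilterCriteria{Filters: []*pipes.Filter{}},
+	//	    },
+	//	})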
FilterCriteria *FilterCriteria `type:"structure"` @@ -7990,6 +8611,18 @@ func EcsResourceRequirementType_Values() []string { } } +const ( + // IncludeExecutionDataOptionAll is a IncludeExecutionDataOption enum value + IncludeExecutionDataOptionAll = "ALL" +) + +// IncludeExecutionDataOption_Values returns all elements of the IncludeExecutionDataOption enum +func IncludeExecutionDataOption_Values() []string { + return []string{ + IncludeExecutionDataOptionAll, + } +} + const ( // KinesisStreamStartPositionTrimHorizon is a KinesisStreamStartPosition enum value KinesisStreamStartPositionTrimHorizon = "TRIM_HORIZON" @@ -8030,6 +8663,30 @@ func LaunchType_Values() []string { } } +const ( + // LogLevelOff is a LogLevel enum value + LogLevelOff = "OFF" + + // LogLevelError is a LogLevel enum value + LogLevelError = "ERROR" + + // LogLevelInfo is a LogLevel enum value + LogLevelInfo = "INFO" + + // LogLevelTrace is a LogLevel enum value + LogLevelTrace = "TRACE" +) + +// LogLevel_Values returns all elements of the LogLevel enum +func LogLevel_Values() []string { + return []string{ + LogLevelOff, + LogLevelError, + LogLevelInfo, + LogLevelTrace, + } +} + const ( // MSKStartPositionTrimHorizon is a MSKStartPosition enum value MSKStartPositionTrimHorizon = "TRIM_HORIZON" @@ -8091,6 +8748,18 @@ const ( // PipeStateStopFailed is a PipeState enum value PipeStateStopFailed = "STOP_FAILED" + + // PipeStateDeleteFailed is a PipeState enum value + PipeStateDeleteFailed = "DELETE_FAILED" + + // PipeStateCreateRollbackFailed is a PipeState enum value + PipeStateCreateRollbackFailed = "CREATE_ROLLBACK_FAILED" + + // PipeStateDeleteRollbackFailed is a PipeState enum value + PipeStateDeleteRollbackFailed = "DELETE_ROLLBACK_FAILED" + + // PipeStateUpdateRollbackFailed is a PipeState enum value + PipeStateUpdateRollbackFailed = "UPDATE_ROLLBACK_FAILED" ) // PipeState_Values returns all elements of the PipeState enum @@ -8107,6 +8776,10 @@ func PipeState_Values() []string { PipeStateUpdateFailed, PipeStateStartFailed, PipeStateStopFailed, + PipeStateDeleteFailed, + PipeStateCreateRollbackFailed, + PipeStateDeleteRollbackFailed, + PipeStateUpdateRollbackFailed, } } @@ -8210,6 +8883,26 @@ func RequestedPipeStateDescribeResponse_Values() []string { } } +const ( + // S3OutputFormatJson is a S3OutputFormat enum value + S3OutputFormatJson = "json" + + // S3OutputFormatPlain is a S3OutputFormat enum value + S3OutputFormatPlain = "plain" + + // S3OutputFormatW3c is a S3OutputFormat enum value + S3OutputFormatW3c = "w3c" +) + +// S3OutputFormat_Values returns all elements of the S3OutputFormat enum +func S3OutputFormat_Values() []string { + return []string{ + S3OutputFormatJson, + S3OutputFormatPlain, + S3OutputFormatW3c, + } +} + const ( // SelfManagedKafkaStartPositionTrimHorizon is a SelfManagedKafkaStartPosition enum value SelfManagedKafkaStartPositionTrimHorizon = "TRIM_HORIZON" diff --git a/service/resourceexplorer2/api.go b/service/resourceexplorer2/api.go index a4cb9c412f0..265d2b88c35 100644 --- a/service/resourceexplorer2/api.go +++ b/service/resourceexplorer2/api.go @@ -89,7 +89,7 @@ func (c *ResourceExplorer2) AssociateDefaultViewRequest(input *AssociateDefaultV // // - ThrottlingException // The request failed because you exceeded a rate limit for this operation. -// For more information, see Quotas for Resource Explorer (https://docs.aws.amazon.com/arexug/mainline/quotas.html). 
+// For more information, see Quotas for Resource Explorer (https://docs.aws.amazon.com/resource-explorer/latest/userguide/quotas.html). // // - AccessDeniedException // The credentials that you used to call this operation don't have the minimum @@ -184,7 +184,7 @@ func (c *ResourceExplorer2) BatchGetViewRequest(input *BatchGetViewInput) (req * // // - ThrottlingException // The request failed because you exceeded a rate limit for this operation. -// For more information, see Quotas for Resource Explorer (https://docs.aws.amazon.com/arexug/mainline/quotas.html). +// For more information, see Quotas for Resource Explorer (https://docs.aws.amazon.com/resource-explorer/latest/userguide/quotas.html). // // - AccessDeniedException // The credentials that you used to call this operation don't have the minimum @@ -314,13 +314,22 @@ func (c *ResourceExplorer2) CreateIndexRequest(input *CreateIndexInput) (req *re // the syntax for the operation, and try again. // // - ConflictException -// The request failed because either you specified parameters that didn’t -// match the original request, or you attempted to create a view with a name -// that already exists in this Amazon Web Services Region. +// If you attempted to create a view, then the request failed because either +// you specified parameters that didn’t match the original request, or you +// attempted to create a view with a name that already exists in this Amazon +// Web Services Region. +// +// If you attempted to create an index, then the request failed because either +// you specified parameters that didn't match the original request, or an index +// already exists in the current Amazon Web Services Region. +// +// If you attempted to update an index type to AGGREGATOR, then the request +// failed because you already have an AGGREGATOR index in a different Amazon +// Web Services Region. // // - ThrottlingException // The request failed because you exceeded a rate limit for this operation. -// For more information, see Quotas for Resource Explorer (https://docs.aws.amazon.com/arexug/mainline/quotas.html). +// For more information, see Quotas for Resource Explorer (https://docs.aws.amazon.com/resource-explorer/latest/userguide/quotas.html). // // - AccessDeniedException // The credentials that you used to call this operation don't have the minimum @@ -419,9 +428,18 @@ func (c *ResourceExplorer2) CreateViewRequest(input *CreateViewInput) (req *requ // the syntax for the operation, and try again. // // - ConflictException -// The request failed because either you specified parameters that didn’t -// match the original request, or you attempted to create a view with a name -// that already exists in this Amazon Web Services Region. +// If you attempted to create a view, then the request failed because either +// you specified parameters that didn’t match the original request, or you +// attempted to create a view with a name that already exists in this Amazon +// Web Services Region. +// +// If you attempted to create an index, then the request failed because either +// you specified parameters that didn't match the original request, or an index +// already exists in the current Amazon Web Services Region. +// +// If you attempted to update an index type to AGGREGATOR, then the request +// failed because you already have an AGGREGATOR index in a different Amazon +// Web Services Region. // // - ServiceQuotaExceededException // The request failed because it exceeds a service quota. 
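Given the ConflictException behavior documented above, callers that turn on Resource Explorer often treat an existing index in the Region as a benign condition rather than a failure. Below is a minimal sketch of that pattern, assuming default credentials and that the ErrCodeConflictException constant is generated in this package's errors.go (it is not shown in these hunks); the example is illustrative only and not part of this patch.

package main

import (
	"errors"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/resourceexplorer2"
)

func main() {
	svc := resourceexplorer2.New(session.Must(session.NewSession()))

	// CreateIndex has no required members; ClientToken is filled in automatically.
	out, err := svc.CreateIndex(&resourceexplorer2.CreateIndexInput{})
	if err != nil {
		var aerr awserr.Error
		if errors.As(err, &aerr) && aerr.Code() == resourceexplorer2.ErrCodeConflictException {
			// An index already exists in this Region (or an AGGREGATOR index exists
			// in another Region), so there is nothing to create.
			fmt.Println("index already exists:", aerr.Message())
			return
		}
		log.Fatal(err)
	}
	fmt.Println(out)
}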
@@ -431,7 +449,7 @@ func (c *ResourceExplorer2) CreateViewRequest(input *CreateViewInput) (req *requ // // - ThrottlingException // The request failed because you exceeded a rate limit for this operation. -// For more information, see Quotas for Resource Explorer (https://docs.aws.amazon.com/arexug/mainline/quotas.html). +// For more information, see Quotas for Resource Explorer (https://docs.aws.amazon.com/resource-explorer/latest/userguide/quotas.html). // // - AccessDeniedException // The credentials that you used to call this operation don't have the minimum @@ -537,7 +555,7 @@ func (c *ResourceExplorer2) DeleteIndexRequest(input *DeleteIndexInput) (req *re // // - ThrottlingException // The request failed because you exceeded a rate limit for this operation. -// For more information, see Quotas for Resource Explorer (https://docs.aws.amazon.com/arexug/mainline/quotas.html). +// For more information, see Quotas for Resource Explorer (https://docs.aws.amazon.com/resource-explorer/latest/userguide/quotas.html). // // - AccessDeniedException // The credentials that you used to call this operation don't have the minimum @@ -641,7 +659,7 @@ func (c *ResourceExplorer2) DeleteViewRequest(input *DeleteViewInput) (req *requ // // - ThrottlingException // The request failed because you exceeded a rate limit for this operation. -// For more information, see Quotas for Resource Explorer (https://docs.aws.amazon.com/arexug/mainline/quotas.html). +// For more information, see Quotas for Resource Explorer (https://docs.aws.amazon.com/resource-explorer/latest/userguide/quotas.html). // // - AccessDeniedException // The credentials that you used to call this operation don't have the minimum @@ -731,6 +749,10 @@ func (c *ResourceExplorer2) DisassociateDefaultViewRequest(input *DisassociateDe // // Returned Error Types: // +// - ResourceNotFoundException +// You specified a resource that doesn't exist. Check the ID or ARN that you +// used to identity the resource, and try again. +// // - InternalServerException // The request failed because of internal service error. Try your request again // later. @@ -741,7 +763,7 @@ func (c *ResourceExplorer2) DisassociateDefaultViewRequest(input *DisassociateDe // // - ThrottlingException // The request failed because you exceeded a rate limit for this operation. -// For more information, see Quotas for Resource Explorer (https://docs.aws.amazon.com/arexug/mainline/quotas.html). +// For more information, see Quotas for Resource Explorer (https://docs.aws.amazon.com/resource-explorer/latest/userguide/quotas.html). // // - AccessDeniedException // The credentials that you used to call this operation don't have the minimum @@ -769,6 +791,101 @@ func (c *ResourceExplorer2) DisassociateDefaultViewWithContext(ctx aws.Context, return out, req.Send() } +const opGetAccountLevelServiceConfiguration = "GetAccountLevelServiceConfiguration" + +// GetAccountLevelServiceConfigurationRequest generates a "aws/request.Request" representing the +// client's request for the GetAccountLevelServiceConfiguration operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetAccountLevelServiceConfiguration for more information on using the GetAccountLevelServiceConfiguration +// API call, and error handling. 
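+//
+// A minimal sketch (assuming svc is an existing *ResourceExplorer2 client used by
+// the management account or a delegated administrator) of reading the organization
+// configuration that this operation returns:
+//
+//	out, err := svc.GetAccountLevelServiceConfiguration(&resourceexplorer2.GetAccountLevelServiceConfigurationInput{})
+//	if err == nil && out.OrgConfiguration != nil { // out is now filled
+//	    fmt.Println(aws.StringValue(out.OrgConfiguration.AWSServiceAccessStatus),
+//	        aws.StringValue(out.OrgConfiguration.ServiceLinkedRole))
+//	}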
+// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// // Example sending a request using the GetAccountLevelServiceConfigurationRequest method. +// req, resp := client.GetAccountLevelServiceConfigurationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/resource-explorer-2-2022-07-28/GetAccountLevelServiceConfiguration +func (c *ResourceExplorer2) GetAccountLevelServiceConfigurationRequest(input *GetAccountLevelServiceConfigurationInput) (req *request.Request, output *GetAccountLevelServiceConfigurationOutput) { + op := &request.Operation{ + Name: opGetAccountLevelServiceConfiguration, + HTTPMethod: "POST", + HTTPPath: "/GetAccountLevelServiceConfiguration", + } + + if input == nil { + input = &GetAccountLevelServiceConfigurationInput{} + } + + output = &GetAccountLevelServiceConfigurationOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetAccountLevelServiceConfiguration API operation for AWS Resource Explorer. +// +// Retrieves the status of your account's Amazon Web Services service access, +// and validates the service linked role required to access the multi-account +// search feature. Only the management account or a delegated administrator +// with service access enabled can invoke this API call. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Resource Explorer's +// API operation GetAccountLevelServiceConfiguration for usage and error information. +// +// Returned Error Types: +// +// - ResourceNotFoundException +// You specified a resource that doesn't exist. Check the ID or ARN that you +// used to identity the resource, and try again. +// +// - InternalServerException +// The request failed because of internal service error. Try your request again +// later. +// +// - ThrottlingException +// The request failed because you exceeded a rate limit for this operation. +// For more information, see Quotas for Resource Explorer (https://docs.aws.amazon.com/resource-explorer/latest/userguide/quotas.html). +// +// - AccessDeniedException +// The credentials that you used to call this operation don't have the minimum +// required permissions. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/resource-explorer-2-2022-07-28/GetAccountLevelServiceConfiguration +func (c *ResourceExplorer2) GetAccountLevelServiceConfiguration(input *GetAccountLevelServiceConfigurationInput) (*GetAccountLevelServiceConfigurationOutput, error) { + req, out := c.GetAccountLevelServiceConfigurationRequest(input) + return out, req.Send() +} + +// GetAccountLevelServiceConfigurationWithContext is the same as GetAccountLevelServiceConfiguration with the addition of +// the ability to pass a context and additional request options. +// +// See GetAccountLevelServiceConfiguration for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *ResourceExplorer2) GetAccountLevelServiceConfigurationWithContext(ctx aws.Context, input *GetAccountLevelServiceConfigurationInput, opts ...request.Option) (*GetAccountLevelServiceConfigurationOutput, error) { + req, out := c.GetAccountLevelServiceConfigurationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opGetDefaultView = "GetDefaultView" // GetDefaultViewRequest generates a "aws/request.Request" representing the @@ -839,7 +956,7 @@ func (c *ResourceExplorer2) GetDefaultViewRequest(input *GetDefaultViewInput) (r // // - ThrottlingException // The request failed because you exceeded a rate limit for this operation. -// For more information, see Quotas for Resource Explorer (https://docs.aws.amazon.com/arexug/mainline/quotas.html). +// For more information, see Quotas for Resource Explorer (https://docs.aws.amazon.com/resource-explorer/latest/userguide/quotas.html). // // - AccessDeniedException // The credentials that you used to call this operation don't have the minimum @@ -936,7 +1053,7 @@ func (c *ResourceExplorer2) GetIndexRequest(input *GetIndexInput) (req *request. // // - ThrottlingException // The request failed because you exceeded a rate limit for this operation. -// For more information, see Quotas for Resource Explorer (https://docs.aws.amazon.com/arexug/mainline/quotas.html). +// For more information, see Quotas for Resource Explorer (https://docs.aws.amazon.com/resource-explorer/latest/userguide/quotas.html). // // - AccessDeniedException // The credentials that you used to call this operation don't have the minimum @@ -1035,7 +1152,7 @@ func (c *ResourceExplorer2) GetViewRequest(input *GetViewInput) (req *request.Re // // - ThrottlingException // The request failed because you exceeded a rate limit for this operation. -// For more information, see Quotas for Resource Explorer (https://docs.aws.amazon.com/arexug/mainline/quotas.html). +// For more information, see Quotas for Resource Explorer (https://docs.aws.amazon.com/resource-explorer/latest/userguide/quotas.html). // // - AccessDeniedException // The credentials that you used to call this operation don't have the minimum @@ -1135,7 +1252,7 @@ func (c *ResourceExplorer2) ListIndexesRequest(input *ListIndexesInput) (req *re // // - ThrottlingException // The request failed because you exceeded a rate limit for this operation. -// For more information, see Quotas for Resource Explorer (https://docs.aws.amazon.com/arexug/mainline/quotas.html). +// For more information, see Quotas for Resource Explorer (https://docs.aws.amazon.com/resource-explorer/latest/userguide/quotas.html). // // - AccessDeniedException // The credentials that you used to call this operation don't have the minimum @@ -1214,6 +1331,158 @@ func (c *ResourceExplorer2) ListIndexesPagesWithContext(ctx aws.Context, input * return p.Err() } +const opListIndexesForMembers = "ListIndexesForMembers" + +// ListIndexesForMembersRequest generates a "aws/request.Request" representing the +// client's request for the ListIndexesForMembers operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListIndexesForMembers for more information on using the ListIndexesForMembers +// API call, and error handling. 
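+//
+// A minimal sketch (assuming svc is an existing *ResourceExplorer2 client; the
+// account IDs are placeholders for member accounts in your organization) of listing
+// member indexes and reading a few fields from each result:
+//
+//	out, err := svc.ListIndexesForMembers(&resourceexplorer2.ListIndexesForMembersInput{
+//	    AccountIdList: aws.StringSlice([]string{"111122223333", "444455556666"}),
+//	})
+//	if err == nil { // out.Indexes is now filled
+//	    for _, idx := range out.Indexes {
+//	        fmt.Println(aws.StringValue(idx.AccountId), aws.StringValue(idx.Region), aws.StringValue(idx.Type))
+//	    }
+//	}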
+// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// // Example sending a request using the ListIndexesForMembersRequest method. +// req, resp := client.ListIndexesForMembersRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/resource-explorer-2-2022-07-28/ListIndexesForMembers +func (c *ResourceExplorer2) ListIndexesForMembersRequest(input *ListIndexesForMembersInput) (req *request.Request, output *ListIndexesForMembersOutput) { + op := &request.Operation{ + Name: opListIndexesForMembers, + HTTPMethod: "POST", + HTTPPath: "/ListIndexesForMembers", + Paginator: &request.Paginator{ + InputTokens: []string{"NextToken"}, + OutputTokens: []string{"NextToken"}, + LimitToken: "MaxResults", + TruncationToken: "", + }, + } + + if input == nil { + input = &ListIndexesForMembersInput{} + } + + output = &ListIndexesForMembersOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListIndexesForMembers API operation for AWS Resource Explorer. +// +// Retrieves a list of a member's indexes in all Amazon Web Services Regions +// that are currently collecting resource information for Amazon Web Services +// Resource Explorer. Only the management account or a delegated administrator +// with service access enabled can invoke this API call. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Resource Explorer's +// API operation ListIndexesForMembers for usage and error information. +// +// Returned Error Types: +// +// - InternalServerException +// The request failed because of internal service error. Try your request again +// later. +// +// - ValidationException +// You provided an invalid value for one of the operation's parameters. Check +// the syntax for the operation, and try again. +// +// - ThrottlingException +// The request failed because you exceeded a rate limit for this operation. +// For more information, see Quotas for Resource Explorer (https://docs.aws.amazon.com/resource-explorer/latest/userguide/quotas.html). +// +// - AccessDeniedException +// The credentials that you used to call this operation don't have the minimum +// required permissions. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/resource-explorer-2-2022-07-28/ListIndexesForMembers +func (c *ResourceExplorer2) ListIndexesForMembers(input *ListIndexesForMembersInput) (*ListIndexesForMembersOutput, error) { + req, out := c.ListIndexesForMembersRequest(input) + return out, req.Send() +} + +// ListIndexesForMembersWithContext is the same as ListIndexesForMembers with the addition of +// the ability to pass a context and additional request options. +// +// See ListIndexesForMembers for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *ResourceExplorer2) ListIndexesForMembersWithContext(ctx aws.Context, input *ListIndexesForMembersInput, opts ...request.Option) (*ListIndexesForMembersOutput, error) { + req, out := c.ListIndexesForMembersRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListIndexesForMembersPages iterates over the pages of a ListIndexesForMembers operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListIndexesForMembers method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListIndexesForMembers operation. +// pageNum := 0 +// err := client.ListIndexesForMembersPages(params, +// func(page *resourceexplorer2.ListIndexesForMembersOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +func (c *ResourceExplorer2) ListIndexesForMembersPages(input *ListIndexesForMembersInput, fn func(*ListIndexesForMembersOutput, bool) bool) error { + return c.ListIndexesForMembersPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListIndexesForMembersPagesWithContext same as ListIndexesForMembersPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ResourceExplorer2) ListIndexesForMembersPagesWithContext(ctx aws.Context, input *ListIndexesForMembersInput, fn func(*ListIndexesForMembersOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListIndexesForMembersInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListIndexesForMembersRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + for p.Next() { + if !fn(p.Page().(*ListIndexesForMembersOutput), !p.HasNextPage()) { + break + } + } + + return p.Err() +} + const opListSupportedResourceTypes = "ListSupportedResourceTypes" // ListSupportedResourceTypesRequest generates a "aws/request.Request" representing the @@ -1285,7 +1554,7 @@ func (c *ResourceExplorer2) ListSupportedResourceTypesRequest(input *ListSupport // // - ThrottlingException // The request failed because you exceeded a rate limit for this operation. -// For more information, see Quotas for Resource Explorer (https://docs.aws.amazon.com/arexug/mainline/quotas.html). +// For more information, see Quotas for Resource Explorer (https://docs.aws.amazon.com/resource-explorer/latest/userguide/quotas.html). // // - AccessDeniedException // The credentials that you used to call this operation don't have the minimum @@ -1435,7 +1704,7 @@ func (c *ResourceExplorer2) ListTagsForResourceRequest(input *ListTagsForResourc // // - ThrottlingException // The request failed because you exceeded a rate limit for this operation. -// For more information, see Quotas for Resource Explorer (https://docs.aws.amazon.com/arexug/mainline/quotas.html). +// For more information, see Quotas for Resource Explorer (https://docs.aws.amazon.com/resource-explorer/latest/userguide/quotas.html). 
// // - AccessDeniedException // The credentials that you used to call this operation don't have the minimum @@ -1540,7 +1809,7 @@ func (c *ResourceExplorer2) ListViewsRequest(input *ListViewsInput) (req *reques // // - ThrottlingException // The request failed because you exceeded a rate limit for this operation. -// For more information, see Quotas for Resource Explorer (https://docs.aws.amazon.com/arexug/mainline/quotas.html). +// For more information, see Quotas for Resource Explorer (https://docs.aws.amazon.com/resource-explorer/latest/userguide/quotas.html). // // - AccessDeniedException // The credentials that you used to call this operation don't have the minimum @@ -1710,7 +1979,7 @@ func (c *ResourceExplorer2) SearchRequest(input *SearchInput) (req *request.Requ // // - ThrottlingException // The request failed because you exceeded a rate limit for this operation. -// For more information, see Quotas for Resource Explorer (https://docs.aws.amazon.com/arexug/mainline/quotas.html). +// For more information, see Quotas for Resource Explorer (https://docs.aws.amazon.com/resource-explorer/latest/userguide/quotas.html). // // - AccessDeniedException // The credentials that you used to call this operation don't have the minimum @@ -1854,16 +2123,25 @@ func (c *ResourceExplorer2) TagResourceRequest(input *TagResourceInput) (req *re // the syntax for the operation, and try again. // // - ConflictException -// The request failed because either you specified parameters that didn’t -// match the original request, or you attempted to create a view with a name -// that already exists in this Amazon Web Services Region. +// If you attempted to create a view, then the request failed because either +// you specified parameters that didn’t match the original request, or you +// attempted to create a view with a name that already exists in this Amazon +// Web Services Region. +// +// If you attempted to create an index, then the request failed because either +// you specified parameters that didn't match the original request, or an index +// already exists in the current Amazon Web Services Region. +// +// If you attempted to update an index type to AGGREGATOR, then the request +// failed because you already have an AGGREGATOR index in a different Amazon +// Web Services Region. // // - UnauthorizedException // The principal making the request isn't permitted to perform the operation. // // - ThrottlingException // The request failed because you exceeded a rate limit for this operation. -// For more information, see Quotas for Resource Explorer (https://docs.aws.amazon.com/arexug/mainline/quotas.html). +// For more information, see Quotas for Resource Explorer (https://docs.aws.amazon.com/resource-explorer/latest/userguide/quotas.html). // // - AccessDeniedException // The credentials that you used to call this operation don't have the minimum @@ -1964,7 +2242,7 @@ func (c *ResourceExplorer2) UntagResourceRequest(input *UntagResourceInput) (req // // - ThrottlingException // The request failed because you exceeded a rate limit for this operation. -// For more information, see Quotas for Resource Explorer (https://docs.aws.amazon.com/arexug/mainline/quotas.html). +// For more information, see Quotas for Resource Explorer (https://docs.aws.amazon.com/resource-explorer/latest/userguide/quotas.html). 
// // - AccessDeniedException // The credentials that you used to call this operation don't have the minimum @@ -2101,16 +2379,25 @@ func (c *ResourceExplorer2) UpdateIndexTypeRequest(input *UpdateIndexTypeInput) // the syntax for the operation, and try again. // // - ConflictException -// The request failed because either you specified parameters that didn’t -// match the original request, or you attempted to create a view with a name -// that already exists in this Amazon Web Services Region. +// If you attempted to create a view, then the request failed because either +// you specified parameters that didn’t match the original request, or you +// attempted to create a view with a name that already exists in this Amazon +// Web Services Region. +// +// If you attempted to create an index, then the request failed because either +// you specified parameters that didn't match the original request, or an index +// already exists in the current Amazon Web Services Region. +// +// If you attempted to update an index type to AGGREGATOR, then the request +// failed because you already have an AGGREGATOR index in a different Amazon +// Web Services Region. // // - ServiceQuotaExceededException // The request failed because it exceeds a service quota. // // - ThrottlingException // The request failed because you exceeded a rate limit for this operation. -// For more information, see Quotas for Resource Explorer (https://docs.aws.amazon.com/arexug/mainline/quotas.html). +// For more information, see Quotas for Resource Explorer (https://docs.aws.amazon.com/resource-explorer/latest/userguide/quotas.html). // // - AccessDeniedException // The credentials that you used to call this operation don't have the minimum @@ -2209,7 +2496,7 @@ func (c *ResourceExplorer2) UpdateViewRequest(input *UpdateViewInput) (req *requ // // - ThrottlingException // The request failed because you exceeded a rate limit for this operation. -// For more information, see Quotas for Resource Explorer (https://docs.aws.amazon.com/arexug/mainline/quotas.html). +// For more information, see Quotas for Resource Explorer (https://docs.aws.amazon.com/resource-explorer/latest/userguide/quotas.html). // // - AccessDeniedException // The credentials that you used to call this operation don't have the minimum @@ -2522,9 +2809,18 @@ func (s *BatchGetViewOutput) SetViews(v []*View) *BatchGetViewOutput { return s } -// The request failed because either you specified parameters that didn’t -// match the original request, or you attempted to create a view with a name -// that already exists in this Amazon Web Services Region. +// If you attempted to create a view, then the request failed because either +// you specified parameters that didn’t match the original request, or you +// attempted to create a view with a name that already exists in this Amazon +// Web Services Region. +// +// If you attempted to create an index, then the request failed because either +// you specified parameters that didn't match the original request, or an index +// already exists in the current Amazon Web Services Region. +// +// If you attempted to update an index type to AGGREGATOR, then the request +// failed because you already have an AGGREGATOR index in a different Amazon +// Web Services Region. type ConflictException struct { _ struct{} `type:"structure"` RespMetadata protocol.ResponseMetadata `json:"-" xml:"-"` @@ -2594,13 +2890,17 @@ type CreateIndexInput struct { // This value helps ensure idempotency. 
Resource Explorer uses this value to // prevent the accidental creation of duplicate versions. We recommend that // you generate a UUID-type value (https://wikipedia.org/wiki/Universally_unique_identifier) - // to ensure the uniqueness of your views. + // to ensure the uniqueness of your index. ClientToken *string `type:"string" idempotencyToken:"true"` // The specified tags are attached only to the index created in this Amazon // Web Services Region. The tags aren't attached to any of the resources listed // in the index. - Tags map[string]*string `type:"map"` + // + // Tags is a sensitive parameter and its value will be + // replaced with "sensitive" in string returned by CreateIndexInput's + // String and GoString methods. + Tags map[string]*string `type:"map" sensitive:"true"` } // String returns the string representation. @@ -2726,8 +3026,16 @@ type CreateViewInput struct { // The default is an empty list, with no optional fields included in the results. IncludedProperties []*IncludedProperty `type:"list"` + // The root ARN of the account, an organizational unit (OU), or an organization + // ARN. If left empty, the default is account. + Scope *string `min:"1" type:"string"` + // Tag key and value pairs that are attached to the view. - Tags map[string]*string `type:"map"` + // + // Tags is a sensitive parameter and its value will be + // replaced with "sensitive" in string returned by CreateViewInput's + // String and GoString methods. + Tags map[string]*string `type:"map" sensitive:"true"` // The name of the new view. This name appears in the list of views in Resource // Explorer. @@ -2764,6 +3072,9 @@ func (s *CreateViewInput) Validate() error { if s.ClientToken != nil && len(*s.ClientToken) < 1 { invalidParams.Add(request.NewErrParamMinLen("ClientToken", 1)) } + if s.Scope != nil && len(*s.Scope) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Scope", 1)) + } if s.ViewName == nil { invalidParams.Add(request.NewErrParamRequired("ViewName")) } @@ -2807,6 +3118,12 @@ func (s *CreateViewInput) SetIncludedProperties(v []*IncludedProperty) *CreateVi return s } +// SetScope sets the Scope field's value. +func (s *CreateViewInput) SetScope(v string) *CreateViewInput { + s.Scope = &v + return s +} + // SetTags sets the Tags field's value. func (s *CreateViewInput) SetTags(v map[string]*string) *CreateViewInput { s.Tags = v @@ -3075,6 +3392,59 @@ func (s DisassociateDefaultViewOutput) GoString() string { return s.String() } +type GetAccountLevelServiceConfigurationInput struct { + _ struct{} `type:"structure" nopayload:"true"` +} + +// String returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s GetAccountLevelServiceConfigurationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s GetAccountLevelServiceConfigurationInput) GoString() string { + return s.String() +} + +type GetAccountLevelServiceConfigurationOutput struct { + _ struct{} `type:"structure"` + + // Details about the organization, and whether configuration is ENABLED or DISABLED. 
+ OrgConfiguration *OrgConfiguration `type:"structure"` +} + +// String returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s GetAccountLevelServiceConfigurationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s GetAccountLevelServiceConfigurationOutput) GoString() string { + return s.String() +} + +// SetOrgConfiguration sets the OrgConfiguration field's value. +func (s *GetAccountLevelServiceConfigurationOutput) SetOrgConfiguration(v *OrgConfiguration) *GetAccountLevelServiceConfigurationOutput { + s.OrgConfiguration = v + return s +} + type GetDefaultViewInput struct { _ struct{} `type:"structure" nopayload:"true"` } @@ -3183,7 +3553,11 @@ type GetIndexOutput struct { State *string `type:"string" enum:"IndexState"` // Tag key and value pairs that are attached to the index. - Tags map[string]*string `type:"map"` + // + // Tags is a sensitive parameter and its value will be + // replaced with "sensitive" in string returned by GetIndexOutput's + // String and GoString methods. + Tags map[string]*string `type:"map" sensitive:"true"` // The type of the index in this Region. For information about the aggregator // index and how it differs from a local index, see Turning on cross-Region @@ -3311,7 +3685,11 @@ type GetViewOutput struct { _ struct{} `type:"structure"` // Tag key and value pairs that are attached to the view. - Tags map[string]*string `type:"map"` + // + // Tags is a sensitive parameter and its value will be + // replaced with "sensitive" in string returned by GetViewOutput's + // String and GoString methods. + Tags map[string]*string `type:"map" sensitive:"true"` // A structure that contains the details for the requested view. View *View `type:"structure"` @@ -3540,6 +3918,136 @@ func (s *InternalServerException) RequestID() string { return s.RespMetadata.RequestID } +type ListIndexesForMembersInput struct { + _ struct{} `type:"structure"` + + // The account IDs will limit the output to only indexes from these accounts. + // + // AccountIdList is a required field + AccountIdList []*string `min:"1" type:"list" required:"true"` + + // The maximum number of results that you want included on each page of the + // response. If you do not include this parameter, it defaults to a value appropriate + // to the operation. If additional items exist beyond those included in the + // current response, the NextToken response element is present and has a value + // (is not null). Include that value as the NextToken request parameter in the + // next call to the operation to get the next part of the results. + // + // An API operation can return fewer results than the maximum even when there + // are more results available. You should check NextToken after every operation + // to ensure that you receive all of the results. + MaxResults *int64 `min:"1" type:"integer"` + + // The parameter for receiving additional results if you receive a NextToken + // response in a previous request. A NextToken response indicates that more + // output is available. 
Set this parameter to the value of the previous call's + // NextToken response to indicate where the output should continue from. The + // pagination tokens expire after 24 hours. + NextToken *string `min:"1" type:"string"` +} + +// String returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s ListIndexesForMembersInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s ListIndexesForMembersInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListIndexesForMembersInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListIndexesForMembersInput"} + if s.AccountIdList == nil { + invalidParams.Add(request.NewErrParamRequired("AccountIdList")) + } + if s.AccountIdList != nil && len(s.AccountIdList) < 1 { + invalidParams.Add(request.NewErrParamMinLen("AccountIdList", 1)) + } + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + if s.NextToken != nil && len(*s.NextToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NextToken", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAccountIdList sets the AccountIdList field's value. +func (s *ListIndexesForMembersInput) SetAccountIdList(v []*string) *ListIndexesForMembersInput { + s.AccountIdList = v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *ListIndexesForMembersInput) SetMaxResults(v int64) *ListIndexesForMembersInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListIndexesForMembersInput) SetNextToken(v string) *ListIndexesForMembersInput { + s.NextToken = &v + return s +} + +type ListIndexesForMembersOutput struct { + _ struct{} `type:"structure"` + + // A structure that contains the details and status of each index. + Indexes []*MemberIndex `type:"list"` + + // If present, indicates that more output is available than is included in the + // current response. Use this value in the NextToken request parameter in a + // subsequent call to the operation to get the next part of the output. You + // should repeat this until the NextToken response element comes back as null. + // The pagination tokens expire after 24 hours. + NextToken *string `type:"string"` +} + +// String returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s ListIndexesForMembersOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s ListIndexesForMembersOutput) GoString() string { + return s.String() +} + +// SetIndexes sets the Indexes field's value. 
+func (s *ListIndexesForMembersOutput) SetIndexes(v []*MemberIndex) *ListIndexesForMembersOutput { + s.Indexes = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListIndexesForMembersOutput) SetNextToken(v string) *ListIndexesForMembersOutput { + s.NextToken = &v + return s +} + type ListIndexesInput struct { _ struct{} `type:"structure"` @@ -3558,7 +4066,8 @@ type ListIndexesInput struct { // The parameter for receiving additional results if you receive a NextToken // response in a previous request. A NextToken response indicates that more // output is available. Set this parameter to the value of the previous call's - // NextToken response to indicate where the output should continue from. + // NextToken response to indicate where the output should continue from. The + // pagination tokens expire after 24 hours. NextToken *string `min:"1" type:"string"` // If specified, limits the response to only information about the index in @@ -3640,6 +4149,7 @@ type ListIndexesOutput struct { // current response. Use this value in the NextToken request parameter in a // subsequent call to the operation to get the next part of the output. You // should repeat this until the NextToken response element comes back as null. + // The pagination tokens expire after 24 hours. NextToken *string `type:"string"` } @@ -3691,7 +4201,8 @@ type ListSupportedResourceTypesInput struct { // The parameter for receiving additional results if you receive a NextToken // response in a previous request. A NextToken response indicates that more // output is available. Set this parameter to the value of the previous call's - // NextToken response to indicate where the output should continue from. + // NextToken response to indicate where the output should continue from. The + // pagination tokens expire after 24 hours. NextToken *string `type:"string"` } @@ -3745,6 +4256,7 @@ type ListSupportedResourceTypesOutput struct { // current response. Use this value in the NextToken request parameter in a // subsequent call to the operation to get the next part of the output. You // should repeat this until the NextToken response element comes back as null. + // The pagination tokens expire after 24 hours. NextToken *string `type:"string"` // The list of resource types supported by Resource Explorer. @@ -3836,7 +4348,11 @@ type ListTagsForResourceOutput struct { // The tag key and value pairs that you want to attach to the specified view // or index. - Tags map[string]*string `type:"map"` + // + // Tags is a sensitive parameter and its value will be + // replaced with "sensitive" in string returned by ListTagsForResourceOutput's + // String and GoString methods. + Tags map[string]*string `type:"map" sensitive:"true"` } // String returns the string representation. @@ -3881,7 +4397,8 @@ type ListViewsInput struct { // The parameter for receiving additional results if you receive a NextToken // response in a previous request. A NextToken response indicates that more // output is available. Set this parameter to the value of the previous call's - // NextToken response to indicate where the output should continue from. + // NextToken response to indicate where the output should continue from. The + // pagination tokens expire after 24 hours. NextToken *string `type:"string"` } @@ -3935,6 +4452,7 @@ type ListViewsOutput struct { // current response. Use this value in the NextToken request parameter in a // subsequent call to the operation to get the next part of the output. 
You // should repeat this until the NextToken response element comes back as null. + // The pagination tokens expire after 24 hours. NextToken *string `type:"string"` // The list of views available in the Amazon Web Services Region in which you @@ -3972,6 +4490,124 @@ func (s *ListViewsOutput) SetViews(v []*string) *ListViewsOutput { return s } +// An index is the data store used by Amazon Web Services Resource Explorer +// to hold information about your Amazon Web Services resources that the service +// discovers. +type MemberIndex struct { + _ struct{} `type:"structure"` + + // The account ID for the index. + AccountId *string `type:"string"` + + // The Amazon resource name (ARN) (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) + // of the index. + Arn *string `type:"string"` + + // The Amazon Web Services Region in which the index exists. + Region *string `type:"string"` + + // The type of index. It can be one of the following values: + // + // * LOCAL – The index contains information about resources from only the + // same Amazon Web Services Region. + // + // * AGGREGATOR – Resource Explorer replicates copies of the indexed information + // about resources in all other Amazon Web Services Regions to the aggregator + // index. This lets search results in the Region with the aggregator index + // to include resources from all Regions in the account where Resource Explorer + // is turned on. + Type *string `type:"string" enum:"IndexType"` +} + +// String returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s MemberIndex) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s MemberIndex) GoString() string { + return s.String() +} + +// SetAccountId sets the AccountId field's value. +func (s *MemberIndex) SetAccountId(v string) *MemberIndex { + s.AccountId = &v + return s +} + +// SetArn sets the Arn field's value. +func (s *MemberIndex) SetArn(v string) *MemberIndex { + s.Arn = &v + return s +} + +// SetRegion sets the Region field's value. +func (s *MemberIndex) SetRegion(v string) *MemberIndex { + s.Region = &v + return s +} + +// SetType sets the Type field's value. +func (s *MemberIndex) SetType(v string) *MemberIndex { + s.Type = &v + return s +} + +// This is a structure that contains the status of Amazon Web Services service +// access, and whether you have a valid service-linked role to enable multi-account +// search for your organization. +type OrgConfiguration struct { + _ struct{} `type:"structure"` + + // This value displays whether your Amazon Web Services service access is ENABLED + // or DISABLED. + // + // AWSServiceAccessStatus is a required field + AWSServiceAccessStatus *string `type:"string" required:"true" enum:"AWSServiceAccessStatus"` + + // This value shows whether or not you have a valid a service-linked role required + // to start the multi-account search feature. + ServiceLinkedRole *string `type:"string"` +} + +// String returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. 
The member name will be present, but the +// value will be replaced with "sensitive". +func (s OrgConfiguration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s OrgConfiguration) GoString() string { + return s.String() +} + +// SetAWSServiceAccessStatus sets the AWSServiceAccessStatus field's value. +func (s *OrgConfiguration) SetAWSServiceAccessStatus(v string) *OrgConfiguration { + s.AWSServiceAccessStatus = &v + return s +} + +// SetServiceLinkedRole sets the ServiceLinkedRole field's value. +func (s *OrgConfiguration) SetServiceLinkedRole(v string) *OrgConfiguration { + s.ServiceLinkedRole = &v + return s +} + // A resource in Amazon Web Services that Amazon Web Services Resource Explorer // has discovered, and for which it has stored information in the index of the // Amazon Web Services Region that contains the resource. @@ -4296,7 +4932,8 @@ type SearchInput struct { // The parameter for receiving additional results if you receive a NextToken // response in a previous request. A NextToken response indicates that more // output is available. Set this parameter to the value of the previous call's - // NextToken response to indicate where the output should continue from. + // NextToken response to indicate where the output should continue from. The + // pagination tokens expire after 24 hours. NextToken *string `min:"1" type:"string"` // A string that includes keywords and filters that specify the resources that @@ -4399,6 +5036,7 @@ type SearchOutput struct { // current response. Use this value in the NextToken request parameter in a // subsequent call to the operation to get the next part of the output. You // should repeat this until the NextToken response element comes back as null. + // The pagination tokens expire after 24 hours. NextToken *string `min:"1" type:"string"` // The list of structures that describe the resources that match the query. @@ -4580,7 +5218,11 @@ type TagResourceInput struct { // A list of tag key and value pairs that you want to attach to the specified // view or index. - Tags map[string]*string `type:"map"` + // + // Tags is a sensitive parameter and its value will be + // replaced with "sensitive" in string returned by TagResourceInput's + // String and GoString methods. + Tags map[string]*string `type:"map" sensitive:"true"` } // String returns the string representation. @@ -4652,7 +5294,7 @@ func (s TagResourceOutput) GoString() string { } // The request failed because you exceeded a rate limit for this operation. -// For more information, see Quotas for Resource Explorer (https://docs.aws.amazon.com/arexug/mainline/quotas.html). +// For more information, see Quotas for Resource Explorer (https://docs.aws.amazon.com/resource-explorer/latest/userguide/quotas.html). type ThrottlingException struct { _ struct{} `type:"structure"` RespMetadata protocol.ResponseMetadata `json:"-" xml:"-"` @@ -4792,8 +5434,12 @@ type UntagResourceInput struct { // A list of the keys for the tags that you want to remove from the specified // view or index. // + // TagKeys is a sensitive parameter and its value will be + // replaced with "sensitive" in string returned by UntagResourceInput's + // String and GoString methods. 
+ // // TagKeys is a required field - TagKeys []*string `location:"querystring" locationName:"tagKeys" type:"list" required:"true"` + TagKeys []*string `location:"querystring" locationName:"tagKeys" type:"list" required:"true" sensitive:"true"` } // String returns the string representation. @@ -5335,6 +5981,22 @@ func (s *View) SetViewArn(v string) *View { return s } +const ( + // AWSServiceAccessStatusEnabled is a AWSServiceAccessStatus enum value + AWSServiceAccessStatusEnabled = "ENABLED" + + // AWSServiceAccessStatusDisabled is a AWSServiceAccessStatus enum value + AWSServiceAccessStatusDisabled = "DISABLED" +) + +// AWSServiceAccessStatus_Values returns all elements of the AWSServiceAccessStatus enum +func AWSServiceAccessStatus_Values() []string { + return []string{ + AWSServiceAccessStatusEnabled, + AWSServiceAccessStatusDisabled, + } +} + const ( // IndexStateCreating is a IndexState enum value IndexStateCreating = "CREATING" diff --git a/service/resourceexplorer2/errors.go b/service/resourceexplorer2/errors.go index 8d9818b8047..74e7c12b824 100644 --- a/service/resourceexplorer2/errors.go +++ b/service/resourceexplorer2/errors.go @@ -18,9 +18,18 @@ const ( // ErrCodeConflictException for service response error code // "ConflictException". // - // The request failed because either you specified parameters that didn’t - // match the original request, or you attempted to create a view with a name - // that already exists in this Amazon Web Services Region. + // If you attempted to create a view, then the request failed because either + // you specified parameters that didn’t match the original request, or you + // attempted to create a view with a name that already exists in this Amazon + // Web Services Region. + // + // If you attempted to create an index, then the request failed because either + // you specified parameters that didn't match the original request, or an index + // already exists in the current Amazon Web Services Region. + // + // If you attempted to update an index type to AGGREGATOR, then the request + // failed because you already have an AGGREGATOR index in a different Amazon + // Web Services Region. ErrCodeConflictException = "ConflictException" // ErrCodeInternalServerException for service response error code @@ -47,7 +56,7 @@ const ( // "ThrottlingException". // // The request failed because you exceeded a rate limit for this operation. - // For more information, see Quotas for Resource Explorer (https://docs.aws.amazon.com/arexug/mainline/quotas.html). + // For more information, see Quotas for Resource Explorer (https://docs.aws.amazon.com/resource-explorer/latest/userguide/quotas.html). 
ErrCodeThrottlingException = "ThrottlingException" // ErrCodeUnauthorizedException for service response error code diff --git a/service/resourceexplorer2/resourceexplorer2iface/interface.go b/service/resourceexplorer2/resourceexplorer2iface/interface.go index adc5dafeccd..3ca88166e9c 100644 --- a/service/resourceexplorer2/resourceexplorer2iface/interface.go +++ b/service/resourceexplorer2/resourceexplorer2iface/interface.go @@ -88,6 +88,10 @@ type ResourceExplorer2API interface { DisassociateDefaultViewWithContext(aws.Context, *resourceexplorer2.DisassociateDefaultViewInput, ...request.Option) (*resourceexplorer2.DisassociateDefaultViewOutput, error) DisassociateDefaultViewRequest(*resourceexplorer2.DisassociateDefaultViewInput) (*request.Request, *resourceexplorer2.DisassociateDefaultViewOutput) + GetAccountLevelServiceConfiguration(*resourceexplorer2.GetAccountLevelServiceConfigurationInput) (*resourceexplorer2.GetAccountLevelServiceConfigurationOutput, error) + GetAccountLevelServiceConfigurationWithContext(aws.Context, *resourceexplorer2.GetAccountLevelServiceConfigurationInput, ...request.Option) (*resourceexplorer2.GetAccountLevelServiceConfigurationOutput, error) + GetAccountLevelServiceConfigurationRequest(*resourceexplorer2.GetAccountLevelServiceConfigurationInput) (*request.Request, *resourceexplorer2.GetAccountLevelServiceConfigurationOutput) + GetDefaultView(*resourceexplorer2.GetDefaultViewInput) (*resourceexplorer2.GetDefaultViewOutput, error) GetDefaultViewWithContext(aws.Context, *resourceexplorer2.GetDefaultViewInput, ...request.Option) (*resourceexplorer2.GetDefaultViewOutput, error) GetDefaultViewRequest(*resourceexplorer2.GetDefaultViewInput) (*request.Request, *resourceexplorer2.GetDefaultViewOutput) @@ -107,6 +111,13 @@ type ResourceExplorer2API interface { ListIndexesPages(*resourceexplorer2.ListIndexesInput, func(*resourceexplorer2.ListIndexesOutput, bool) bool) error ListIndexesPagesWithContext(aws.Context, *resourceexplorer2.ListIndexesInput, func(*resourceexplorer2.ListIndexesOutput, bool) bool, ...request.Option) error + ListIndexesForMembers(*resourceexplorer2.ListIndexesForMembersInput) (*resourceexplorer2.ListIndexesForMembersOutput, error) + ListIndexesForMembersWithContext(aws.Context, *resourceexplorer2.ListIndexesForMembersInput, ...request.Option) (*resourceexplorer2.ListIndexesForMembersOutput, error) + ListIndexesForMembersRequest(*resourceexplorer2.ListIndexesForMembersInput) (*request.Request, *resourceexplorer2.ListIndexesForMembersOutput) + + ListIndexesForMembersPages(*resourceexplorer2.ListIndexesForMembersInput, func(*resourceexplorer2.ListIndexesForMembersOutput, bool) bool) error + ListIndexesForMembersPagesWithContext(aws.Context, *resourceexplorer2.ListIndexesForMembersInput, func(*resourceexplorer2.ListIndexesForMembersOutput, bool) bool, ...request.Option) error + ListSupportedResourceTypes(*resourceexplorer2.ListSupportedResourceTypesInput) (*resourceexplorer2.ListSupportedResourceTypesOutput, error) ListSupportedResourceTypesWithContext(aws.Context, *resourceexplorer2.ListSupportedResourceTypesInput, ...request.Option) (*resourceexplorer2.ListSupportedResourceTypesOutput, error) ListSupportedResourceTypesRequest(*resourceexplorer2.ListSupportedResourceTypesInput) (*request.Request, *resourceexplorer2.ListSupportedResourceTypesOutput) diff --git a/service/sagemaker/api.go b/service/sagemaker/api.go index 7c4c02d6486..05969a363cc 100644 --- a/service/sagemaker/api.go +++ b/service/sagemaker/api.go @@ -32757,7 +32757,7 @@ type 
AutoMLJobObjective struct { // cross-entropy loss. After fine-tuning a language model, you can evaluate // the quality of its generated text using different metrics. For a list // of the available metrics, see Metrics for fine-tuning LLMs in Autopilot - // (https://docs.aws.amazon.com/sagemaker/latest/dg/llms-finetuning-models.html). + // (https://docs.aws.amazon.com/sagemaker/latest/dg/autopilot-llms-finetuning-metrics.html). // // MetricName is a required field MetricName *string `type:"string" required:"true" enum:"AutoMLMetricEnum"` @@ -38217,7 +38217,7 @@ type CreateAutoMLJobV2Input struct { // cross-entropy loss. After fine-tuning a language model, you can evaluate // the quality of its generated text using different metrics. For a list // of the available metrics, see Metrics for fine-tuning LLMs in Autopilot - // (https://docs.aws.amazon.com/sagemaker/latest/dg/llms-finetuning-models.html). + // (https://docs.aws.amazon.com/sagemaker/latest/dg/autopilot-llms-finetuning-metrics.html). AutoMLJobObjective *AutoMLJobObjective `type:"structure"` // Defines the configuration settings of one of the supported problem types. @@ -75316,9 +75316,7 @@ type InferenceSpecification struct { Containers []*ModelPackageContainerDefinition `min:"1" type:"list" required:"true"` // The supported MIME types for the input data. - // - // SupportedContentTypes is a required field - SupportedContentTypes []*string `type:"list" required:"true"` + SupportedContentTypes []*string `type:"list"` // A list of the instance types that are used to generate inferences in real-time. // @@ -75327,9 +75325,7 @@ type InferenceSpecification struct { SupportedRealtimeInferenceInstanceTypes []*string `type:"list" enum:"ProductionVariantInstanceType"` // The supported MIME types for the output data. - // - // SupportedResponseMIMETypes is a required field - SupportedResponseMIMETypes []*string `type:"list" required:"true"` + SupportedResponseMIMETypes []*string `type:"list"` // A list of the instance types on which a transformation job can be run or // on which an endpoint can be deployed. @@ -75366,12 +75362,6 @@ func (s *InferenceSpecification) Validate() error { if s.Containers != nil && len(s.Containers) < 1 { invalidParams.Add(request.NewErrParamMinLen("Containers", 1)) } - if s.SupportedContentTypes == nil { - invalidParams.Add(request.NewErrParamRequired("SupportedContentTypes")) - } - if s.SupportedResponseMIMETypes == nil { - invalidParams.Add(request.NewErrParamRequired("SupportedResponseMIMETypes")) - } if s.SupportedTransformInstanceTypes != nil && len(s.SupportedTransformInstanceTypes) < 1 { invalidParams.Add(request.NewErrParamMinLen("SupportedTransformInstanceTypes", 1)) } @@ -108851,7 +108841,7 @@ type TextGenerationJobConfig struct { // The name of the base model to fine-tune. Autopilot supports fine-tuning a // variety of large language models. For information on the list of supported - // models, see Text generation models supporting fine-tuning in Autopilot (https://docs.aws.amazon.com/sagemaker/src/AWSIronmanApiDoc/build/server-root/sagemaker/latest/dg/llms-finetuning-models.html#llms-finetuning-supported-llms). + // models, see Text generation models supporting fine-tuning in Autopilot (https://docs.aws.amazon.com/sagemaker/latest/dg/autopilot-llms-finetuning-models.html#autopilot-llms-finetuning-supported-llms). // If no BaseModelName is provided, the default model used is Falcon-7B-Instruct. 
BaseModelName *string `min:"1" type:"string"` diff --git a/service/sfn/api.go b/service/sfn/api.go index db3c239740a..812622095a8 100644 --- a/service/sfn/api.go +++ b/service/sfn/api.go @@ -515,8 +515,10 @@ func (c *SFN) DeleteStateMachineRequest(input *DeleteStateMachineInput) (req *re // DeleteStateMachine API operation for AWS Step Functions. // -// Deletes a state machine. This is an asynchronous operation: It sets the state -// machine's status to DELETING and begins the deletion process. +// Deletes a state machine. This is an asynchronous operation. It sets the state +// machine's status to DELETING and begins the deletion process. A state machine +// is deleted only when all its executions are completed. On the next state +// transition, the state machine's executions are terminated. // // A qualified state machine ARN can either refer to a Distributed Map state // defined within a state machine, a version ARN, or an alias ARN. @@ -922,8 +924,11 @@ func (c *SFN) DescribeExecutionRequest(input *DescribeExecutionInput) (req *requ // // Provides information about a state machine execution, such as the state machine // associated with the execution, the execution input and output, and relevant -// execution metadata. Use this API action to return the Map Run Amazon Resource -// Name (ARN) if the execution was dispatched by a Map Run. +// execution metadata. If you've redriven (https://docs.aws.amazon.com/step-functions/latest/dg/redrive-executions.html) +// an execution, you can use this API action to return information about the +// redrives of that execution. In addition, you can use this API action to return +// the Map Run Amazon Resource Name (ARN) if the execution was dispatched by +// a Map Run. // // If you specify a version or alias ARN when you call the StartExecution API // action, DescribeExecution returns that ARN. @@ -931,7 +936,7 @@ func (c *SFN) DescribeExecutionRequest(input *DescribeExecutionInput) (req *requ // This operation is eventually consistent. The results are best effort and // may not reflect very recent updates and changes. // -// Executions of an EXPRESS state machinearen't supported by DescribeExecution +// Executions of an EXPRESS state machine aren't supported by DescribeExecution // unless a Map Run dispatched them. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -1015,7 +1020,9 @@ func (c *SFN) DescribeMapRunRequest(input *DescribeMapRunInput) (req *request.Re // DescribeMapRun API operation for AWS Step Functions. // // Provides information about a Map Run's configuration, progress, and results. -// For more information, see Examining Map Run (https://docs.aws.amazon.com/step-functions/latest/dg/concepts-examine-map-run.html) +// If you've redriven (https://docs.aws.amazon.com/step-functions/latest/dg/redrive-map-run.html) +// a Map Run, this API action also returns information about the redrives of +// that Map Run. For more information, see Examining Map Run (https://docs.aws.amazon.com/step-functions/latest/dg/concepts-examine-map-run.html) // in the Step Functions Developer Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -1799,7 +1806,9 @@ func (c *SFN) ListExecutionsRequest(input *ListExecutionsInput) (req *request.Re // // Lists all executions of a state machine or a Map Run. You can list all executions // related to a state machine by specifying a state machine Amazon Resource -// Name (ARN), or those related to a Map Run by specifying a Map Run ARN. 
+// Name (ARN), or those related to a Map Run by specifying a Map Run ARN. Using +// this API action, you can also list all redriven (https://docs.aws.amazon.com/step-functions/latest/dg/redrive-executions.html) +// executions. // // You can also provide a state machine alias (https://docs.aws.amazon.com/step-functions/latest/dg/concepts-state-machine-alias.html) // ARN or version (https://docs.aws.amazon.com/step-functions/latest/dg/concepts-state-machine-version.html) @@ -2625,6 +2634,139 @@ func (c *SFN) PublishStateMachineVersionWithContext(ctx aws.Context, input *Publ return out, req.Send() } +const opRedriveExecution = "RedriveExecution" + +// RedriveExecutionRequest generates a "aws/request.Request" representing the +// client's request for the RedriveExecution operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See RedriveExecution for more information on using the RedriveExecution +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// // Example sending a request using the RedriveExecutionRequest method. +// req, resp := client.RedriveExecutionRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/states-2016-11-23/RedriveExecution +func (c *SFN) RedriveExecutionRequest(input *RedriveExecutionInput) (req *request.Request, output *RedriveExecutionOutput) { + op := &request.Operation{ + Name: opRedriveExecution, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &RedriveExecutionInput{} + } + + output = &RedriveExecutionOutput{} + req = c.newRequest(op, input, output) + return +} + +// RedriveExecution API operation for AWS Step Functions. +// +// Restarts unsuccessful executions of Standard workflows that didn't complete +// successfully in the last 14 days. These include failed, aborted, or timed +// out executions. When you redrive (https://docs.aws.amazon.com/step-functions/latest/dg/redrive-executions.html) +// an execution, it continues the failed execution from the unsuccessful step +// and uses the same input. Step Functions preserves the results and execution +// history of the successful steps, and doesn't rerun these steps when you redrive +// an execution. Redriven executions use the same state machine definition and +// execution ARN as the original execution attempt. +// +// For workflows that include an Inline Map (https://docs.aws.amazon.com/step-functions/latest/dg/amazon-states-language-map-state.html) +// or Parallel (https://docs.aws.amazon.com/step-functions/latest/dg/amazon-states-language-parallel-state.html) +// state, RedriveExecution API action reschedules and redrives only the iterations +// and branches that failed or aborted. +// +// To redrive a workflow that includes a Distributed Map state with failed child +// workflow executions, you must redrive the parent workflow (https://docs.aws.amazon.com/step-functions/latest/dg/use-dist-map-orchestrate-large-scale-parallel-workloads.html#dist-map-orchestrate-parallel-workloads-key-terms). 
+// The parent workflow redrives all the unsuccessful states, including Distributed +// Map. +// +// This API action is not supported by EXPRESS state machines. +// +// However, you can restart the unsuccessful executions of Express child workflows +// in a Distributed Map by redriving its Map Run. When you redrive a Map Run, +// the Express child workflows are rerun using the StartExecution API action. +// For more information, see Redriving Map Runs (https://docs.aws.amazon.com/step-functions/latest/dg/redrive-map-run.html). +// +// You can redrive executions if your original execution meets the following +// conditions: +// +// - The execution status isn't SUCCEEDED. +// +// - Your workflow execution has not exceeded the redrivable period of 14 +// days. Redrivable period refers to the time during which you can redrive +// a given execution. This period starts from the day a state machine completes +// its execution. +// +// - The workflow execution has not exceeded the maximum open time of one +// year. For more information about state machine quotas, see Quotas related +// to state machine executions (https://docs.aws.amazon.com/step-functions/latest/dg/limits-overview.html#service-limits-state-machine-executions). +// +// - The execution event history count is less than 24,999. Redriven executions +// append their event history to the existing event history. Make sure your +// workflow execution contains less than 24,999 events to accommodate the +// ExecutionRedriven history event and at least one other history event. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Step Functions's +// API operation RedriveExecution for usage and error information. +// +// Returned Error Types: +// +// - ExecutionDoesNotExist +// The specified execution does not exist. +// +// - ExecutionNotRedrivable +// The execution Amazon Resource Name (ARN) that you specified for executionArn +// cannot be redriven. +// +// - ExecutionLimitExceeded +// The maximum number of running executions has been reached. Running executions +// must end or be stopped before a new execution can be started. +// +// - InvalidArn +// The provided Amazon Resource Name (ARN) is not valid. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/states-2016-11-23/RedriveExecution +func (c *SFN) RedriveExecution(input *RedriveExecutionInput) (*RedriveExecutionOutput, error) { + req, out := c.RedriveExecutionRequest(input) + return out, req.Send() +} + +// RedriveExecutionWithContext is the same as RedriveExecution with the addition of +// the ability to pass a context and additional request options. +// +// See RedriveExecution for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SFN) RedriveExecutionWithContext(ctx aws.Context, input *RedriveExecutionInput, opts ...request.Option) (*RedriveExecutionOutput, error) { + req, out := c.RedriveExecutionRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + const opSendTaskFailure = "SendTaskFailure" // SendTaskFailureRequest generates a "aws/request.Request" representing the @@ -2669,7 +2811,8 @@ func (c *SFN) SendTaskFailureRequest(input *SendTaskFailureInput) (req *request. // SendTaskFailure API operation for AWS Step Functions. // -// Used by activity workers and task states using the callback (https://docs.aws.amazon.com/step-functions/latest/dg/connect-to-resource.html#connect-wait-token) +// Used by activity workers, Task states using the callback (https://docs.aws.amazon.com/step-functions/latest/dg/connect-to-resource.html#connect-wait-token) +// pattern, and optionally Task states using the job run (https://docs.aws.amazon.com/step-functions/latest/dg/connect-to-resource.html#connect-sync) // pattern to report that the task identified by the taskToken failed. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -2682,11 +2825,14 @@ func (c *SFN) SendTaskFailureRequest(input *SendTaskFailureInput) (req *request. // Returned Error Types: // // - TaskDoesNotExist +// The activity does not exist. // // - InvalidToken // The provided token is not valid. // // - TaskTimedOut +// The task token has either expired or the task associated with the token has +// already been closed. // // See also, https://docs.aws.amazon.com/goto/WebAPI/states-2016-11-23/SendTaskFailure func (c *SFN) SendTaskFailure(input *SendTaskFailureInput) (*SendTaskFailureOutput, error) { @@ -2754,14 +2900,15 @@ func (c *SFN) SendTaskHeartbeatRequest(input *SendTaskHeartbeatInput) (req *requ // SendTaskHeartbeat API operation for AWS Step Functions. // -// Used by activity workers and task states using the callback (https://docs.aws.amazon.com/step-functions/latest/dg/connect-to-resource.html#connect-wait-token) +// Used by activity workers and Task states using the callback (https://docs.aws.amazon.com/step-functions/latest/dg/connect-to-resource.html#connect-wait-token) +// pattern, and optionally Task states using the job run (https://docs.aws.amazon.com/step-functions/latest/dg/connect-to-resource.html#connect-sync) // pattern to report to Step Functions that the task represented by the specified // taskToken is still making progress. This action resets the Heartbeat clock. // The Heartbeat threshold is specified in the state machine's Amazon States // Language definition (HeartbeatSeconds). This action does not in itself create // an event in the execution history. However, if the task times out, the execution // history contains an ActivityTimedOut entry for activities, or a TaskTimedOut -// entry for for tasks using the job run (https://docs.aws.amazon.com/step-functions/latest/dg/connect-to-resource.html#connect-sync) +// entry for tasks using the job run (https://docs.aws.amazon.com/step-functions/latest/dg/connect-to-resource.html#connect-sync) // or callback (https://docs.aws.amazon.com/step-functions/latest/dg/connect-to-resource.html#connect-wait-token) // pattern. // @@ -2780,11 +2927,14 @@ func (c *SFN) SendTaskHeartbeatRequest(input *SendTaskHeartbeatInput) (req *requ // Returned Error Types: // // - TaskDoesNotExist +// The activity does not exist. // // - InvalidToken // The provided token is not valid. // // - TaskTimedOut +// The task token has either expired or the task associated with the token has +// already been closed. 
// // See also, https://docs.aws.amazon.com/goto/WebAPI/states-2016-11-23/SendTaskHeartbeat func (c *SFN) SendTaskHeartbeat(input *SendTaskHeartbeatInput) (*SendTaskHeartbeatOutput, error) { @@ -2852,7 +3002,8 @@ func (c *SFN) SendTaskSuccessRequest(input *SendTaskSuccessInput) (req *request. // SendTaskSuccess API operation for AWS Step Functions. // -// Used by activity workers and task states using the callback (https://docs.aws.amazon.com/step-functions/latest/dg/connect-to-resource.html#connect-wait-token) +// Used by activity workers, Task states using the callback (https://docs.aws.amazon.com/step-functions/latest/dg/connect-to-resource.html#connect-wait-token) +// pattern, and optionally Task states using the job run (https://docs.aws.amazon.com/step-functions/latest/dg/connect-to-resource.html#connect-sync) // pattern to report that the task identified by the taskToken completed successfully. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -2865,6 +3016,7 @@ func (c *SFN) SendTaskSuccessRequest(input *SendTaskSuccessInput) (req *request. // Returned Error Types: // // - TaskDoesNotExist +// The activity does not exist. // // - InvalidOutput // The provided JSON output data is not valid. @@ -2873,6 +3025,8 @@ func (c *SFN) SendTaskSuccessRequest(input *SendTaskSuccessInput) (req *request. // The provided token is not valid. // // - TaskTimedOut +// The task token has either expired or the task associated with the token has +// already been closed. // // See also, https://docs.aws.amazon.com/goto/WebAPI/states-2016-11-23/SendTaskSuccess func (c *SFN) SendTaskSuccess(input *SendTaskSuccessInput) (*SendTaskSuccessOutput, error) { @@ -5568,6 +5722,57 @@ type DescribeExecutionOutput struct { // Provides details about execution input or output. OutputDetails *CloudWatchEventsExecutionDataDetails `locationName:"outputDetails" type:"structure"` + // The number of times you've redriven an execution. If you have not yet redriven + // an execution, the redriveCount is 0. This count is not updated for redrives + // that failed to start or are pending to be redriven. + RedriveCount *int64 `locationName:"redriveCount" type:"integer"` + + // The date the execution was last redriven. If you have not yet redriven an + // execution, the redriveDate is null. + // + // The redriveDate is unavailable if you redrive a Map Run that starts child + // workflow executions of type EXPRESS. + RedriveDate *time.Time `locationName:"redriveDate" type:"timestamp"` + + // Indicates whether or not an execution can be redriven at a given point in + // time. + // + // * For executions of type STANDARD, redriveStatus is NOT_REDRIVABLE if + // calling the RedriveExecution API action would return the ExecutionNotRedrivable + // error. + // + // * For a Distributed Map that includes child workflows of type STANDARD, + // redriveStatus indicates whether or not the Map Run can redrive child workflow + // executions. + // + // * For a Distributed Map that includes child workflows of type EXPRESS, + // redriveStatus indicates whether or not the Map Run can redrive child workflow + // executions. You can redrive failed or timed out EXPRESS workflows only + // if they're a part of a Map Run. When you redrive (https://docs.aws.amazon.com/step-functions/latest/dg/redrive-map-run.html) + // the Map Run, these workflows are restarted using the StartExecution API + // action. 
+ RedriveStatus *string `locationName:"redriveStatus" type:"string" enum:"ExecutionRedriveStatus"` + + // When redriveStatus is NOT_REDRIVABLE, redriveStatusReason specifies the reason + // why an execution cannot be redriven. + // + // * For executions of type STANDARD, or for a Distributed Map that includes + // child workflows of type STANDARD, redriveStatusReason can include one + // of the following reasons: State machine is in DELETING status. Execution + // is RUNNING and cannot be redriven. Execution is SUCCEEDED and cannot be + // redriven. Execution was started before the launch of RedriveExecution. + // Execution history event limit exceeded. Execution has exceeded the max + // execution time. Execution redrivable period exceeded. + // + // * For a Distributed Map that includes child workflows of type EXPRESS, + // redriveStatusReason is only returned if the child workflows are not redrivable. + // This happens when the child workflow executions have completed successfully. + // + // RedriveStatusReason is a sensitive parameter and its value will be + // replaced with "sensitive" in string returned by DescribeExecutionOutput's + // String and GoString methods. + RedriveStatusReason *string `locationName:"redriveStatusReason" type:"string" sensitive:"true"` + // The date the execution is started. // // StartDate is a required field @@ -5678,6 +5883,30 @@ func (s *DescribeExecutionOutput) SetOutputDetails(v *CloudWatchEventsExecutionD return s } +// SetRedriveCount sets the RedriveCount field's value. +func (s *DescribeExecutionOutput) SetRedriveCount(v int64) *DescribeExecutionOutput { + s.RedriveCount = &v + return s +} + +// SetRedriveDate sets the RedriveDate field's value. +func (s *DescribeExecutionOutput) SetRedriveDate(v time.Time) *DescribeExecutionOutput { + s.RedriveDate = &v + return s +} + +// SetRedriveStatus sets the RedriveStatus field's value. +func (s *DescribeExecutionOutput) SetRedriveStatus(v string) *DescribeExecutionOutput { + s.RedriveStatus = &v + return s +} + +// SetRedriveStatusReason sets the RedriveStatusReason field's value. +func (s *DescribeExecutionOutput) SetRedriveStatusReason(v string) *DescribeExecutionOutput { + s.RedriveStatusReason = &v + return s +} + // SetStartDate sets the StartDate field's value. func (s *DescribeExecutionOutput) SetStartDate(v time.Time) *DescribeExecutionOutput { s.StartDate = &v @@ -5802,6 +6031,15 @@ type DescribeMapRunOutput struct { // MaxConcurrency is a required field MaxConcurrency *int64 `locationName:"maxConcurrency" type:"integer" required:"true"` + // The number of times you've redriven a Map Run. If you have not yet redriven + // a Map Run, the redriveCount is 0. This count is not updated for redrives + // that failed to start or are pending to be redriven. + RedriveCount *int64 `locationName:"redriveCount" type:"integer"` + + // The date a Map Run was last redriven. If you have not yet redriven a Map + // Run, the redriveDate is null. + RedriveDate *time.Time `locationName:"redriveDate" type:"timestamp"` + // The date when the Map Run was started. // // StartDate is a required field @@ -5876,6 +6114,18 @@ func (s *DescribeMapRunOutput) SetMaxConcurrency(v int64) *DescribeMapRunOutput return s } +// SetRedriveCount sets the RedriveCount field's value. +func (s *DescribeMapRunOutput) SetRedriveCount(v int64) *DescribeMapRunOutput { + s.RedriveCount = &v + return s +} + +// SetRedriveDate sets the RedriveDate field's value. 
+func (s *DescribeMapRunOutput) SetRedriveDate(v time.Time) *DescribeMapRunOutput { + s.RedriveDate = &v + return s +} + // SetStartDate sets the StartDate field's value. func (s *DescribeMapRunOutput) SetStartDate(v time.Time) *DescribeMapRunOutput { s.StartDate = &v @@ -6792,6 +7042,14 @@ type ExecutionListItem struct { // Name is a required field Name *string `locationName:"name" min:"1" type:"string" required:"true"` + // The number of times you've redriven an execution. If you have not yet redriven + // an execution, the redriveCount is 0. This count is not updated for redrives + // that failed to start or are pending to be redriven. + RedriveCount *int64 `locationName:"redriveCount" type:"integer"` + + // The date the execution was last redriven. + RedriveDate *time.Time `locationName:"redriveDate" type:"timestamp"` + // The date the execution started. // // StartDate is a required field @@ -6870,6 +7128,18 @@ func (s *ExecutionListItem) SetName(v string) *ExecutionListItem { return s } +// SetRedriveCount sets the RedriveCount field's value. +func (s *ExecutionListItem) SetRedriveCount(v int64) *ExecutionListItem { + s.RedriveCount = &v + return s +} + +// SetRedriveDate sets the RedriveDate field's value. +func (s *ExecutionListItem) SetRedriveDate(v time.Time) *ExecutionListItem { + s.RedriveDate = &v + return s +} + // SetStartDate sets the StartDate field's value. func (s *ExecutionListItem) SetStartDate(v time.Time) *ExecutionListItem { s.StartDate = &v @@ -6906,6 +7176,105 @@ func (s *ExecutionListItem) SetStopDate(v time.Time) *ExecutionListItem { return s } +// The execution Amazon Resource Name (ARN) that you specified for executionArn +// cannot be redriven. +type ExecutionNotRedrivable struct { + _ struct{} `type:"structure"` + RespMetadata protocol.ResponseMetadata `json:"-" xml:"-"` + + Message_ *string `locationName:"message" type:"string"` +} + +// String returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s ExecutionNotRedrivable) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s ExecutionNotRedrivable) GoString() string { + return s.String() +} + +func newErrorExecutionNotRedrivable(v protocol.ResponseMetadata) error { + return &ExecutionNotRedrivable{ + RespMetadata: v, + } +} + +// Code returns the exception type name. +func (s *ExecutionNotRedrivable) Code() string { + return "ExecutionNotRedrivable" +} + +// Message returns the exception's message. +func (s *ExecutionNotRedrivable) Message() string { + if s.Message_ != nil { + return *s.Message_ + } + return "" +} + +// OrigErr always returns nil, satisfies awserr.Error interface. +func (s *ExecutionNotRedrivable) OrigErr() error { + return nil +} + +func (s *ExecutionNotRedrivable) Error() string { + return fmt.Sprintf("%s: %s", s.Code(), s.Message()) +} + +// Status code returns the HTTP status code for the request's response error. +func (s *ExecutionNotRedrivable) StatusCode() int { + return s.RespMetadata.StatusCode +} + +// RequestID returns the service's response RequestID for request. 
+func (s *ExecutionNotRedrivable) RequestID() string { + return s.RespMetadata.RequestID +} + +// Contains details about a redriven execution. +type ExecutionRedrivenEventDetails struct { + _ struct{} `type:"structure"` + + // The number of times you've redriven an execution. If you have not yet redriven + // an execution, the redriveCount is 0. This count is not updated for redrives + // that failed to start or are pending to be redriven. + RedriveCount *int64 `locationName:"redriveCount" type:"integer"` +} + +// String returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s ExecutionRedrivenEventDetails) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s ExecutionRedrivenEventDetails) GoString() string { + return s.String() +} + +// SetRedriveCount sets the RedriveCount field's value. +func (s *ExecutionRedrivenEventDetails) SetRedriveCount(v int64) *ExecutionRedrivenEventDetails { + s.RedriveCount = &v + return s +} + // Contains details about the start of the execution. type ExecutionStartedEventDetails struct { _ struct{} `type:"structure"` @@ -7361,6 +7730,9 @@ type HistoryEvent struct { // Contains details about an execution failure event. ExecutionFailedEventDetails *ExecutionFailedEventDetails `locationName:"executionFailedEventDetails" type:"structure"` + // Contains details about the redrive attempt of an execution. + ExecutionRedrivenEventDetails *ExecutionRedrivenEventDetails `locationName:"executionRedrivenEventDetails" type:"structure"` + // Contains details about the start of the execution. ExecutionStartedEventDetails *ExecutionStartedEventDetails `locationName:"executionStartedEventDetails" type:"structure"` @@ -7411,6 +7783,9 @@ type HistoryEvent struct { // Contains error and cause details about a Map Run that failed. MapRunFailedEventDetails *MapRunFailedEventDetails `locationName:"mapRunFailedEventDetails" type:"structure"` + // Contains details about the redrive attempt of a Map Run. + MapRunRedrivenEventDetails *MapRunRedrivenEventDetails `locationName:"mapRunRedrivenEventDetails" type:"structure"` + // Contains details, such as mapRunArn, and the start date and time of a Map // Run. mapRunArn is the Amazon Resource Name (ARN) of the Map Run that was // started. @@ -7529,6 +7904,12 @@ func (s *HistoryEvent) SetExecutionFailedEventDetails(v *ExecutionFailedEventDet return s } +// SetExecutionRedrivenEventDetails sets the ExecutionRedrivenEventDetails field's value. +func (s *HistoryEvent) SetExecutionRedrivenEventDetails(v *ExecutionRedrivenEventDetails) *HistoryEvent { + s.ExecutionRedrivenEventDetails = v + return s +} + // SetExecutionStartedEventDetails sets the ExecutionStartedEventDetails field's value. func (s *HistoryEvent) SetExecutionStartedEventDetails(v *ExecutionStartedEventDetails) *HistoryEvent { s.ExecutionStartedEventDetails = v @@ -7619,6 +8000,12 @@ func (s *HistoryEvent) SetMapRunFailedEventDetails(v *MapRunFailedEventDetails) return s } +// SetMapRunRedrivenEventDetails sets the MapRunRedrivenEventDetails field's value. 
+func (s *HistoryEvent) SetMapRunRedrivenEventDetails(v *MapRunRedrivenEventDetails) *HistoryEvent { + s.MapRunRedrivenEventDetails = v + return s +} + // SetMapRunStartedEventDetails sets the MapRunStartedEventDetails field's value. func (s *HistoryEvent) SetMapRunStartedEventDetails(v *MapRunStartedEventDetails) *HistoryEvent { s.MapRunStartedEventDetails = v @@ -8709,6 +9096,18 @@ type ListExecutionsInput struct { // pagination token will return an HTTP 400 InvalidToken error. NextToken *string `locationName:"nextToken" min:"1" type:"string"` + // Sets a filter to list executions based on whether or not they have been redriven. + // + // For a Distributed Map, redriveFilter sets a filter to list child workflow + // executions based on whether or not they have been redriven. + // + // If you do not provide a redriveFilter, Step Functions returns a list of both + // redriven and non-redriven executions. + // + // If you provide a state machine ARN in redriveFilter, the API returns a validation + // exception. + RedriveFilter *string `locationName:"redriveFilter" type:"string" enum:"ExecutionRedriveFilter"` + // The Amazon Resource Name (ARN) of the state machine whose executions is listed. // // You can specify either a mapRunArn or a stateMachineArn, but not both. @@ -8779,6 +9178,12 @@ func (s *ListExecutionsInput) SetNextToken(v string) *ListExecutionsInput { return s } +// SetRedriveFilter sets the RedriveFilter field's value. +func (s *ListExecutionsInput) SetRedriveFilter(v string) *ListExecutionsInput { + s.RedriveFilter = &v + return s +} + // SetStateMachineArn sets the StateMachineArn field's value. func (s *ListExecutionsInput) SetStateMachineArn(v string) *ListExecutionsInput { s.StateMachineArn = &v @@ -9584,12 +9989,24 @@ type MapRunExecutionCounts struct { // Failed is a required field Failed *int64 `locationName:"failed" type:"long" required:"true"` + // The number of FAILED, ABORTED, or TIMED_OUT child workflow executions that + // cannot be redriven because their execution status is terminal. For example, + // if your execution event history contains 25,000 entries, or the toleratedFailureCount + // or toleratedFailurePercentage for the Distributed Map has exceeded. + FailuresNotRedrivable *int64 `locationName:"failuresNotRedrivable" type:"long"` + // The total number of child workflow executions that were started by a Map // Run, but haven't started executing yet. // // Pending is a required field Pending *int64 `locationName:"pending" type:"long" required:"true"` + // The number of unsuccessful child workflow executions currently waiting to + // be redriven. The status of these child workflow executions could be FAILED, + // ABORTED, or TIMED_OUT in the original execution attempt or a previous redrive + // attempt. + PendingRedrive *int64 `locationName:"pendingRedrive" type:"long"` + // Returns the count of child workflow executions whose results were written // by ResultWriter. For more information, see ResultWriter (https://docs.aws.amazon.com/step-functions/latest/dg/input-output-resultwriter.html) // in the Step Functions Developer Guide. @@ -9652,12 +10069,24 @@ func (s *MapRunExecutionCounts) SetFailed(v int64) *MapRunExecutionCounts { return s } +// SetFailuresNotRedrivable sets the FailuresNotRedrivable field's value. +func (s *MapRunExecutionCounts) SetFailuresNotRedrivable(v int64) *MapRunExecutionCounts { + s.FailuresNotRedrivable = &v + return s +} + // SetPending sets the Pending field's value. 
func (s *MapRunExecutionCounts) SetPending(v int64) *MapRunExecutionCounts { s.Pending = &v return s } +// SetPendingRedrive sets the PendingRedrive field's value. +func (s *MapRunExecutionCounts) SetPendingRedrive(v int64) *MapRunExecutionCounts { + s.PendingRedrive = &v + return s +} + // SetResultsWritten sets the ResultsWritten field's value. func (s *MapRunExecutionCounts) SetResultsWritten(v int64) *MapRunExecutionCounts { s.ResultsWritten = &v @@ -9755,12 +10184,23 @@ type MapRunItemCounts struct { // Failed is a required field Failed *int64 `locationName:"failed" type:"long" required:"true"` + // The number of FAILED, ABORTED, or TIMED_OUT items in child workflow executions + // that cannot be redriven because the execution status of those child workflows + // is terminal. For example, if your execution event history contains 25,000 + // entries, or the toleratedFailureCount or toleratedFailurePercentage for the + // Distributed Map has exceeded. + FailuresNotRedrivable *int64 `locationName:"failuresNotRedrivable" type:"long"` + // The total number of items to process in child workflow executions that haven't // started running yet. // // Pending is a required field Pending *int64 `locationName:"pending" type:"long" required:"true"` + // The number of unsuccessful items in child workflow executions currently waiting + // to be redriven. + PendingRedrive *int64 `locationName:"pendingRedrive" type:"long"` + // Returns the count of items whose results were written by ResultWriter. For // more information, see ResultWriter (https://docs.aws.amazon.com/step-functions/latest/dg/input-output-resultwriter.html) // in the Step Functions Developer Guide. @@ -9823,12 +10263,24 @@ func (s *MapRunItemCounts) SetFailed(v int64) *MapRunItemCounts { return s } +// SetFailuresNotRedrivable sets the FailuresNotRedrivable field's value. +func (s *MapRunItemCounts) SetFailuresNotRedrivable(v int64) *MapRunItemCounts { + s.FailuresNotRedrivable = &v + return s +} + // SetPending sets the Pending field's value. func (s *MapRunItemCounts) SetPending(v int64) *MapRunItemCounts { s.Pending = &v return s } +// SetPendingRedrive sets the PendingRedrive field's value. +func (s *MapRunItemCounts) SetPendingRedrive(v int64) *MapRunItemCounts { + s.PendingRedrive = &v + return s +} + // SetResultsWritten sets the ResultsWritten field's value. func (s *MapRunItemCounts) SetResultsWritten(v int64) *MapRunItemCounts { s.ResultsWritten = &v @@ -9935,6 +10387,49 @@ func (s *MapRunListItem) SetStopDate(v time.Time) *MapRunListItem { return s } +// Contains details about a Map Run that was redriven. +type MapRunRedrivenEventDetails struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of a Map Run that was redriven. + MapRunArn *string `locationName:"mapRunArn" min:"1" type:"string"` + + // The number of times the Map Run has been redriven at this point in the execution's + // history including this event. The redrive count for a redriven Map Run is + // always greater than 0. + RedriveCount *int64 `locationName:"redriveCount" type:"integer"` +} + +// String returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s MapRunRedrivenEventDetails) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation. 
+// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s MapRunRedrivenEventDetails) GoString() string { + return s.String() +} + +// SetMapRunArn sets the MapRunArn field's value. +func (s *MapRunRedrivenEventDetails) SetMapRunArn(v string) *MapRunRedrivenEventDetails { + s.MapRunArn = &v + return s +} + +// SetRedriveCount sets the RedriveCount field's value. +func (s *MapRunRedrivenEventDetails) SetRedriveCount(v int64) *MapRunRedrivenEventDetails { + s.RedriveCount = &v + return s +} + // Contains details about a Map Run that was started during a state machine // execution. type MapRunStartedEventDetails struct { @@ -10190,6 +10685,103 @@ func (s *PublishStateMachineVersionOutput) SetStateMachineVersionArn(v string) * return s } +type RedriveExecutionInput struct { + _ struct{} `type:"structure"` + + // A unique, case-sensitive identifier that you provide to ensure the idempotency + // of the request. If you don’t specify a client token, the Amazon Web Services + // SDK automatically generates a client token and uses it for the request to + // ensure idempotency. The API uses one of the last 10 client tokens provided. + ClientToken *string `locationName:"clientToken" min:"1" type:"string" idempotencyToken:"true"` + + // The Amazon Resource Name (ARN) of the execution to be redriven. + // + // ExecutionArn is a required field + ExecutionArn *string `locationName:"executionArn" min:"1" type:"string" required:"true"` +} + +// String returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s RedriveExecutionInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s RedriveExecutionInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *RedriveExecutionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RedriveExecutionInput"} + if s.ClientToken != nil && len(*s.ClientToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ClientToken", 1)) + } + if s.ExecutionArn == nil { + invalidParams.Add(request.NewErrParamRequired("ExecutionArn")) + } + if s.ExecutionArn != nil && len(*s.ExecutionArn) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ExecutionArn", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetClientToken sets the ClientToken field's value. +func (s *RedriveExecutionInput) SetClientToken(v string) *RedriveExecutionInput { + s.ClientToken = &v + return s +} + +// SetExecutionArn sets the ExecutionArn field's value. +func (s *RedriveExecutionInput) SetExecutionArn(v string) *RedriveExecutionInput { + s.ExecutionArn = &v + return s +} + +type RedriveExecutionOutput struct { + _ struct{} `type:"structure"` + + // The date the execution was last redriven. 
+ // + // RedriveDate is a required field + RedriveDate *time.Time `locationName:"redriveDate" type:"timestamp" required:"true"` +} + +// String returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s RedriveExecutionOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s RedriveExecutionOutput) GoString() string { + return s.String() +} + +// SetRedriveDate sets the RedriveDate field's value. +func (s *RedriveExecutionOutput) SetRedriveDate(v time.Time) *RedriveExecutionOutput { + s.RedriveDate = &v + return s +} + // Could not find the referenced resource. type ResourceNotFound struct { _ struct{} `type:"structure"` @@ -10272,8 +10864,8 @@ type RoutingConfigurationListItem struct { // StateMachineVersionArn is a required field StateMachineVersionArn *string `locationName:"stateMachineVersionArn" min:"1" type:"string" required:"true"` - // The percentage of traffic you want to route to the second state machine version. - // The sum of the weights in the routing configuration must be equal to 100. + // The percentage of traffic you want to route to a state machine version. The + // sum of the weights in the routing configuration must be equal to 100. // // Weight is a required field Weight *int64 `locationName:"weight" type:"integer" required:"true"` @@ -10681,6 +11273,9 @@ type StartExecutionInput struct { // see Limits Related to State Machine Executions (https://docs.aws.amazon.com/step-functions/latest/dg/limits.html#service-limits-state-machine-executions) // in the Step Functions Developer Guide. // + // If you don't provide a name for the execution, Step Functions automatically + // generates a universally unique identifier (UUID) as the execution name. + // // A name must not contain: // // * white space @@ -12029,6 +12624,7 @@ func (s *TaskCredentials) SetRoleArn(v string) *TaskCredentials { return s } +// The activity does not exist. type TaskDoesNotExist struct { _ struct{} `type:"structure"` RespMetadata protocol.ResponseMetadata `json:"-" xml:"-"` @@ -12586,6 +13182,8 @@ func (s *TaskSucceededEventDetails) SetResourceType(v string) *TaskSucceededEven return s } +// The task token has either expired or the task associated with the token has +// already been closed. 
type TaskTimedOut struct { _ struct{} `type:"structure"` RespMetadata protocol.ResponseMetadata `json:"-" xml:"-"` @@ -13375,6 +13973,42 @@ func (s *ValidationException) RequestID() string { return s.RespMetadata.RequestID } +const ( + // ExecutionRedriveFilterRedriven is a ExecutionRedriveFilter enum value + ExecutionRedriveFilterRedriven = "REDRIVEN" + + // ExecutionRedriveFilterNotRedriven is a ExecutionRedriveFilter enum value + ExecutionRedriveFilterNotRedriven = "NOT_REDRIVEN" +) + +// ExecutionRedriveFilter_Values returns all elements of the ExecutionRedriveFilter enum +func ExecutionRedriveFilter_Values() []string { + return []string{ + ExecutionRedriveFilterRedriven, + ExecutionRedriveFilterNotRedriven, + } +} + +const ( + // ExecutionRedriveStatusRedrivable is a ExecutionRedriveStatus enum value + ExecutionRedriveStatusRedrivable = "REDRIVABLE" + + // ExecutionRedriveStatusNotRedrivable is a ExecutionRedriveStatus enum value + ExecutionRedriveStatusNotRedrivable = "NOT_REDRIVABLE" + + // ExecutionRedriveStatusRedrivableByMapRun is a ExecutionRedriveStatus enum value + ExecutionRedriveStatusRedrivableByMapRun = "REDRIVABLE_BY_MAP_RUN" +) + +// ExecutionRedriveStatus_Values returns all elements of the ExecutionRedriveStatus enum +func ExecutionRedriveStatus_Values() []string { + return []string{ + ExecutionRedriveStatusRedrivable, + ExecutionRedriveStatusNotRedrivable, + ExecutionRedriveStatusRedrivableByMapRun, + } +} + const ( // ExecutionStatusRunning is a ExecutionStatus enum value ExecutionStatusRunning = "RUNNING" @@ -13390,6 +14024,9 @@ const ( // ExecutionStatusAborted is a ExecutionStatus enum value ExecutionStatusAborted = "ABORTED" + + // ExecutionStatusPendingRedrive is a ExecutionStatus enum value + ExecutionStatusPendingRedrive = "PENDING_REDRIVE" ) // ExecutionStatus_Values returns all elements of the ExecutionStatus enum @@ -13400,6 +14037,7 @@ func ExecutionStatus_Values() []string { ExecutionStatusFailed, ExecutionStatusTimedOut, ExecutionStatusAborted, + ExecutionStatusPendingRedrive, } } @@ -13580,6 +14218,12 @@ const ( // HistoryEventTypeMapRunSucceeded is a HistoryEventType enum value HistoryEventTypeMapRunSucceeded = "MapRunSucceeded" + + // HistoryEventTypeExecutionRedriven is a HistoryEventType enum value + HistoryEventTypeExecutionRedriven = "ExecutionRedriven" + + // HistoryEventTypeMapRunRedriven is a HistoryEventType enum value + HistoryEventTypeMapRunRedriven = "MapRunRedriven" ) // HistoryEventType_Values returns all elements of the HistoryEventType enum @@ -13644,6 +14288,8 @@ func HistoryEventType_Values() []string { HistoryEventTypeMapRunFailed, HistoryEventTypeMapRunStarted, HistoryEventTypeMapRunSucceeded, + HistoryEventTypeExecutionRedriven, + HistoryEventTypeMapRunRedriven, } } diff --git a/service/sfn/doc.go b/service/sfn/doc.go index e35772a1418..846e4f55e95 100644 --- a/service/sfn/doc.go +++ b/service/sfn/doc.go @@ -22,6 +22,11 @@ // SDKs, or an HTTP API. For more information about Step Functions, see the // Step Functions Developer Guide (https://docs.aws.amazon.com/step-functions/latest/dg/welcome.html) . // +// If you use the Step Functions API actions using Amazon Web Services SDK integrations, +// make sure the API actions are in camel case and parameter names are in Pascal +// case. For example, you could use Step Functions API action startSyncExecution +// and specify its parameter as StateMachineArn. +// // See https://docs.aws.amazon.com/goto/WebAPI/states-2016-11-23 for more information on this service. 
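As a usage illustration of the RedriveExecution operation, the ExecutionNotRedrivable error, and the redrive-related enums added in the hunks above, here is a minimal sketch. It is not part of the generated patch: the region, account ID, and execution ARN are placeholders, and credentials are assumed to come from the default chain.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/sfn"
)

func main() {
	// Placeholder region; credentials come from the default provider chain.
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := sfn.New(sess)

	out, err := svc.RedriveExecution(&sfn.RedriveExecutionInput{
		// ClientToken is optional; the SDK supplies an idempotency token
		// automatically when it is omitted.
		ExecutionArn: aws.String("arn:aws:states:us-east-1:123456789012:execution:MyStateMachine:my-execution"),
	})
	if err != nil {
		if aerr, ok := err.(awserr.Error); ok && aerr.Code() == sfn.ErrCodeExecutionNotRedrivable {
			log.Fatalf("execution cannot be redriven: %s", aerr.Message())
		}
		log.Fatalf("RedriveExecution failed: %v", err)
	}
	fmt.Println("execution redriven at", aws.TimeValue(out.RedriveDate))

	// The new enum helpers expose the values introduced in this release,
	// for example when inspecting or filtering redriven executions.
	fmt.Println(sfn.ExecutionRedriveFilter_Values()) // [REDRIVEN NOT_REDRIVEN]
	fmt.Println(sfn.ExecutionStatusPendingRedrive)   // PENDING_REDRIVE
}
```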
// // See sfn package documentation for more information. diff --git a/service/sfn/errors.go b/service/sfn/errors.go index 57e16852e20..bbe49f34b7f 100644 --- a/service/sfn/errors.go +++ b/service/sfn/errors.go @@ -59,6 +59,13 @@ const ( // must end or be stopped before a new execution can be started. ErrCodeExecutionLimitExceeded = "ExecutionLimitExceeded" + // ErrCodeExecutionNotRedrivable for service response error code + // "ExecutionNotRedrivable". + // + // The execution Amazon Resource Name (ARN) that you specified for executionArn + // cannot be redriven. + ErrCodeExecutionNotRedrivable = "ExecutionNotRedrivable" + // ErrCodeInvalidArn for service response error code // "InvalidArn". // @@ -159,10 +166,15 @@ const ( // ErrCodeTaskDoesNotExist for service response error code // "TaskDoesNotExist". + // + // The activity does not exist. ErrCodeTaskDoesNotExist = "TaskDoesNotExist" // ErrCodeTaskTimedOut for service response error code // "TaskTimedOut". + // + // The task token has either expired or the task associated with the token has + // already been closed. ErrCodeTaskTimedOut = "TaskTimedOut" // ErrCodeTooManyTags for service response error code @@ -189,6 +201,7 @@ var exceptionFromCode = map[string]func(protocol.ResponseMetadata) error{ "ExecutionAlreadyExists": newErrorExecutionAlreadyExists, "ExecutionDoesNotExist": newErrorExecutionDoesNotExist, "ExecutionLimitExceeded": newErrorExecutionLimitExceeded, + "ExecutionNotRedrivable": newErrorExecutionNotRedrivable, "InvalidArn": newErrorInvalidArn, "InvalidDefinition": newErrorInvalidDefinition, "InvalidExecutionInput": newErrorInvalidExecutionInput, diff --git a/service/sfn/sfniface/interface.go b/service/sfn/sfniface/interface.go index 72d4e4676c9..435bb886aa8 100644 --- a/service/sfn/sfniface/interface.go +++ b/service/sfn/sfniface/interface.go @@ -167,6 +167,10 @@ type SFNAPI interface { PublishStateMachineVersionWithContext(aws.Context, *sfn.PublishStateMachineVersionInput, ...request.Option) (*sfn.PublishStateMachineVersionOutput, error) PublishStateMachineVersionRequest(*sfn.PublishStateMachineVersionInput) (*request.Request, *sfn.PublishStateMachineVersionOutput) + RedriveExecution(*sfn.RedriveExecutionInput) (*sfn.RedriveExecutionOutput, error) + RedriveExecutionWithContext(aws.Context, *sfn.RedriveExecutionInput, ...request.Option) (*sfn.RedriveExecutionOutput, error) + RedriveExecutionRequest(*sfn.RedriveExecutionInput) (*request.Request, *sfn.RedriveExecutionOutput) + SendTaskFailure(*sfn.SendTaskFailureInput) (*sfn.SendTaskFailureOutput, error) SendTaskFailureWithContext(aws.Context, *sfn.SendTaskFailureInput, ...request.Option) (*sfn.SendTaskFailureOutput, error) SendTaskFailureRequest(*sfn.SendTaskFailureInput) (*request.Request, *sfn.SendTaskFailureOutput) diff --git a/service/signer/api.go b/service/signer/api.go index 15ac4bb33fb..7fbb327444b 100644 --- a/service/signer/api.go +++ b/service/signer/api.go @@ -713,11 +713,11 @@ func (c *Signer) ListSigningJobsRequest(input *ListSigningJobsInput) (req *reque // // Lists all your signing jobs. You can use the maxResults parameter to limit // the number of signing jobs that are returned in the response. If additional -// jobs remain to be listed, code signing returns a nextToken value. Use this +// jobs remain to be listed, AWS Signer returns a nextToken value. Use this // value in subsequent calls to ListSigningJobs to fetch the remaining values. 
// You can continue calling ListSigningJobs with your maxResults parameter and -// with new values that code signing returns in the nextToken parameter until -// all of your signing jobs have been returned. +// with new values that Signer returns in the nextToken parameter until all +// of your signing jobs have been returned. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -864,12 +864,12 @@ func (c *Signer) ListSigningPlatformsRequest(input *ListSigningPlatformsInput) ( // ListSigningPlatforms API operation for AWS Signer. // -// Lists all signing platforms available in code signing that match the request -// parameters. If additional jobs remain to be listed, code signing returns -// a nextToken value. Use this value in subsequent calls to ListSigningJobs -// to fetch the remaining values. You can continue calling ListSigningJobs with -// your maxResults parameter and with new values that code signing returns in -// the nextToken parameter until all of your signing jobs have been returned. +// Lists all signing platforms available in AWS Signer that match the request +// parameters. If additional jobs remain to be listed, Signer returns a nextToken +// value. Use this value in subsequent calls to ListSigningJobs to fetch the +// remaining values. You can continue calling ListSigningJobs with your maxResults +// parameter and with new values that Signer returns in the nextToken parameter +// until all of your signing jobs have been returned. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -1018,11 +1018,11 @@ func (c *Signer) ListSigningProfilesRequest(input *ListSigningProfilesInput) (re // // Lists all available signing profiles in your AWS account. Returns only profiles // with an ACTIVE status unless the includeCanceled request field is set to -// true. If additional jobs remain to be listed, code signing returns a nextToken +// true. If additional jobs remain to be listed, AWS Signer returns a nextToken // value. Use this value in subsequent calls to ListSigningJobs to fetch the // remaining values. You can continue calling ListSigningJobs with your maxResults -// parameter and with new values that code signing returns in the nextToken -// parameter until all of your signing jobs have been returned. +// parameter and with new values that Signer returns in the nextToken parameter +// until all of your signing jobs have been returned. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -1251,7 +1251,7 @@ func (c *Signer) PutSigningProfileRequest(input *PutSigningProfileInput) (req *r // PutSigningProfile API operation for AWS Signer. // -// Creates a signing profile. A signing profile is a code signing template that +// Creates a signing profile. A signing profile is a code-signing template that // can be used to carry out a pre-defined signing job. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -1735,14 +1735,14 @@ func (c *Signer) StartSigningJobRequest(input *StartSigningJobInput) (req *reque // // - Your S3 source bucket must be version enabled. // -// - You must create an S3 destination bucket. Code signing uses your S3 -// destination bucket to write your signed code. 
+// - You must create an S3 destination bucket. AWS Signer uses your S3 destination +// bucket to write your signed code. // // - You specify the name of the source and destination buckets when calling // the StartSigningJob operation. // // - You must also specify a request token that identifies your request to -// code signing. +// Signer. // // You can call the DescribeSigningJob and the ListSigningJobs actions after // you call StartSigningJob. @@ -2499,7 +2499,7 @@ type DescribeSigningJobOutput struct { // Thr expiration timestamp for the signature generated by the signing job. SignatureExpiresAt *time.Time `locationName:"signatureExpiresAt" type:"timestamp"` - // Name of the S3 bucket where the signed code image is saved by code signing. + // Name of the S3 bucket where the signed code image is saved by AWS Signer. SignedObject *SignedObject `locationName:"signedObject" type:"structure"` // The Amazon Resource Name (ARN) of your code signing certificate. @@ -2684,17 +2684,17 @@ func (s *Destination) SetS3(v *S3Destination) *Destination { return s } -// The encryption algorithm options that are available to a code signing job. +// The encryption algorithm options that are available to a code-signing job. type EncryptionAlgorithmOptions struct { _ struct{} `type:"structure"` - // The set of accepted encryption algorithms that are allowed in a code signing + // The set of accepted encryption algorithms that are allowed in a code-signing // job. // // AllowedValues is a required field AllowedValues []*string `locationName:"allowedValues" type:"list" required:"true" enum:"EncryptionAlgorithm"` - // The default encryption algorithm that is used by a code signing job. + // The default encryption algorithm that is used by a code-signing job. // // DefaultValue is a required field DefaultValue *string `locationName:"defaultValue" type:"string" required:"true" enum:"EncryptionAlgorithm"` @@ -2739,6 +2739,20 @@ type GetRevocationStatusInput struct { // by the parent CA) combined with a parent CA TBS hash (signed by the parent // CA’s CA). Root certificates are defined as their own CA. // + // The following example shows how to calculate a hash for this parameter using + // OpenSSL commands: + // + // openssl asn1parse -in childCert.pem -strparse 4 -out childCert.tbs + // + // openssl sha384 < childCert.tbs -binary > childCertTbsHash + // + // openssl asn1parse -in parentCert.pem -strparse 4 -out parentCert.tbs + // + // openssl sha384 < parentCert.tbs -binary > parentCertTbsHash xxd -p childCertTbsHash + // > certificateHash.hex xxd -p parentCertTbsHash >> certificateHash.hex + // + // cat certificateHash.hex | tr -d '\n' + // // CertificateHashes is a required field CertificateHashes []*string `location:"querystring" locationName:"certificateHashes" type:"list" required:"true"` @@ -2845,8 +2859,8 @@ func (s *GetRevocationStatusInput) SetSignatureTimestamp(v time.Time) *GetRevoca type GetRevocationStatusOutput struct { _ struct{} `type:"structure"` - // A list of revoked entities (including one or more of the signing profile - // ARN, signing job ID, and certificate hash) supplied as input to the API. + // A list of revoked entities (including zero or more of the signing profile + // ARN, signing job ARN, and certificate hashes) supplied as input to the API. 
RevokedEntities []*string `locationName:"revokedEntities" type:"list"` } @@ -3239,16 +3253,16 @@ func (s *GetSigningProfileOutput) SetTags(v map[string]*string) *GetSigningProfi return s } -// The hash algorithms that are available to a code signing job. +// The hash algorithms that are available to a code-signing job. type HashAlgorithmOptions struct { _ struct{} `type:"structure"` - // The set of accepted hash algorithms allowed in a code signing job. + // The set of accepted hash algorithms allowed in a code-signing job. // // AllowedValues is a required field AllowedValues []*string `locationName:"allowedValues" type:"list" required:"true" enum:"HashAlgorithm"` - // The default hash algorithm that is used in a code signing job. + // The default hash algorithm that is used in a code-signing job. // // DefaultValue is a required field DefaultValue *string `locationName:"defaultValue" type:"string" required:"true" enum:"HashAlgorithm"` @@ -4657,7 +4671,7 @@ func (s RevokeSigningProfileOutput) GoString() string { return s.String() } -// The name and prefix of the S3 bucket where code signing saves your signed +// The name and prefix of the Amazon S3 bucket where AWS Signer saves your signed // objects. type S3Destination struct { _ struct{} `type:"structure"` @@ -4665,8 +4679,8 @@ type S3Destination struct { // Name of the S3 bucket. BucketName *string `locationName:"bucketName" type:"string"` - // An Amazon S3 prefix that you can use to limit responses to those that begin - // with the specified prefix. + // An S3 prefix that you can use to limit responses to those that begin with + // the specified prefix. Prefix *string `locationName:"prefix" type:"string"` } @@ -4700,7 +4714,7 @@ func (s *S3Destination) SetPrefix(v string) *S3Destination { return s } -// The S3 bucket name and key where code signing saved your signed code image. +// The Amazon S3 bucket name and key where Signer saved your signed code image. type S3SignedObject struct { _ struct{} `type:"structure"` @@ -4741,7 +4755,7 @@ func (s *S3SignedObject) SetKey(v string) *S3SignedObject { return s } -// Information about the S3 bucket where you saved your unsigned code. +// Information about the Amazon S3 bucket where you saved your unsigned code. type S3Source struct { _ struct{} `type:"structure"` @@ -4891,7 +4905,7 @@ type SignPayloadInput struct { // Payload is a required field Payload []byte `locationName:"payload" min:"1" type:"blob" required:"true"` - // Payload content type + // Payload content type. The single valid type is application/vnd.cncf.notary.payload.v1+json. // // PayloadFormat is a required field PayloadFormat *string `locationName:"payloadFormat" type:"string" required:"true"` @@ -4984,9 +4998,7 @@ type SignPayloadOutput struct { // The AWS account ID of the job owner. JobOwner *string `locationName:"jobOwner" min:"12" type:"string"` - // Information including the signing profile ARN and the signing job ID. Clients - // use metadata to signature records, for example, as annotations added to the - // signature manifest inside an OCI registry. + // Information including the signing profile ARN and the signing job ID. Metadata map[string]*string `locationName:"metadata" type:"map"` // A cryptographic signature. @@ -5110,16 +5122,16 @@ func (s *SignedObject) SetS3(v *S3SignedObject) *SignedObject { return s } -// The configuration of a code signing operation. +// The configuration of a signing operation. 
type SigningConfiguration struct { _ struct{} `type:"structure"` - // The encryption algorithm options that are available for a code signing job. + // The encryption algorithm options that are available for a code-signing job. // // EncryptionAlgorithmOptions is a required field EncryptionAlgorithmOptions *EncryptionAlgorithmOptions `locationName:"encryptionAlgorithmOptions" type:"structure" required:"true"` - // The hash algorithm options that are available for a code signing job. + // The hash algorithm options that are available for a code-signing job. // // HashAlgorithmOptions is a required field HashAlgorithmOptions *HashAlgorithmOptions `locationName:"hashAlgorithmOptions" type:"structure" required:"true"` @@ -5161,11 +5173,11 @@ type SigningConfigurationOverrides struct { _ struct{} `type:"structure"` // A specified override of the default encryption algorithm that is used in - // a code signing job. + // a code-signing job. EncryptionAlgorithm *string `locationName:"encryptionAlgorithm" type:"string" enum:"EncryptionAlgorithm"` - // A specified override of the default hash algorithm that is used in a code - // signing job. + // A specified override of the default hash algorithm that is used in a code-signing + // job. HashAlgorithm *string `locationName:"hashAlgorithm" type:"string" enum:"HashAlgorithm"` } @@ -5199,16 +5211,16 @@ func (s *SigningConfigurationOverrides) SetHashAlgorithm(v string) *SigningConfi return s } -// The image format of a code signing platform or profile. +// The image format of a AWS Signer platform or profile. type SigningImageFormat struct { _ struct{} `type:"structure"` - // The default format of a code signing image. + // The default format of a signing image. // // DefaultFormat is a required field DefaultFormat *string `locationName:"defaultFormat" type:"string" required:"true" enum:"ImageFormat"` - // The supported formats of a code signing image. + // The supported formats of a signing image. // // SupportedFormats is a required field SupportedFormats []*string `locationName:"supportedFormats" type:"list" required:"true" enum:"ImageFormat"` @@ -5494,36 +5506,36 @@ func (s *SigningMaterial) SetCertificateArn(v string) *SigningMaterial { } // Contains information about the signing configurations and parameters that -// are used to perform a code signing job. +// are used to perform a code-signing job. type SigningPlatform struct { _ struct{} `type:"structure"` - // The category of a code signing platform. + // The category of a signing platform. Category *string `locationName:"category" type:"string" enum:"Category"` - // The display name of a code signing platform. + // The display name of a signing platform. DisplayName *string `locationName:"displayName" type:"string"` - // The maximum size (in MB) of code that can be signed by a code signing platform. + // The maximum size (in MB) of code that can be signed by a signing platform. MaxSizeInMB *int64 `locationName:"maxSizeInMB" type:"integer"` - // Any partner entities linked to a code signing platform. + // Any partner entities linked to a signing platform. Partner *string `locationName:"partner" type:"string"` - // The ID of a code signing platform. + // The ID of a signing platform. PlatformId *string `locationName:"platformId" type:"string"` // Indicates whether revocation is supported for the platform. RevocationSupported *bool `locationName:"revocationSupported" type:"boolean"` - // The configuration of a code signing platform. 
This includes the designated - // hash algorithm and encryption algorithm of a signing platform. + // The configuration of a signing platform. This includes the designated hash + // algorithm and encryption algorithm of a signing platform. SigningConfiguration *SigningConfiguration `locationName:"signingConfiguration" type:"structure"` - // The image format of a code signing platform or profile. + // The image format of a AWS Signer platform or profile. SigningImageFormat *SigningImageFormat `locationName:"signingImageFormat" type:"structure"` - // The types of targets that can be signed by a code signing platform. + // The types of targets that can be signed by a signing platform. Target *string `locationName:"target" type:"string"` } @@ -5599,7 +5611,7 @@ func (s *SigningPlatform) SetTarget(v string) *SigningPlatform { return s } -// Any overrides that are applied to the signing configuration of a code signing +// Any overrides that are applied to the signing configuration of a signing // platform. type SigningPlatformOverrides struct { _ struct{} `type:"structure"` @@ -5646,7 +5658,7 @@ func (s *SigningPlatformOverrides) SetSigningImageFormat(v string) *SigningPlatf return s } -// Contains information about the ACM certificates and code signing configuration +// Contains information about the ACM certificates and signing configuration // parameters that can be used by a given code signing user. type SigningProfile struct { _ struct{} `type:"structure"` @@ -5675,10 +5687,10 @@ type SigningProfile struct { // The ACM certificate that is available for use by a signing profile. SigningMaterial *SigningMaterial `locationName:"signingMaterial" type:"structure"` - // The parameters that are available for use by a code signing user. + // The parameters that are available for use by a Signer user. SigningParameters map[string]*string `locationName:"signingParameters" type:"map"` - // The status of a code signing profile. + // The status of a signing profile. Status *string `locationName:"status" type:"string" enum:"SigningProfileStatus"` // A list of tags associated with the signing profile. diff --git a/service/signer/doc.go b/service/signer/doc.go index c92406893ad..6d327e5b293 100644 --- a/service/signer/doc.go +++ b/service/signer/doc.go @@ -3,10 +3,10 @@ // Package signer provides the client and types for making API // requests to AWS Signer. // -// AWS Signer is a fully managed code signing service to help you ensure the +// AWS Signer is a fully managed code-signing service to help you ensure the // trust and integrity of your code. // -// AWS Signer supports the following applications: +// Signer supports the following applications: // // With code signing for AWS Lambda, you can sign AWS Lambda (http://docs.aws.amazon.com/lambda/latest/dg/) // deployment packages. Integrated support is provided for Amazon S3 (http://docs.aws.amazon.com/AmazonS3/latest/gsg/), @@ -19,14 +19,17 @@ // by AWS. IoT code signing is available for Amazon FreeRTOS (http://docs.aws.amazon.com/freertos/latest/userguide/) // and AWS IoT Device Management (http://docs.aws.amazon.com/iot/latest/developerguide/), // and is integrated with AWS Certificate Manager (ACM) (http://docs.aws.amazon.com/acm/latest/userguide/). -// In order to sign code, you import a third-party code signing certificate +// In order to sign code, you import a third-party code-signing certificate // using ACM, and use that to sign updates in Amazon FreeRTOS and AWS IoT Device // Management. 
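The ListSigningJobs, ListSigningPlatforms, and ListSigningProfiles documentation updated above describes manual nextToken paging. Below is a minimal sketch, not part of the patch, that lets the SDK's generated ListSigningProfilesPages paginator handle the token bookkeeping instead; it assumes default credentials and region from the environment, and the page size is arbitrary.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/signer"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := signer.New(sess)

	// The paginator keeps calling ListSigningProfiles with the returned
	// nextToken until all pages have been delivered to the callback.
	err := svc.ListSigningProfilesPages(&signer.ListSigningProfilesInput{
		MaxResults: aws.Int64(25),
	}, func(page *signer.ListSigningProfilesOutput, lastPage bool) bool {
		for _, p := range page.Profiles {
			fmt.Printf("%s\t%s\n", aws.StringValue(p.ProfileName), aws.StringValue(p.Status))
		}
		return true // continue until the token is exhausted
	})
	if err != nil {
		log.Fatal(err)
	}
}
```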
// -// With code signing for containers …(TBD) +// With Signer and the Notation CLI from the Notary Project (https://notaryproject.dev/), +// you can sign container images stored in a container registry such as Amazon +// Elastic Container Registry (ECR). The signatures are stored in the registry +// alongside the images, where they are available for verifying image authenticity +// and integrity. // -// For more information about AWS Signer, see the AWS Signer Developer Guide -// (https://docs.aws.amazon.com/signer/latest/developerguide/Welcome.html). +// For more information about Signer, see the AWS Signer Developer Guide (https://docs.aws.amazon.com/signer/latest/developerguide/Welcome.html). // // See https://docs.aws.amazon.com/goto/WebAPI/signer-2017-08-25 for more information on this service. //
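To connect the container-signing description above with the SignPayload documentation earlier in this patch, here is a minimal sketch of a direct SignPayload call. It is not part of the generated code: the payload JSON and profile name are hypothetical, and in the Notation workflow the CLI's AWS Signer plugin normally constructs the Notary payload and makes this call on your behalf.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/signer"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := signer.New(sess)

	// Placeholder payload and profile name; PayloadFormat must be the
	// Notary payload media type noted in the SignPayloadInput docs above.
	out, err := svc.SignPayload(&signer.SignPayloadInput{
		Payload:       []byte(`{"targetArtifact":{"mediaType":"application/vnd.oci.image.manifest.v1+json"}}`),
		PayloadFormat: aws.String("application/vnd.cncf.notary.payload.v1+json"),
		ProfileName:   aws.String("my-notation-profile"),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("job %s produced a %d-byte signature\n",
		aws.StringValue(out.JobId), len(out.Signature))
}
```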