cdktf: update documentation #40719

Merged 1 commit on Dec 30, 2024
@@ -0,0 +1,57 @@
---
subcategory: "EventBridge"
layout: "aws"
page_title: "AWS: aws_cloudwatch_event_buses"
description: |-
Terraform data source for managing AWS EventBridge (CloudWatch) Event Buses.
---


<!-- Please do not edit this file, it is generated. -->
# Data Source: aws_cloudwatch_event_buses

Terraform data source for managing AWS EventBridge Event Buses.

## Example Usage

### Basic Usage

```python
# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
from constructs import Construct
from cdktf import TerraformStack
#
# Provider bindings are generated by running `cdktf get`.
# See https://cdk.tf/provider-generation for more details.
#
from imports.aws.data_aws_cloudwatch_event_buses import DataAwsCloudwatchEventBuses
class MyConvertedCode(TerraformStack):
def __init__(self, scope, name):
super().__init__(scope, name)
DataAwsCloudwatchEventBuses(self, "example",
name_prefix="test"
)
```

## Argument Reference

The following arguments are optional:

* `name_prefix` - (Optional) Specifying this limits the results to only those event buses with names that start with the specified prefix.

## Attribute Reference

This data source exports the following attributes in addition to the arguments above:

* `event_buses` - List of event buses.

### `event_buses` Attribute Reference

* `arn` - The ARN of the event bus.
* `creation_time` - The time the event bus was created.
* `description` - The event bus description.
* `last_modified_time` - The time the event bus was last modified.
* `name` - The name of the event bus.
* `policy` - The permissions policy of the event bus, describing which other AWS accounts can write events to this event bus.
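
The nested attributes above can be wired into stack outputs. A minimal sketch, assuming the same generated provider bindings as the Basic Usage example (the `get(0)` element accessor follows cdktf's generated complex-list convention and may differ between provider versions):

```python
# Sketch: export the ARN of the first matching event bus.
# Assumes provider bindings generated by `cdktf get`.
from constructs import Construct
from cdktf import TerraformStack, TerraformOutput
from imports.aws.data_aws_cloudwatch_event_buses import DataAwsCloudwatchEventBuses

class EventBusArnStack(TerraformStack):
    def __init__(self, scope, name):
        super().__init__(scope, name)
        buses = DataAwsCloudwatchEventBuses(self, "example",
            name_prefix="test"
        )
        # Complex-list elements are addressed through the generated
        # get() accessor; each element exposes arn, name, policy, etc.
        TerraformOutput(self, "first_bus_arn",
            value=buses.event_buses.get(0).arn
        )
```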

<!-- cache-key: cdktf-0.20.8 input-edc68a47cdadc6ea17720030205db67bdcde4de355a2ff487a2754598fadec7e -->
40 changes: 40 additions & 0 deletions website/docs/cdktf/python/d/ecs_clusters.html.markdown
@@ -0,0 +1,40 @@
---
subcategory: "ECS (Elastic Container)"
layout: "aws"
page_title: "AWS: aws_ecs_clusters"
description: |-
Terraform data source for managing AWS ECS (Elastic Container) Clusters.
---


<!-- Please do not edit this file, it is generated. -->
# Data Source: aws_ecs_clusters

Terraform data source for managing AWS ECS (Elastic Container) Clusters.

## Example Usage

### Basic Usage

```python
# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
from constructs import Construct
from cdktf import TerraformStack
#
# Provider bindings are generated by running `cdktf get`.
# See https://cdk.tf/provider-generation for more details.
#
from imports.aws.data_aws_ecs_clusters import DataAwsEcsClusters
class MyConvertedCode(TerraformStack):
def __init__(self, scope, name):
super().__init__(scope, name)
DataAwsEcsClusters(self, "example")
```

## Attribute Reference

This data source exports the following attributes:

* `cluster_arns` - List of ECS cluster ARNs associated with the account.
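
A usage sketch for this attribute, assuming the usual cdktf-generated binding layout: `cluster_arns` is a plain list of strings, so it can be exported directly as a stack output.

```python
# Sketch: surface the account's ECS cluster ARNs as a stack output.
# Assumes provider bindings generated by `cdktf get`.
from constructs import Construct
from cdktf import TerraformStack, TerraformOutput
from imports.aws.data_aws_ecs_clusters import DataAwsEcsClusters

class EcsClusterArnsStack(TerraformStack):
    def __init__(self, scope, name):
        super().__init__(scope, name)
        clusters = DataAwsEcsClusters(self, "example")
        # No arguments are required; the data source lists every
        # cluster in the configured account and region.
        TerraformOutput(self, "cluster_arns",
            value=clusters.cluster_arns
        )
```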

<!-- cache-key: cdktf-0.20.8 input-44fb89849dabf8a4ed89b44a9b93a58ff5ad76c5784437739a64edfcca025db2 -->
3 changes: 2 additions & 1 deletion website/docs/cdktf/python/d/rds_certificate.html.markdown
@@ -36,6 +36,7 @@ class MyConvertedCode(TerraformStack):
This data source supports the following arguments:

* `id` - (Optional) Certificate identifier. For example, `rds-ca-2019`.
* `default_for_new_launches` - (Optional) When enabled, returns the default certificate for new RDS instances.
* `latest_valid_till` - (Optional) When enabled, returns the certificate with the latest `ValidTill`.
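
A sketch of how the selection flags above might be used (class and module names assume the usual cdktf-generated binding layout):

```python
# Sketch: select the certificate with the latest ValidTill date
# rather than hard-coding an identifier like rds-ca-2019.
# Assumes provider bindings generated by `cdktf get`.
from constructs import Construct
from cdktf import TerraformStack, TerraformOutput
from imports.aws.data_aws_rds_certificate import DataAwsRdsCertificate

class RdsCertStack(TerraformStack):
    def __init__(self, scope, name):
        super().__init__(scope, name)
        cert = DataAwsRdsCertificate(self, "latest",
            latest_valid_till=True
        )
        TerraformOutput(self, "certificate_id", value=cert.id)
```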

## Attribute Reference
@@ -50,4 +51,4 @@ This data source exports the following attributes in addition to the arguments above:
* `valid_from` - [RFC3339 format](https://tools.ietf.org/html/rfc3339#section-5.8) of certificate starting validity date.
* `valid_till` - [RFC3339 format](https://tools.ietf.org/html/rfc3339#section-5.8) of certificate ending validity date.

<!-- cache-key: cdktf-0.20.8 input-11fac79b0d201c3f0e3fe5f7ce9e1273fd98f100c84dc2f6a4a3f5286bb6b1dd -->
<!-- cache-key: cdktf-0.20.8 input-054e6bdf005c0a6d813e59f8b001c9c7d1deea619f51dac00ec7da994696b322 -->
@@ -92,8 +92,11 @@ This data source exports the following attributes in addition to the arguments above:
* `supported_feature_names` - Set of features supported by the engine version.
* `supported_modes` - Set of supported engine version modes.
* `supported_timezones` - Set of the time zones supported by the engine version.
* `supports_certificate_rotation_without_restart` - Whether the certificates can be rotated without restarting the Aurora instance.
* `supports_global_databases` - Whether you can use Aurora global databases with the engine version.
* `supports_integrations` - Whether the engine version supports integrations with other AWS services.
* `supports_log_exports_to_cloudwatch` - Whether the engine version supports exporting the log types specified by `exportable_log_types` to CloudWatch Logs.
* `supports_limitless_database` - Whether the engine version supports Aurora Limitless Database.
* `supports_local_write_forwarding` - Whether the engine version supports local write forwarding.
* `supports_parallel_query` - Whether you can use Aurora parallel query with the engine version.
* `supports_read_replica` - Whether the engine version supports read replicas.
@@ -103,4 +106,4 @@
* `version_actual` - Complete engine version.
* `version_description` - Description of the engine version.

<!-- cache-key: cdktf-0.20.8 input-c79463a69506695ed29ad8f547a90f667cebfb0ef7c37e26376a528db05d0b20 -->
<!-- cache-key: cdktf-0.20.8 input-c1c417ee20b29b9a30a8535daa2a033b1c6d977270f8d5c04d0dce74533ece46 -->
4 changes: 2 additions & 2 deletions website/docs/cdktf/python/index.html.markdown
@@ -13,7 +13,7 @@ Use the Amazon Web Services (AWS) provider to interact with the
many resources supported by AWS. You must configure the provider
with the proper credentials before you can use it.

Use the navigation to the left to read about the available resources. There are currently 1465 resources and 592 data sources available in the provider.
Use the navigation to the left to read about the available resources. There are currently 1470 resources and 593 data sources available in the provider.

To learn the basics of Terraform using this provider, follow the
hands-on [get started tutorials](https://learn.hashicorp.com/tutorials/terraform/infrastructure-as-code?in=terraform/aws-get-started&utm_source=WEBSITE&utm_medium=WEB_IO&utm_offer=ARTICLE_PAGE&utm_content=DOCS). Interact with AWS services,
@@ -898,4 +898,4 @@ Approaches differ per authentication providers:
There used to be no better way to get account ID out of the API
when using the federated account until `sts:GetCallerIdentity` was introduced.

<!-- cache-key: cdktf-0.20.8 input-3e6d3debfd91945218eacf4fe603306d6ec9df34455db0941d4d30a6f7293893 -->
<!-- cache-key: cdktf-0.20.8 input-d366916a95f73e3e23a2a9a808c61afc1b1f918a77a6d53c8d81bec3ef21abc5 -->
4 changes: 2 additions & 2 deletions website/docs/cdktf/python/r/batch_job_queue.html.markdown
@@ -110,7 +110,7 @@ This resource supports the following arguments:
### job_state_time_limit_action

* `action` - (Required) The action to take when a job is at the head of the job queue in the specified state for the specified period of time. Valid values include `"CANCEL"`.
* `job_state_time_limit_action.#.max_time_seconds` - The approximate amount of time, in seconds, that must pass with the job in the specified state before the action is taken. Valid values include integers between `600` & `86400`
* `max_time_seconds` - The approximate amount of time, in seconds, that must pass with the job in the specified state before the action is taken. Valid values are integers between `600` and `86400`.
* `reason` - (Required) The reason to log for the action being taken.
* `state` - (Required) The state of the job needed to trigger the action. Valid values include `"RUNNABLE"`.
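
Put together, a `job_state_time_limit_action` block might look like the following fragment from inside a stack's `__init__` (the compute environment reference and the struct-as-dict shape are illustrative assumptions):

```python
from imports.aws.batch_job_queue import BatchJobQueue

# Cancel any job that sits at the head of the queue in RUNNABLE
# for 10 minutes (600 seconds, the minimum allowed value).
BatchJobQueue(self, "example",
    name="example-queue",
    state="ENABLED",
    priority=1,
    compute_environments=[example_env.arn],  # assumed existing aws_batch_compute_environment
    job_state_time_limit_action=[{
        "action": "CANCEL",
        "max_time_seconds": 600,
        "reason": "Queue head stuck in RUNNABLE",
        "state": "RUNNABLE"
    }]
)
```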

@@ -154,4 +154,4 @@ Using `terraform import`, import Batch Job Queue using the `arn`. For example:
% terraform import aws_batch_job_queue.test_queue arn:aws:batch:us-east-1:123456789012:job-queue/sample
```

<!-- cache-key: cdktf-0.20.8 input-d2d8afc607d24c6304997117ac827b221e6d3aa3e410fd4f8d5afa5e4d1a8f16 -->
<!-- cache-key: cdktf-0.20.8 input-dafa689faaa2c506a7d852f2802f652ecad7631ce51a9acd458c39d775fe7e0e -->
@@ -251,7 +251,7 @@
* `enabled` (Required) - Whether the distribution is enabled to accept end user requests for content.
* `is_ipv6_enabled` (Optional) - Whether the IPv6 is enabled for the distribution.
* `http_version` (Optional) - Maximum HTTP version to support on the distribution. Allowed values are `http1.1`, `http2`, `http2and3` and `http3`. The default is `http2`.
* `logging_config` (Optional) - The [logging configuration](#logging-config-arguments) that controls how logs are written to your distribution (maximum one).
* `logging_config` (Optional) - The [logging configuration](#logging-config-arguments) that controls how logs are written to your distribution (maximum one). AWS provides two versions of CloudFront access logs, legacy and v2; this argument configures legacy standard logs.
* `ordered_cache_behavior` (Optional) - Ordered list of [cache behaviors](#cache-behavior-arguments) resource for this distribution. List from top to bottom in order of precedence. The topmost cache behavior will have precedence 0.
* `origin` (Required) - One or more [origins](#origin-arguments) for this distribution (multiples allowed).
* `origin_group` (Optional) - One or more [origin_group](#origin-group-arguments) for this distribution (multiples allowed).
@@ -408,7 +408,7 @@ argument should not be specified.

#### Logging Config Arguments

* `bucket` (Required) - Amazon S3 bucket to store the access logs in, for example, `myawslogbucket.s3.amazonaws.com`.
* `bucket` (Required) - Amazon S3 bucket to store the access logs in, for example, `myawslogbucket.s3.amazonaws.com`. The bucket must have an ACL that grants `FULL_CONTROL` permission to the `awslogsdelivery` account (canonical ID `c4c1ede66af53448b93c283ce9448c4ba468c9432aa01d700d3878632f77d2d0`); otherwise log delivery will fail.
* `include_cookies` (Optional) - Whether to include cookies in access logs (default: `false`).
* `prefix` (Optional) - Prefix to the access log filenames for this distribution, for example, `myprefix/`.
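
Inside a distribution definition, the block above translates to a fragment like this sketch (the other required distribution arguments are omitted, so this is not a complete resource):

```python
from imports.aws.cloudfront_distribution import CloudfrontDistribution

CloudfrontDistribution(self, "example",
    enabled=True,
    # ...origin, default_cache_behavior, restrictions, and
    # viewer_certificate arguments omitted for brevity...
    logging_config={
        "bucket": "myawslogbucket.s3.amazonaws.com",
        "include_cookies": False,
        "prefix": "myprefix/"
    }
)
```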

@@ -539,4 +539,4 @@ Using `terraform import`, import CloudFront Distributions using the `id`. For example:
% terraform import aws_cloudfront_distribution.distribution E74FTE3EXAMPLE
```

<!-- cache-key: cdktf-0.20.8 input-1c68fd08598394f5af5331fd4800f38c4625aa36f9d44a979afd3169ac54f312 -->
<!-- cache-key: cdktf-0.20.8 input-016f5dbc9c21b429c56166e27cf76bcc384dee8b54d767c742e1c29c528c8899 -->
@@ -96,7 +96,7 @@ from imports.aws.cloudfront_vpc_origin import CloudfrontVpcOrigin
class MyConvertedCode(TerraformStack):
def __init__(self, scope, name):
super().__init__(scope, name)
CloudfrontVpcOrigin.generate_config_for_import(self, "origin", "${vo_JQEa410sssUFoY6wMkx69j}")
CloudfrontVpcOrigin.generate_config_for_import(self, "origin", "vo_JQEa410sssUFoY6wMkx69j")
```

Using `terraform import`, import Cloudfront VPC origins using the `id`. For example:
@@ -105,4 +105,4 @@
% terraform import aws_cloudfront_vpc_origin vo_JQEa410sssUFoY6wMkx69j
```

<!-- cache-key: cdktf-0.20.8 input-bcdef97f0333b63cec31ab48f7045d350e36fcd0107ae9fb57377649137876dc -->
<!-- cache-key: cdktf-0.20.8 input-187cf38c97ac211aea4b679f94d3c76f36282959a1becf8fe51ae560fd45e2b4 -->
@@ -0,0 +1,81 @@
---
subcategory: "CloudWatch Logs"
layout: "aws"
page_title: "AWS: aws_cloudwatch_log_index_policy"
description: |-
Terraform resource for managing an AWS CloudWatch Logs Index Policy.
---


<!-- Please do not edit this file, it is generated. -->
# Resource: aws_cloudwatch_log_index_policy

Terraform resource for managing an AWS CloudWatch Logs Index Policy.

## Example Usage

### Basic Usage

```python
# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
from constructs import Construct
from cdktf import Fn, TerraformStack
#
# Provider bindings are generated by running `cdktf get`.
# See https://cdk.tf/provider-generation for more details.
#
from imports.aws.cloudwatch_log_index_policy import CloudwatchLogIndexPolicy
from imports.aws.cloudwatch_log_group import CloudwatchLogGroup
class MyConvertedCode(TerraformStack):
def __init__(self, scope, name):
super().__init__(scope, name)
example = CloudwatchLogGroup(self, "example",
name="example"
)
aws_cloudwatch_log_index_policy_example = CloudwatchLogIndexPolicy(self, "example_1",
log_group_name=example.name,
policy_document=Fn.jsonencode({
"Fields": ["eventName"]
})
)
# This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.
aws_cloudwatch_log_index_policy_example.override_logical_id("example")
```

## Argument Reference

The following arguments are required:

* `log_group_name` - (Required) Log group name to set the policy for.
* `policy_document` - (Required) JSON policy document. This is a JSON formatted string.

## Attribute Reference

This resource exports no additional attributes.

## Import

In Terraform v1.5.0 and later, use an [`import` block](https://developer.hashicorp.com/terraform/language/import) to import CloudWatch Logs Index Policy using the `log_group_name`. For example:

```python
# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
from constructs import Construct
from cdktf import TerraformStack
#
# Provider bindings are generated by running `cdktf get`.
# See https://cdk.tf/provider-generation for more details.
#
from imports.aws.cloudwatch_log_index_policy import CloudwatchLogIndexPolicy
class MyConvertedCode(TerraformStack):
def __init__(self, scope, name):
super().__init__(scope, name)
CloudwatchLogIndexPolicy.generate_config_for_import(self, "example", "/aws/log/group/name")
```

Using `terraform import`, import CloudWatch Logs Index Policy using the `log_group_name`. For example:

```console
% terraform import aws_cloudwatch_log_index_policy.example /aws/log/group/name
```

<!-- cache-key: cdktf-0.20.8 input-609b37f1fbe72fb93027ce23661c2c59b076708e0b3d17ee90e5256144a7c9c1 -->
7 changes: 6 additions & 1 deletion website/docs/cdktf/python/r/eks_node_group.html.markdown
@@ -207,6 +207,7 @@ The following arguments are optional:
* `launch_template` - (Optional) Configuration block with Launch Template settings. See [`launch_template`](#launch_template-configuration-block) below for details. Conflicts with `remote_access`.
* `node_group_name` – (Optional) Name of the EKS Node Group. If omitted, Terraform will assign a random, unique name. Conflicts with `node_group_name_prefix`. The node group name can't be longer than 63 characters. It must start with a letter or digit, but can also include hyphens and underscores for the remaining characters.
* `node_group_name_prefix` – (Optional) Creates a unique name beginning with the specified prefix. Conflicts with `node_group_name`.
* `node_repair_config` - (Optional) The node auto repair configuration for the node group. See [`node_repair_config`](#node_repair_config-configuration-block) below for details.
* `release_version` – (Optional) AMI version of the EKS Node Group. Defaults to latest version for Kubernetes version.
* `remote_access` - (Optional) Configuration block with remote access settings. See [`remote_access`](#remote_access-configuration-block) below for details. Conflicts with `launch_template`.
* `tags` - (Optional) Key-value map of resource tags. If configured with a provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level.
@@ -222,6 +223,10 @@
* `name` - (Optional) Name of the EC2 Launch Template. Conflicts with `id`.
* `version` - (Required) EC2 Launch Template version number. While the API accepts values like `$Default` and `$Latest`, the API will convert the value to the associated version number (e.g., `1`) on read and Terraform will show a difference on next plan. Using the `default_version` or `latest_version` attribute of the `aws_launch_template` resource or data source is recommended for this argument.

### node_repair_config Configuration Block

* `enabled` - (Required) Specifies whether to enable node auto repair for the node group. Node auto repair is disabled by default.
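
As a sketch, enabling auto repair on a node group looks like the fragment below; the cluster, role, and subnet references are placeholders for resources defined elsewhere in the stack:

```python
from imports.aws.eks_node_group import EksNodeGroup

EksNodeGroup(self, "example",
    cluster_name=example_cluster.name,   # placeholder aws_eks_cluster
    node_group_name="example",
    node_role_arn=example_role.arn,      # placeholder aws_iam_role
    subnet_ids=example_subnet_ids,       # placeholder list of subnet IDs
    scaling_config={"desired_size": 1, "max_size": 1, "min_size": 1},
    # Opt in to node auto repair (disabled by default).
    node_repair_config={"enabled": True}
)
```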

### remote_access Configuration Block

* `ec2_ssh_key` - (Optional) EC2 Key Pair name that provides access for remote communication with the worker nodes in the EKS Node Group. If you specify this configuration, but do not specify `source_security_group_ids` when you create an EKS Node Group, either port 3389 for Windows, or port 22 for all other operating systems is opened on the worker nodes to the Internet (0.0.0.0/0). For Windows nodes, this will allow you to use RDP, for all others this allows you to SSH into the worker nodes.
@@ -292,4 +297,4 @@ Using `terraform import`, import EKS Node Groups using the `cluster_name` and `node_group_name`. For example:
% terraform import aws_eks_node_group.my_node_group my_cluster:my_node_group
```

<!-- cache-key: cdktf-0.20.8 input-fe608fb7adc16d4f1e0363865ecd6fc2d5b0eb9372860c4a8aa730be1b4503a7 -->
<!-- cache-key: cdktf-0.20.8 input-bd4da5e734791aeb8e83cd00d25e46f8bdc19df7017cb3a4cc0ce59c8947dc3c -->
2 changes: 1 addition & 1 deletion website/docs/cdktf/python/r/lb_trust_store.html.markdown
@@ -95,4 +95,4 @@ Using `terraform import`, import Target Groups using their ARN. For example:
% terraform import aws_lb_trust_store.example arn:aws:elasticloadbalancing:us-west-2:187416307283:truststore/my-trust-store/20cfe21448b66314
```

<!-- cache-key: cdktf-0.20.8 input-89fe3c36bebef5470b9cbe4480f0a8400fa67f5a22bdf0570245d7e1606e84fa -->
<!-- cache-key: cdktf-0.20.8 input-34bb0a1e9045deea914ea976441edea2890e4b73835aa15a907bb8a250bbb058 -->