# Aiven Operator for Kubernetes

Provision and manage Aiven services from your Kubernetes cluster.

Aiven offers managed services for the best open source data technologies, on a cloud of your choice. We offer multiple cloud options because we believe that everyone should have access to great data platforms wherever they host their applications. Our customers tell us they love it because they know that they aren't locked in to one particular cloud platform for all their data needs.

The contribution guide covers everything you need to know about how you can contribute to Aiven Operator for Kubernetes. The developer guide will help you onboard as a developer.

## Authentication

To get authenticated and authorized, set up the communication between the Aiven Operator for Kubernetes and Aiven by using a token stored in a Kubernetes secret. You can then refer to the secret name on every custom resource in the `authSecretRef` field.

If you don't have an Aiven account yet, sign up here for a free trial. 🦀

### 1. Generate an authentication token

Refer to our documentation article to generate your authentication token.

### 2. Create the Kubernetes Secret

The following command creates a secret named `aiven-token` with a `token` field containing the authentication token:

```shell
kubectl create secret generic aiven-token --from-literal=token="<your-token-here>"
```

When managing your Aiven resources, we will be using the created Secret in the `authSecretRef` field. It will look like the example below:

```yaml
apiVersion: aiven.io/v1alpha1
kind: PostgreSQL
metadata:
  name: pg-sample
spec:
  authSecretRef:
    name: aiven-token
    key: token
  [ ... ]
```

Also, note that within Aiven, all resources are conceptually inside a Project. By default, a random project name is generated when you sign up, but you can also create new projects. The Project name is required in most of the resources. It will look like the example below:

```yaml
apiVersion: aiven.io/v1alpha1
kind: PostgreSQL
metadata:
  name: pg-sample
spec:
  project: <your-project-name-here>
  [ ... ]
```

## Release notes (excerpt)

Important: This release brings breaking changes to the [ ... ].

Note: It is now recommended to disable webhooks for Kubernetes version 1.25 and higher, as native CRD validation rules are used.

features:

* add Redis CRD

improvements:

* watch CRDs to reconcile token secrets

fixes:

* fix RBACs of KafkaACL CRD

improvements:

* update helm installation docs

fixes:

* fix typo in a kafka-connector kuttl test

features:

* initial release

## Troubleshooting

Use the following checks to help you troubleshoot the Aiven Operator for Kubernetes.

### Verifying the operator status

Verify that all the operator Pods are `READY`, and the `STATUS` is `Running`:

```shell
kubectl get pod -n aiven-operator-system
```

The output is similar to the following:

```
NAME                                                 READY   STATUS    RESTARTS   AGE
aiven-operator-controller-manager-576d944499-ggttj   1/1     Running   0          12m
```

Verify that the `cert-manager` Pods are also running:

```shell
kubectl get pod -n cert-manager
```

The output has the status:

```
NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-7dd5854bb4-85cpv              1/1     Running   0          76s
cert-manager-cainjector-64c949654c-n2z8l   1/1     Running   0          77s
cert-manager-webhook-6bdffc7c9d-47w6z      1/1     Running   0          76s
```

### Visualizing the operator logs

Use the following command to visualize all the logs from the operator:

```shell
kubectl logs -n aiven-operator-system -l control-plane=controller-manager
```

### Verifying the operator version

```shell
kubectl get pod -n aiven-operator-system -l control-plane=controller-manager -o jsonpath="{.items[0].spec.containers[0].image}"
```

### Known issues and limitations

We're always working to resolve problems that pop up in Aiven products. If your problem is listed below, we know about it and are working to fix it. If your problem isn't listed below, report it as an issue.

#### cert-manager

The following event appears on the operator Pod:

```
MountVolume.SetUp failed for volume "cert" : secret "webhook-server-cert" not found
```

Impact: You cannot run the operator.

Make sure that cert-manager is up and running:

```shell
kubectl get pod -n cert-manager
```

The output shows the status of each cert-manager Pod:

```
NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-7dd5854bb4-85cpv              1/1     Running   0          76s
cert-manager-cainjector-64c949654c-n2z8l   1/1     Running   0          77s
cert-manager-webhook-6bdffc7c9d-47w6z      1/1     Running   0          76s
```
## API Reference (summary)

Autogenerated from CRD files. The reference covers these kinds: Cassandra, Clickhouse, ClickhouseUser, ConnectionPool, Database, Grafana, Kafka, KafkaACL, KafkaConnect, KafkaConnector, KafkaSchema, KafkaTopic, MySQL, OpenSearch, PostgreSQL, Project, ProjectVPC, Redis, ServiceIntegration, and ServiceUser. Each kind follows the same layout: "X is the Schema for the Xs API", with an `XSpec` object that defines the desired state of X.

Common nested schemas shared across the service kinds:

- `authSecretRef` — authentication reference to Aiven token in a secret (required).
- `connInfoSecretTarget` — information regarding secret creation, including the exposed keys.
- `projectVPCRef` — reference to a ProjectVPC resource to use its ID as ProjectVPCID automatically.
- `serviceIntegrations` — service integrations to specify when creating a service; not applied after initial service creation.
- `userConfig` — service-specific user configuration options, typically including `ip_filter` (allow incoming connections from CIDR address block, e.g. `10.20.0.0/16`), `private_access` (allow access to selected service ports from private networks), `privatelink_access` (allow access to selected service components through Privatelink), and `public_access` (allow access to selected service ports from the public Internet).

Service-specific `userConfig` sections include, for example: `cassandra` configuration values; Grafana auth integrations (Azure AD OAuth, generic OAuth, GitHub, GitLab, Google), date format specifications, external image store settings, and SMTP server settings; Kafka broker configuration values, Kafka authentication methods, Kafka Connect configuration values, Kafka REST configuration, Schema Registry configuration, tiered storage configuration, and a deprecated local cache configuration; Kafka topic configuration and tags; MySQL `mysql.conf` configuration values and migration settings; OpenSearch index patterns, index templates, OpenID Connect configuration, OpenSearch settings, security plugin settings, IP address rate limiting settings, Dashboards settings, and SAML configuration; PostgreSQL `postgresql.conf` configuration values, PGBouncer connection pooling settings, PGLookout settings, pg_qualstats settings, and TimescaleDB extension configuration values; and ServiceIntegration configuration values for Clickhouse Kafka (tables, table columns, Kafka topics), Clickhouse PostgreSQL (databases to expose), Datadog (custom tags, OpenSearch and Redis options), external AWS CloudWatch metrics (metrics to allow or exclude), Kafka Connect, Kafka logs, Kafka MirrorMaker, logs, and metrics (including Telegraf MySQL input plugin options).

## Contributing

The Aiven Operator for Kubernetes project accepts contributions via GitHub pull requests. This document outlines the process to help get your contribution accepted. Please see also the Aiven Operator for Kubernetes Developer Guide.

### Support channels

This project offers support through GitHub issues, which can be filed here.
Moreover, GitHub issues are used as the primary method for tracking anything to do with the Aiven Operator for Kubernetes project.

### Code of Conduct

In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to make participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation.

Examples of behavior that contributes to creating a positive environment include: [ ... ]

Examples of unacceptable behavior by participants include: [ ... ]

### Commit messages

This project adheres to the Conventional Commits specification. Please make sure that your commit messages follow that specification.

### Maintainer responsibilities

Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.

Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.

This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.

Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at opensource@aiven.io. All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.

Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.

## Developer Guide

### Getting started

You must have a working Go environment. Then clone the repository: [ ... ]

Please see this page for more information. The project uses the [ ... ].

Building the operator binary: [ ... ]

### Testing

As of now, we only support integration tests that interact directly with Aiven. To run the tests, you'll need an Aiven account and an Aiven authentication token.

Please install the following first: [ ... ]

The following commands must be executed with these environment variables set (keep them secret!): [ ... ]

Set up everything: [ ... ]

Note: Additionally, webhooks can be disabled if there are any problems with them.

Run the e2e tests (this creates real services in [ ... ]). When you're done, just drop the cluster: [ ... ]

### Documentation

The documentation is written in markdown and generated by mkdocs and mkdocs-material. To run the documentation live preview: [ ... ] and open the [ ... ] page in your browser.

The documentation API Reference section is generated automatically from the source code during the documentation deployment. To generate it locally, run the following command: [ ... ]
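The exact preview commands are elided in this snapshot. A minimal sketch, assuming a standard mkdocs/mkdocs-material setup (the package names and default port are assumptions, not taken from this document):

```shell
# Install mkdocs and the material theme (assumed dependencies)
pip install mkdocs mkdocs-material

# Serve a live preview from the repository root;
# mkdocs rebuilds the site on every file change
mkdocs serve

# Then open http://127.0.0.1:8000 in your browser (mkdocs' default address)
```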
## Generating service user configs

Aiven Kubernetes Operator generates service configs code (also known as user configs) and documentation from the public service types schema.

When a new schema is issued on the API, a cron job fetches it, parses it, patches it, and saves it in a shared library — go-api-schemas. When the library is updated, the GitHub dependabot creates PRs to the dependent repositories, such as Aiven Kubernetes Operator and Aiven Terraform Provider. Then the [ ... ] command runs several generators in a certain sequence: first, the user config generator is called; then the controller-gen CLI; then the API reference docs generator and the charts generator. Here is how it works in detail: [ ... ]

By default, the charts generator keeps the current Helm chart version, because it doesn't understand semver. You need to bump it manually. To do so, run the following command with the version of your choice: [ ... ]

## Installing with Helm

The Aiven Operator for Kubernetes can be installed via Helm. Before you start, make sure you have the prerequisites. First add the Aiven Helm repository, then verify the installation.
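The repository and install commands are elided above. A hedged sketch of a typical installation — the repository URL and chart names (`aiven/aiven-operator-crds`, `aiven/aiven-operator`) are assumptions and may differ from the project's current Helm docs:

```shell
# Add the Aiven Helm repository (URL assumed)
helm repo add aiven https://aiven.github.io/aiven-charts
helm repo update

# Install the CRDs and the operator (chart names assumed)
helm install aiven-operator-crds aiven/aiven-operator-crds
helm install aiven-operator aiven/aiven-operator

# Verify the installation
helm ls
```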
## Service `userConfig` field excerpts

### Cassandra `spec.userConfig`

A usage sketch follows this list.

- `private_access` (object). Allow access to selected service ports from private networks. See below for nested schema.
- `project_to_fork_from` (string, Immutable, MaxLength: 63). Name of another project to fork a service from. This has effect only when a new service is being created.
- `public_access` (object). Allow access to selected service ports from the public Internet. See below for nested schema.
- `service_log` (boolean). Store logs for the service so that they are available in the HTTP API and console.
- `service_to_fork_from` (string, Immutable, MaxLength: 64). Name of another service to fork from. This has effect only when a new service is being created.
- `service_to_join_with` (string, MaxLength: 64). When bootstrapping, instead of creating a new Cassandra cluster try to join an existing one from another service. Can only be set on service creation.
- `static_ips` (boolean). Use static public IP addresses.
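A hedged YAML sketch combining some of the fields above — joining an existing cluster at creation time with `service_to_join_with`. The names, project, and plan are placeholders, not values from this document:

```yaml
apiVersion: aiven.io/v1alpha1
kind: Cassandra
metadata:
  name: cassandra-joiner          # placeholder name
spec:
  authSecretRef:
    name: aiven-token
    key: token
  project: my-project             # placeholder project
  cloudName: google-europe-west1
  plan: startup-4                 # placeholder plan
  userConfig:
    # Join an existing Cassandra cluster from another service instead of
    # bootstrapping a new one; can only be set on service creation.
    service_to_join_with: cassandra-existing
    static_ips: true              # use static public IP addresses
```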
### Clickhouse `spec.userConfig`

A usage sketch follows this list.

- `privatelink_access` (object). Allow access to selected service components through Privatelink. See below for nested schema.
- `project_to_fork_from` (string, Immutable, MaxLength: 63). Name of another project to fork a service from. This has effect only when a new service is being created.
- `public_access` (object). Allow access to selected service ports from the public Internet. See below for nested schema.
- `service_log` (boolean). Store logs for the service so that they are available in the HTTP API and console.
- `service_to_fork_from` (string, Immutable, MaxLength: 64). Name of another service to fork from. This has effect only when a new service is being created.
- `static_ips` (boolean). Use static public IP addresses.
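A hedged sketch of the `privatelink_access` object on a Clickhouse service, using the nested keys listed in the Clickhouse reference further below; the project and plan are placeholders:

```yaml
apiVersion: aiven.io/v1alpha1
kind: Clickhouse
metadata:
  name: my-clickhouse
spec:
  authSecretRef:
    name: aiven-token
    key: token
  project: my-project             # placeholder project
  cloudName: google-europe-west1
  plan: startup-16                # placeholder plan
  userConfig:
    privatelink_access:
      clickhouse: true            # enable clickhouse over Privatelink
      clickhouse_https: true      # enable clickhouse_https over Privatelink
      prometheus: false
```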
### Grafana `spec.userConfig`

- `project_to_fork_from` (string, Immutable, MaxLength: 63). Name of another project to fork a service from. This has effect only when a new service is being created.
- `public_access` (object). Allow access to selected service ports from the public Internet. See below for nested schema.
- `recovery_basebackup_name` (string, Pattern: `^[a-zA-Z0-9-_:.]+$`, MaxLength: 128). Name of the basebackup to restore in forked service.
- `service_log` (boolean). Store logs for the service so that they are available in the HTTP API and console.
- `service_to_fork_from` (string, Immutable, MaxLength: 64). Name of another service to fork from. This has effect only when a new service is being created.
- `smtp_server` (object). SMTP server settings. See below for nested schema.
- `static_ips` (boolean). Use static public IP addresses.
### Kafka `spec.userConfig`

- `public_access` (object). Allow access to selected service ports from the public Internet. See below for nested schema.
- `schema_registry` (boolean). Enable Schema-Registry service.
- `schema_registry_config` (object). Schema Registry configuration. See below for nested schema.
- `service_log` (boolean). Store logs for the service so that they are available in the HTTP API and console.
- `static_ips` (boolean). Use static public IP addresses.
- `tiered_storage` (object). Tiered storage configuration. See below for nested schema.

#### `kafka_rest_config`

A usage sketch follows this list.
- `consumer_enable_auto_commit` (boolean). If true, the consumer's offset will be periodically committed to Kafka in the background.
- `consumer_request_max_bytes` (integer, Minimum: 0, Maximum: 671088640). Maximum number of bytes in unencoded message keys and values by a single request.
- `consumer_request_timeout_ms` (integer, Enum: `1000`, `15000`, `30000`, Minimum: 1000, Maximum: 30000). The maximum total time to wait for messages for a request if the maximum number of messages has not yet been reached.
- `name_strategy_validation` (boolean). If true, validate that the given schema is registered under the expected subject name by the used name strategy when producing messages.
- `producer_acks` (string, Enum: `all`, `-1`, `0`, `1`). The number of acknowledgments the producer requires the leader to have received before considering a request complete. If set to `all` or `-1`, the leader will wait for the full set of in-sync replicas to acknowledge the record.
- `producer_compression_type` (string, Enum: `gzip`, `snappy`, `lz4`, `zstd`, `none`). Specify the default compression type for producers. This configuration accepts the standard compression codecs (`gzip`, `snappy`, `lz4`, `zstd`). It additionally accepts `none`, which is the default and equivalent to no compression.
- `producer_linger_ms` (integer, Minimum: 0, Maximum: 5000). Wait for up to the given delay to allow batching records together.
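A hedged sketch showing how these REST proxy options might be set on a Kafka resource; the project, plan, and chosen values are illustrative, not taken from this document:

```yaml
apiVersion: aiven.io/v1alpha1
kind: Kafka
metadata:
  name: my-kafka
spec:
  authSecretRef:
    name: aiven-token
    key: token
  project: my-project              # placeholder project
  cloudName: google-europe-west1
  plan: business-4                 # placeholder plan
  userConfig:
    kafka_rest_config:
      producer_acks: all               # wait for the full set of in-sync replicas
      producer_compression_type: zstd  # default compression for REST producers
      producer_linger_ms: 100          # batch records for up to 100 ms
      consumer_enable_auto_commit: true
```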
### KafkaConnect `spec.userConfig`

- `private_access` (object). Allow access to selected service ports from private networks. See below for nested schema.
- `privatelink_access` (object). Allow access to selected service components through Privatelink. See below for nested schema.
- `public_access` (object). Allow access to selected service ports from the public Internet. See below for nested schema.
- `service_log` (boolean). Store logs for the service so that they are available in the HTTP API and console.
- `static_ips` (boolean). Use static public IP addresses.
### MySQL `spec.userConfig`

A usage sketch follows this list.
- `project_to_fork_from` (string, Immutable, MaxLength: 63). Name of another project to fork a service from. This has effect only when a new service is being created.
- `public_access` (object). Allow access to selected service ports from the public Internet. See below for nested schema.
- `recovery_target_time` (string, Immutable, MaxLength: 32). Recovery target time when forking a service. This has effect only when a new service is being created.
- `service_log` (boolean). Store logs for the service so that they are available in the HTTP API and console.
- `service_to_fork_from` (string, Immutable, MaxLength: 64). Name of another service to fork from. This has effect only when a new service is being created.
- `static_ips` (boolean). Use static public IP addresses.
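A hedged sketch of a point-in-time fork using the immutable fields above. The service names are placeholders and the timestamp format is an assumption:

```yaml
apiVersion: aiven.io/v1alpha1
kind: MySQL
metadata:
  name: mysql-fork
spec:
  authSecretRef:
    name: aiven-token
    key: token
  project: my-project                          # placeholder project
  cloudName: google-europe-west1
  plan: business-4                             # placeholder plan
  userConfig:
    # Both fields are immutable and only take effect on service creation
    service_to_fork_from: mysql-prod           # placeholder source service
    recovery_target_time: "2023-12-01 10:00:00"  # assumed timestamp format
```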
### OpenSearch `spec.userConfig`

- `public_access` (object). Allow access to selected service ports from the public Internet. See below for nested schema.
- `recovery_basebackup_name` (string, Pattern: `^[a-zA-Z0-9-_:.]+$`, MaxLength: 128). Name of the basebackup to restore in forked service.
- `saml` (object). OpenSearch SAML configuration. See below for nested schema.
- `service_log` (boolean). Store logs for the service so that they are available in the HTTP API and console.
- `service_to_fork_from` (string, Immutable, MaxLength: 64). Name of another service to fork from. This has effect only when a new service is being created.
- `static_ips` (boolean). Use static public IP addresses.
### PostgreSQL `spec.userConfig`

A usage sketch follows these subsections.

- `ip_filter` (array of objects, MaxItems: 1024). Allow incoming connections from CIDR address block, e.g. `10.20.0.0/16`. See below for nested schema.
- `migration` (object). Migrate data from existing server. See below for nested schema.
- `pg` (object). postgresql.conf configuration values. See below for nested schema.
- `pg_qualstats` (object). System-wide settings for the pg_qualstats extension. See below for nested schema.
- `pg_read_replica` (boolean). Should the service which is being forked be a read replica (deprecated, use read_replica service integration instead).
- `pg_service_to_fork_from` (string, Immutable, MaxLength: 64). Name of the PG Service from which to fork (deprecated, use service_to_fork_from). This has effect only when a new service is being created.
- `pg_stat_monitor_enable` (boolean). Enable the pg_stat_monitor extension. Enabling this extension will cause the cluster to be restarted. When this extension is enabled, pg_stat_statements results for utility commands are unreliable.
- `pg_version` (string, Enum: `11`, `12`, `13`, `14`, `15`). PostgreSQL major version.
- `pgbouncer` (object). PGBouncer connection pooling settings. See below for nested schema.
- `pglookout` (object). System-wide settings for pglookout. See below for nested schema.
- `private_access` (object). Allow access to selected service ports from private networks. See below for nested schema.
- `privatelink_access` (object). Allow access to selected service components through Privatelink. See below for nested schema.
- `project_to_fork_from` (string, Immutable, MaxLength: 63). Name of another project to fork a service from. This has effect only when a new service is being created.
- `public_access` (object). Allow access to selected service ports from the public Internet. See below for nested schema.
- `recovery_target_time` (string, Immutable, MaxLength: 32). Recovery target time when forking a service. This has effect only when a new service is being created.
- `service_log` (boolean). Store logs for the service so that they are available in the HTTP API and console.
- `service_to_fork_from` (string, Immutable, MaxLength: 64). Name of another service to fork from. This has effect only when a new service is being created.
- `shared_buffers_percentage` (number, Minimum: 20, Maximum: 60). Percentage of total RAM that the database server uses for shared memory buffers. Valid range is 20-60 (float), which corresponds to 20% - 60%. This setting adjusts the shared_buffers configuration value.
- `static_ips` (boolean). Use static public IP addresses.
- `synchronous_replication` (string, Enum: `quorum`, `off`). Synchronous replication type. Note that the service plan also needs to support synchronous replication.
- `timescaledb` (object). System-wide settings for the timescaledb extension. See below for nested schema.
- `variant` (string, Enum: `aiven`, `timescale`). Variant of the PostgreSQL service, may affect the features that are exposed by default.
- `work_mem` (integer, Minimum: 1, Maximum: 1024). Sets the maximum amount of memory to be used by a query operation (such as a sort or hash table) before writing to temporary disk files, in MB. Default is 1MB + 0.075% of total RAM (up to 32MB).

#### `pg`

Appears on `spec.userConfig`.
- `wal_sender_timeout` (integer). Terminate replication connections that are inactive for longer than this amount of time, in milliseconds. Setting this value to zero disables the timeout.
- `wal_writer_delay` (integer, Minimum: 10, Maximum: 200). WAL flush interval in milliseconds. Note that setting this value to lower than the default 200ms may negatively impact performance.

#### `pg_qualstats`

Appears on `spec.userConfig`.

- `enabled` (boolean). Enable / Disable pg_qualstats.
- `min_err_estimate_num` (integer, Minimum: 0). Error estimation num threshold to save quals.
- `min_err_estimate_ratio` (integer, Minimum: 0). Error estimation ratio threshold to save quals.
- `track_constants` (boolean). Enable / Disable pg_qualstats constants tracking.
- `track_pg_catalog` (boolean). Track quals on system catalogs too.

#### `pgbouncer`

Appears on `spec.userConfig`.

#### `pglookout`

Appears on `spec.userConfig`.

- `max_failover_replication_time_lag` (integer, Minimum: 10). Number of seconds of master unavailability before triggering database failover to standby.

#### `timescaledb`

Appears on `spec.userConfig`.
- `max_background_workers` (integer, Minimum: 1, Maximum: 4096). The number of background workers for timescaledb operations. You should configure this setting to the sum of your number of databases and the total number of concurrent background workers you want running at any given point in time.
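A hedged sketch combining several of the PostgreSQL `userConfig` fields documented above; the project, plan, and chosen values are illustrative:

```yaml
apiVersion: aiven.io/v1alpha1
kind: PostgreSQL
metadata:
  name: pg-sample
spec:
  authSecretRef:
    name: aiven-token
    key: token
  project: my-project               # placeholder project
  cloudName: google-europe-west1
  plan: business-4                  # placeholder plan
  userConfig:
    pg_version: "15"
    synchronous_replication: quorum # the service plan must support it
    shared_buffers_percentage: 25   # valid range is 20-60 (% of total RAM)
    pg:
      wal_writer_delay: 200         # ms; lower than 200 may hurt performance
    pg_qualstats:
      enabled: true
      track_constants: true
    timescaledb:
      max_background_workers: 8     # databases + concurrent background workers
```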
### Redis `spec.userConfig`

A usage sketch follows this list.

- `redis_pubsub_client_output_buffer_limit` (integer, Minimum: 32, Maximum: 512). Set output buffer limit for pub / sub clients in MB. The value is the hard limit, the soft limit is 1/4 of the hard limit. When setting the limit, be mindful of the available memory in the selected service plan.
- `redis_ssl` (boolean). Require SSL to access Redis.
- `redis_timeout` (integer, Minimum: 0, Maximum: 31536000). Redis idle connection timeout in seconds.
- `service_log` (boolean). Store logs for the service so that they are available in the HTTP API and console.
- `service_to_fork_from` (string, Immutable, MaxLength: 64). Name of another service to fork from. This has effect only when a new service is being created.
- `static_ips` (boolean). Use static public IP addresses.
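A hedged sketch using the Redis fields above; the project, plan, and values are illustrative:

```yaml
apiVersion: aiven.io/v1alpha1
kind: Redis
metadata:
  name: my-redis
spec:
  authSecretRef:
    name: aiven-token
    key: token
  project: my-project             # placeholder project
  cloudName: google-europe-west1
  plan: business-4                # placeholder plan
  userConfig:
    redis_ssl: true               # require SSL for client connections
    redis_timeout: 300            # close idle connections after 5 minutes
```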
## Changelog

### v0.16.0 - 2023-12-07

- `Preconditions`, `CreateOrUpdate`, `Delete`. Thanks to @atarax
- Add `Kafka` field `userConfig.kafka.transaction_partition_verification_enable`, type `boolean`: Enable verification that checks that the partition has been added to the transaction before writing transactional records to the partition
- Add `Cassandra` field `userConfig.service_log`, type `boolean`: Store logs for the service so that they are available in the HTTP API and console
- Add `Clickhouse` field `userConfig.service_log`, type `boolean`: Store logs for the service so that they are available in the HTTP API and console
- Add `Grafana` field `userConfig.service_log`, type `boolean`: Store logs for the service so that they are available in the HTTP API and console
- Add `KafkaConnect` field `userConfig.service_log`, type `boolean`: Store logs for the service so that they are available in the HTTP API and console
- Add `Kafka` field `userConfig.kafka_rest_config.name_strategy_validation`, type `boolean`: If true, validate that given schema is registered under expected subject name by the used name strategy when producing messages
- Add `Kafka` field `userConfig.service_log`, type `boolean`: Store logs for the service so that they are available in the HTTP API and console
- Add `MySQL` field `userConfig.service_log`, type `boolean`: Store logs for the service so that they are available in the HTTP API and console
- Add `OpenSearch` field `userConfig.service_log`, type `boolean`: Store logs for the service so that they are available in the HTTP API and console
- Add `PostgreSQL` field `userConfig.pg_qualstats`, type `object`: System-wide settings for the pg_qualstats extension
- Add `PostgreSQL` field `userConfig.service_log`, type `boolean`: Store logs for the service so that they are available in the HTTP API and console
- Add `Redis` field `userConfig.service_log`, type `boolean`: Store logs for the service so that they are available in the HTTP API and console

### v0.15.0 - 2023-11-17
"},{"location":"changelog.html#v0140-2023-09-21","title":"v0.14.0 - 2023-09-21","text":"ServiceIntegration
: do not send empty user config to the API string
type fields to the documentationClickhouse
field userConfig.private_access.clickhouse_mysql
, type boolean
: Allow clients to connect to clickhouse_mysql with a DNS name that always resolves to the service's private IP addressesClickhouse
field userConfig.privatelink_access.clickhouse_mysql
, type boolean
: Enable clickhouse_mysqlClickhouse
field userConfig.public_access.clickhouse_mysql
, type boolean
: Allow clients to connect to clickhouse_mysql from the public internet for service nodes that are in a project VPC or another type of private networkGrafana
field userConfig.unified_alerting_enabled
, type boolean
: Enable or disable Grafana unified alerting functionalityKafka
field userConfig.aiven_kafka_topic_messages
, type boolean
: Allow access to read Kafka topic messages in the Aiven Console and REST APIKafka
field userConfig.kafka.sasl_oauthbearer_expected_audience
, type string
: The (optional) comma-delimited setting for the broker to use to verify that the JWT was issued for one of the expected audiencesKafka
field userConfig.kafka.sasl_oauthbearer_expected_issuer
, type string
: Optional setting for the broker to use to verify that the JWT was created by the expected issuerKafka
field userConfig.kafka.sasl_oauthbearer_jwks_endpoint_url
, type string
: OIDC JWKS endpoint URL. By setting this the SASL SSL OAuth2/OIDC authentication is enabledKafka
field userConfig.kafka.sasl_oauthbearer_sub_claim_name
, type string
: Name of the scope from which to extract the subject claim from the JWT. Defaults to subKafka
field userConfig.kafka_version
: enum [3.1, 3.3, 3.4, 3.5]
\u2192 [3.1, 3.3, 3.4, 3.5, 3.6]
Kafka
field userConfig.tiered_storage.local_cache.size
: deprecatedOpenSearch
field userConfig.opensearch.indices_memory_max_index_buffer_size
, type integer
: Absolute value. Default is unbound. Doesn't work without indices.memory.index_buffer_sizeOpenSearch
field userConfig.opensearch.indices_memory_min_index_buffer_size
, type integer
: Absolute value. Default is 48mb. Doesn't work without indices.memory.index_buffer_sizeOpenSearch
field userConfig.opensearch.auth_failure_listeners.internal_authentication_backend_limiting.authentication_backend
: enum [internal]
OpenSearch
field userConfig.opensearch.auth_failure_listeners.internal_authentication_backend_limiting.type
: enum [username]
OpenSearch
field userConfig.opensearch.auth_failure_listeners.ip_rate_limiting.type
: enum [ip]
OpenSearch
field userConfig.opensearch.search_max_buckets
: maximum 65536
\u2192 1000000
ServiceIntegration
field kafkaMirrormaker.kafka_mirrormaker.producer_max_request_size
: maximum 67108864
\u2192 268435456
"},{"location":"changelog.html#v0130-2023-08-18","title":"v0.13.0 - 2023-08-18","text":"projectVpcId
and projectVPCRef
mutablenil
user config conversionCassandra
kind option additional_backup_regions
Grafana
kind option auto_login
Kafka
kind properties log_local_retention_bytes
, log_local_retention_ms
Kafka
kind option remote_log_storage_system_enable
OpenSearch
kind option auth_failure_listeners
OpenSearch
kind Index State Management options
"},{"location":"changelog.html#v0123-2023-07-13","title":"v0.12.3 - 2023-07-13","text":"Kafka
Kafka
version 3.5
Kafka
spec property scheduled_rebalance_max_delay_ms
Kafka
spec property remote_log_storage_system_enable
KafkaConnect
spec property scheduled_rebalance_max_delay_ms
OpenSearch
spec property openid
"},{"location":"changelog.html#v0122-2023-06-20","title":"v0.12.2 - 2023-06-20","text":"KAFKA_SCHEMA_REGISTRY_HOST
and KAFKA_SCHEMA_REGISTRY_PORT
for Kafka
KAFKA_CONNECT_HOST
, KAFKA_CONNECT_PORT
, KAFKA_REST_HOST
and KAFKA_REST_PORT
for Kafka
. Thanks to @Dariusch
"},{"location":"changelog.html#v0120-2023-05-10","title":"v0.12.0 - 2023-05-10","text":"unclean_leader_election_enable
from KafkaTopic
kind configKAFKA_SASL_PORT
for Kafka
kind if SASL
authentication method is enabledredis
options to datadog ServiceIntegration
Cassandra
version 3
Kafka
versions 3.1
and 3.4
kafka_rest_config.producer_max_request_size
optionkafka_mirrormaker.producer_compression_type
option
"},{"location":"changelog.html#v0110-2023-04-25","title":"v0.11.0 - 2023-04-25","text":"clusterRole.create
option to Helm chart. Thanks to @ryaneorthOpenSearch.spec.userConfig.idp_pemtrustedcas_content
option. Specifies the PEM-encoded root certificate authority (CA) content for the SAML identity provider (IdP) server verification.
"},{"location":"changelog.html#v0100-2023-04-17","title":"v0.10.0 - 2023-04-17","text":"ServiceIntegration
kind SourceProjectName
and DestinationProjectName
fieldsServiceIntegration
fields MaxLength
validationServiceIntegration
validation: multiple user configs cannot be setServiceIntegration
, should not require destinationServiceName
or sourceEndpointID
fieldServiceIntegration
, add missing external_aws_cloudwatch_metrics
type config serializationServiceIntegration
integration type listannotations
and labels
fields to connInfoSecretTarget
OpenSearch.spec.userConfig.opensearch.search_max_buckets
maximum to 65536
"},{"location":"changelog.html#v090-2023-03-03","title":"v0.9.0 - 2023-03-03","text":"plan
as a required fieldminumim
, maximum
validations for number
typeip_filter
backward compatabilityclickhouseKafka.tables.data_format-property
enum RawBLOB
valueuserConfig.opensearch.email_sender_username
validation patternlog_cleaner_min_cleanable_ratio
minimum and maximum validation rules3.2
, reached EOL10
, reached EOLProjectVPC
by ID
to avoid conflicts ProjectVPC
deletion by exiting on DELETING
statusClickhouseUser
controllerClickhouseUser.spec.project
and ClickhouseUser.spec.serviceName
as immutablesignalfx
"},{"location":"changelog.html#v080-2023-02-15","title":"v0.8.0 - 2023-02-15","text":"AuthSecretRef
fields marked as requireddatadog
, kafka_connect
, kafka_logs
, metrics
clickhouse_postgresql
, clickhouse_kafka
, clickhouse_kafka
, logs
, external_aws_cloudwatch_metrics
KafkaTopic.Spec.topicName
field. Unlike the metadata.name
, supports additional characters and has a longer length. KafkaTopic.Spec.topicName
replaces metadata.name
in future releases and will be marked as required.false
value for termination_protection
propertymin_cleanable_dirty_ratio
. Thanks to @TV2rduserConfig
property. After new charts are installed, update your existing instances manually using the kubectl edit
command according to the API reference.
"},{"location":"changelog.html#v071-2023-01-24","title":"v0.7.1 - 2023-01-24","text":"ip_filter
field is now of object
typeserviceIntegrations
on service types. Only the read_replica
type is available.min_cleanable_dirty_ratio
config field supportspec.disk_space
propertylinux/amd64
build. Thanks to @christoffer-eide
"},{"location":"changelog.html#v060-2023-01-16","title":"v0.6.0 - 2023-01-16","text":"
"},{"location":"changelog.html#v052-2022-12-09","title":"v0.5.2 - 2022-12-09","text":"never
from choices of maintenance dowdevelopment
flag to configure logger's behaviormake generate-user-configs
)genericServiceHandler
to generalize service management
"},{"location":"changelog.html#v051-2022-11-28","title":"v0.5.1 - 2022-11-28","text":"
"},{"location":"changelog.html#v050-2022-11-27","title":"v0.5.0 - 2022-11-27","text":"KafkaACL
deletion
"},{"location":"changelog.html#v040-2022-08-04","title":"v0.4.0 - 2022-08-04","text":"ProjectVPCRef
property to Kafka
, OpenSearch
, Clickhouse
and Redis
kinds to get ProjectVPC
ID when resource is readyProjectVPC
deletion, deletes by ID first if possible, then tries by nameclient.Object
storage update data loss
"},{"location":"changelog.html#v020-2021-11-17","title":"v0.2.0 - 2021-11-17","text":"READY
, and the STATUS
is Running
.kubectl get pod -n aiven-operator-system\n
NAME READY STATUS RESTARTS AGE\naiven-operator-controller-manager-576d944499-ggttj 1/1 Running 0 12m\n
cert-manager
Pods are also running.kubectl get pod -n cert-manager\n
"},{"location":"troubleshooting.html#visualizing-the-operator-logs","title":"Visualizing the operator logs","text":"NAME READY STATUS RESTARTS AGE\ncert-manager-7dd5854bb4-85cpv 1/1 Running 0 76s\ncert-manager-cainjector-64c949654c-n2z8l 1/1 Running 0 77s\ncert-manager-webhook-6bdffc7c9d-47w6z 1/1 Running 0 76s\n
"},{"location":"troubleshooting.html#verifing-the-operator-version","title":"Verifing the operator version","text":"kubectl logs -n aiven-operator-system -l control-plane=controller-manager\n
"},{"location":"troubleshooting.html#known-issues-and-limitations","title":"Known issues and limitations","text":"kubectl get pod -n aiven-operator-system -l control-plane=controller-manager -o jsonpath=\"{.items[0].spec.containers[0].image}\"\n
"},{"location":"troubleshooting.html#impact","title":"Impact","text":"MountVolume.SetUp failed for volume \"cert\" : secret \"webhook-server-cert\" not found\n
kubectl get pod -n cert-manager\n
"},{"location":"api-reference/index.html","title":"aiven.io/v1alpha1","text":"NAME READY STATUS RESTARTS AGE\ncert-manager-7dd5854bb4-85cpv 1/1 Running 0 76s\ncert-manager-cainjector-64c949654c-n2z8l 1/1 Running 0 77s\ncert-manager-webhook-6bdffc7c9d-47w6z 1/1 Running 0 76s\n
"},{"location":"api-reference/cassandra.html#Cassandra","title":"Cassandra","text":"apiVersion: aiven.io/v1alpha1\nkind: Cassandra\nmetadata:\n name: my-cassandra\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n connInfoSecretTarget:\n name: cassandra-secret\n prefix: MY_SECRET_PREFIX_\n annotations:\n foo: bar\n labels:\n baz: egg\n\n project: aiven-project-name\n cloudName: google-europe-west1\n plan: startup-4\n\n maintenanceWindowDow: sunday\n maintenanceWindowTime: 11:00:00\n\n userConfig:\n migrate_sstableloader: true\n public_access:\n prometheus: true\n ip_filter:\n - network: 0.0.0.0\n description: whatever\n - network: 10.20.0.0/16\n
"},{"location":"api-reference/cassandra.html#spec","title":"spec","text":"apiVersion
(string). Value aiven.io/v1alpha1
.kind
(string). Value Cassandra
.metadata
(object). Data that identifies the object, including a name
string and optional namespace
.spec
(object). CassandraSpec defines the desired state of Cassandra. See below for nested schema.Cassandra
.
plan
(string, MaxLength: 128). Subscription plan.project
(string, Immutable, MaxLength: 63, Format: ^[a-zA-Z0-9_-]*$
). Target project.
"},{"location":"api-reference/cassandra.html#spec.authSecretRef","title":"authSecretRef","text":"authSecretRef
(object). Authentication reference to Aiven token in a secret. See below for nested schema.cloudName
(string, MaxLength: 256). Cloud the service runs in.connInfoSecretTarget
(object). Information regarding secret creation. Exposed keys: CASSANDRA_HOST
, CASSANDRA_PORT
, CASSANDRA_USER
, CASSANDRA_PASSWORD
, CASSANDRA_URI
, CASSANDRA_HOSTS
. See below for nested schema.disk_space
(string, Format: ^[1-9][0-9]*(GiB|G)*
). The disk space of the service, possible values depend on the service type, the cloud provider and the project. Reducing will result in the service re-balancing.maintenanceWindowDow
(string, Enum: monday
, tuesday
, wednesday
, thursday
, friday
, saturday
, sunday
). Day of week when maintenance operations should be performed. One monday, tuesday, wednesday, etc.maintenanceWindowTime
(string, MaxLength: 8). Time of day when maintenance operations should be performed. UTC time in HH:mm:ss format.projectVPCRef
(object). ProjectVPCRef reference to ProjectVPC resource to use its ID as ProjectVPCID automatically. See below for nested schema.projectVpcId
(string, MaxLength: 36). Identifier of the VPC the service should be in, if any.serviceIntegrations
(array of objects, Immutable, MaxItems: 1). Service integrations to specify when creating a service. Not applied after initial service creation. See below for nested schema.tags
(object, AdditionalProperties: string). Tags are key-value pairs that allow you to categorize services.terminationProtection
(boolean). Prevent service from being deleted. It is recommended to have this enabled for all services.userConfig
(object). Cassandra specific user configuration options. See below for nested schema.spec
.
"},{"location":"api-reference/cassandra.html#spec.connInfoSecretTarget","title":"connInfoSecretTarget","text":"key
(string, MinLength: 1). name
(string, MinLength: 1). spec
.CASSANDRA_HOST
, CASSANDRA_PORT
, CASSANDRA_USER
, CASSANDRA_PASSWORD
, CASSANDRA_URI
, CASSANDRA_HOSTS
.
name
(string). Name of the secret resource to be created. By default, is equal to the resource name.
"},{"location":"api-reference/cassandra.html#spec.projectVPCRef","title":"projectVPCRef","text":"annotations
(object, AdditionalProperties: string). Annotations added to the secret.labels
(object, AdditionalProperties: string). Labels added to the secret.prefix
(string). Prefix for the secret's keys. Added \"as is\" without any transformations. By default, is equal to the kind name in uppercase + underscore, e.g. KAFKA_
, REDIS_
, etc.spec
.
name
(string, MinLength: 1).
"},{"location":"api-reference/cassandra.html#spec.serviceIntegrations","title":"serviceIntegrations","text":"namespace
(string, MinLength: 1). spec
.
"},{"location":"api-reference/cassandra.html#spec.userConfig","title":"userConfig","text":"integrationType
(string, Enum: read_replica
). sourceServiceName
(string, MinLength: 1, MaxLength: 64). spec
.
"},{"location":"api-reference/cassandra.html#spec.userConfig.cassandra","title":"cassandra","text":"additional_backup_regions
(array of strings, MaxItems: 1). Deprecated. Additional Cloud Regions for Backup Replication.backup_hour
(integer, Minimum: 0, Maximum: 23). The hour of day (in UTC) when backup for the service is started. New backup is only started if previous backup has already completed.backup_minute
(integer, Minimum: 0, Maximum: 59). The minute of an hour when backup for the service is started. New backup is only started if previous backup has already completed.cassandra
(object). cassandra configuration values. See below for nested schema.cassandra_version
(string, Enum: 4
, 3
). Cassandra major version.ip_filter
(array of objects, MaxItems: 1024). Allow incoming connections from CIDR address block, e.g. 10.20.0.0/16
. See below for nested schema.migrate_sstableloader
(boolean). Sets the service into migration mode enabling the sstableloader utility to be used to upload Cassandra data files. Available only on service create.private_access
(object). Allow access to selected service ports from private networks. See below for nested schema.project_to_fork_from
(string, Immutable, MaxLength: 63). Name of another project to fork a service from. This has effect only when a new service is being created.public_access
(object). Allow access to selected service ports from the public Internet. See below for nested schema.service_to_fork_from
(string, Immutable, MaxLength: 64). Name of another service to fork from. This has effect only when a new service is being created.service_to_join_with
(string, MaxLength: 64). When bootstrapping, instead of creating a new Cassandra cluster try to join an existing one from another service. Can only be set on service creation.static_ips
(boolean). Use static public IP addresses.spec.userConfig
.
"},{"location":"api-reference/cassandra.html#spec.userConfig.ip_filter","title":"ip_filter","text":"batch_size_fail_threshold_in_kb
(integer, Minimum: 1, Maximum: 1000000). Fail any multiple-partition batch exceeding this value. 50kb (10x warn threshold) by default.batch_size_warn_threshold_in_kb
(integer, Minimum: 1, Maximum: 1000000). Log a warning message on any multiple-partition batch size exceeding this value.5kb per batch by default.Caution should be taken on increasing the size of this thresholdas it can lead to node instability.datacenter
(string, MaxLength: 128). Name of the datacenter to which nodes of this service belong. Can be set only when creating the service.spec.userConfig
.10.20.0.0/16
.
network
(string, MaxLength: 43). CIDR address block.
"},{"location":"api-reference/cassandra.html#spec.userConfig.private_access","title":"private_access","text":"description
(string, MaxLength: 1024). Description for IP filter list entry.spec.userConfig
.
"},{"location":"api-reference/cassandra.html#spec.userConfig.public_access","title":"public_access","text":"prometheus
(boolean). Allow clients to connect to prometheus with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.spec.userConfig
.
"},{"location":"api-reference/clickhouse.html","title":"Clickhouse","text":""},{"location":"api-reference/clickhouse.html#usage-example","title":"Usage example","text":"prometheus
(boolean). Allow clients to connect to prometheus from the public internet for service nodes that are in a project VPC or another type of private network.
"},{"location":"api-reference/clickhouse.html#Clickhouse","title":"Clickhouse","text":"apiVersion: aiven.io/v1alpha1\nkind: Clickhouse\nmetadata:\n name: my-clickhouse\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n connInfoSecretTarget:\n name: clickhouse-secret\n prefix: MY_SECRET_PREFIX_\n annotations:\n foo: bar\n labels:\n baz: egg\n\n project: my-aiven-project\n cloudName: google-europe-west1\n plan: startup-16\n\n maintenanceWindowDow: friday\n maintenanceWindowTime: 23:00:00\n
"},{"location":"api-reference/clickhouse.html#spec","title":"spec","text":"apiVersion
(string). Value aiven.io/v1alpha1
.kind
(string). Value Clickhouse
.metadata
(object). Data that identifies the object, including a name
string and optional namespace
.spec
(object). ClickhouseSpec defines the desired state of Clickhouse. See below for nested schema.Clickhouse
.
plan
(string, MaxLength: 128). Subscription plan.project
(string, Immutable, MaxLength: 63, Format: ^[a-zA-Z0-9_-]*$
). Target project.
"},{"location":"api-reference/clickhouse.html#spec.authSecretRef","title":"authSecretRef","text":"authSecretRef
(object). Authentication reference to Aiven token in a secret. See below for nested schema.cloudName
(string, MaxLength: 256). Cloud the service runs in.connInfoSecretTarget
(object). Information regarding secret creation. Exposed keys: CLICKHOUSE_HOST
, CLICKHOUSE_PORT
, CLICKHOUSE_USER
, CLICKHOUSE_PASSWORD
. See below for nested schema.disk_space
(string, Format: ^[1-9][0-9]*(GiB|G)*
). The disk space of the service, possible values depend on the service type, the cloud provider and the project. Reducing will result in the service re-balancing.maintenanceWindowDow
(string, Enum: monday
, tuesday
, wednesday
, thursday
, friday
, saturday
, sunday
). Day of week when maintenance operations should be performed. One monday, tuesday, wednesday, etc.maintenanceWindowTime
(string, MaxLength: 8). Time of day when maintenance operations should be performed. UTC time in HH:mm:ss format.projectVPCRef
(object). ProjectVPCRef reference to ProjectVPC resource to use its ID as ProjectVPCID automatically. See below for nested schema.projectVpcId
(string, MaxLength: 36). Identifier of the VPC the service should be in, if any.serviceIntegrations
(array of objects, Immutable, MaxItems: 1). Service integrations to specify when creating a service. Not applied after initial service creation. See below for nested schema.tags
(object, AdditionalProperties: string). Tags are key-value pairs that allow you to categorize services.terminationProtection
(boolean). Prevent service from being deleted. It is recommended to have this enabled for all services.userConfig
(object). OpenSearch specific user configuration options. See below for nested schema.spec
.
"},{"location":"api-reference/clickhouse.html#spec.connInfoSecretTarget","title":"connInfoSecretTarget","text":"key
(string, MinLength: 1). name
(string, MinLength: 1). spec
.CLICKHOUSE_HOST
, CLICKHOUSE_PORT
, CLICKHOUSE_USER
, CLICKHOUSE_PASSWORD
.
name
(string). Name of the secret resource to be created. By default, is equal to the resource name.
"},{"location":"api-reference/clickhouse.html#spec.projectVPCRef","title":"projectVPCRef","text":"annotations
(object, AdditionalProperties: string). Annotations added to the secret.labels
(object, AdditionalProperties: string). Labels added to the secret.prefix
(string). Prefix for the secret's keys. Added \"as is\" without any transformations. By default, it is equal to the kind name in uppercase + underscore, e.g. KAFKA_
, REDIS_
, etc.spec
.
name
(string, MinLength: 1).
"},{"location":"api-reference/clickhouse.html#spec.serviceIntegrations","title":"serviceIntegrations","text":"namespace
(string, MinLength: 1). spec
.
"},{"location":"api-reference/clickhouse.html#spec.userConfig","title":"userConfig","text":"integrationType
(string, Enum: read_replica
). sourceServiceName
(string, MinLength: 1, MaxLength: 64). spec
.
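For illustration, a read replica integration declared at service creation might look like the fragment below (a minimal sketch; my-source-clickhouse is a hypothetical existing service):
spec:\n # immutable: only applied when the service is first created\n serviceIntegrations:\n - integrationType: read_replica\n sourceServiceName: my-source-clickhouse\n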
"},{"location":"api-reference/clickhouse.html#spec.userConfig.ip_filter","title":"ip_filter","text":"additional_backup_regions
(array of strings, MaxItems: 1). Additional Cloud Regions for Backup Replication.ip_filter
(array of objects, MaxItems: 1024). Allow incoming connections from CIDR address block, e.g. 10.20.0.0/16
. See below for nested schema.private_access
(object). Allow access to selected service ports from private networks. See below for nested schema.privatelink_access
(object). Allow access to selected service components through Privatelink. See below for nested schema.project_to_fork_from
(string, Immutable, MaxLength: 63). Name of another project to fork a service from. This has effect only when a new service is being created.public_access
(object). Allow access to selected service ports from the public Internet. See below for nested schema.service_to_fork_from
(string, Immutable, MaxLength: 64). Name of another service to fork from. This has effect only when a new service is being created.static_ips
(boolean). Use static public IP addresses.spec.userConfig
.10.20.0.0/16
.
network
(string, MaxLength: 43). CIDR address block.
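As a sketch, restricting incoming connections to a single CIDR block could look like this (the network and description values are placeholders):
spec:\n userConfig:\n ip_filter:\n # placeholder: your office or VPC CIDR range\n - network: 10.20.0.0/16\n description: internal network\n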
"},{"location":"api-reference/clickhouse.html#spec.userConfig.private_access","title":"private_access","text":"description
(string, MaxLength: 1024). Description for IP filter list entry.spec.userConfig
.
"},{"location":"api-reference/clickhouse.html#spec.userConfig.privatelink_access","title":"privatelink_access","text":"clickhouse
(boolean). Allow clients to connect to clickhouse with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.clickhouse_https
(boolean). Allow clients to connect to clickhouse_https with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.clickhouse_mysql
(boolean). Allow clients to connect to clickhouse_mysql with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.prometheus
(boolean). Allow clients to connect to prometheus with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.spec.userConfig
.
"},{"location":"api-reference/clickhouse.html#spec.userConfig.public_access","title":"public_access","text":"clickhouse
(boolean). Enable clickhouse.clickhouse_https
(boolean). Enable clickhouse_https.clickhouse_mysql
(boolean). Enable clickhouse_mysql.prometheus
(boolean). Enable prometheus.spec.userConfig
.
"},{"location":"api-reference/clickhouseuser.html","title":"ClickhouseUser","text":""},{"location":"api-reference/clickhouseuser.html#usage-example","title":"Usage example","text":"clickhouse
(boolean). Allow clients to connect to clickhouse from the public internet for service nodes that are in a project VPC or another type of private network.clickhouse_https
(boolean). Allow clients to connect to clickhouse_https from the public internet for service nodes that are in a project VPC or another type of private network.clickhouse_mysql
(boolean). Allow clients to connect to clickhouse_mysql from the public internet for service nodes that are in a project VPC or another type of private network.prometheus
(boolean). Allow clients to connect to prometheus from the public internet for service nodes that are in a project VPC or another type of private network.
"},{"location":"api-reference/clickhouseuser.html#ClickhouseUser","title":"ClickhouseUser","text":"apiVersion: aiven.io/v1alpha1\nkind: ClickhouseUser\nmetadata:\n name: my-clickhouse-user\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n connInfoSecretTarget:\n name: clickhouse-user-secret\n prefix: MY_SECRET_PREFIX_\n annotations:\n foo: bar\n labels:\n baz: egg\n\n project: my-aiven-project\n serviceName: my-clickhouse\n
"},{"location":"api-reference/clickhouseuser.html#spec","title":"spec","text":"apiVersion
(string). Value aiven.io/v1alpha1
.kind
(string). Value ClickhouseUser
.metadata
(object). Data that identifies the object, including a name
string and optional namespace
.spec
(object). ClickhouseUserSpec defines the desired state of ClickhouseUser. See below for nested schema.ClickhouseUser
.
project
(string, Immutable, MaxLength: 63, Format: ^[a-zA-Z0-9_-]*$
). Project to link the user to.serviceName
(string, Immutable, MaxLength: 63). Service to link the user to.
"},{"location":"api-reference/clickhouseuser.html#spec.authSecretRef","title":"authSecretRef","text":"authSecretRef
(object). Authentication reference to Aiven token in a secret. See below for nested schema.connInfoSecretTarget
(object). Information regarding secret creation. Exposed keys: CLICKHOUSEUSER_HOST
, CLICKHOUSEUSER_PORT
, CLICKHOUSEUSER_USER
, CLICKHOUSEUSER_PASSWORD
. See below for nested schema.spec
.
"},{"location":"api-reference/clickhouseuser.html#spec.connInfoSecretTarget","title":"connInfoSecretTarget","text":"key
(string, MinLength: 1). name
(string, MinLength: 1). spec
.CLICKHOUSEUSER_HOST
, CLICKHOUSEUSER_PORT
, CLICKHOUSEUSER_USER
, CLICKHOUSEUSER_PASSWORD
.
name
(string). Name of the secret resource to be created. By default, it is equal to the resource name.
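One way a workload might consume the generated secret, surfacing the CLICKHOUSEUSER_* keys as environment variables (a sketch; the Pod, image and secret names are placeholders):
apiVersion: v1\nkind: Pod\nmetadata:\n name: my-app\nspec:\n containers:\n - name: app\n image: my-app:latest\n # injects CLICKHOUSEUSER_HOST, CLICKHOUSEUSER_PORT, etc. as env vars\n envFrom:\n - secretRef:\n name: clickhouse-user-secret\n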
"},{"location":"api-reference/connectionpool.html","title":"ConnectionPool","text":""},{"location":"api-reference/connectionpool.html#usage-example","title":"Usage example","text":"annotations
(object, AdditionalProperties: string). Annotations added to the secret.labels
(object, AdditionalProperties: string). Labels added to the secret.prefix
(string). Prefix for the secret's keys. Added \"as is\" without any transformations. By default, it is equal to the kind name in uppercase + underscore, e.g. KAFKA_
, REDIS_
, etc.
"},{"location":"api-reference/connectionpool.html#ConnectionPool","title":"ConnectionPool","text":"apiVersion: aiven.io/v1alpha1\nkind: ConnectionPool\nmetadata:\n name: my-connection-pool\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n connInfoSecretTarget:\n name: connection-pool-secret\n prefix: MY_SECRET_PREFIX_\n annotations:\n foo: bar\n labels:\n baz: egg\n\n project: aiven-project-name\n serviceName: google-europe-west1\n databaseName: my-db\n username: my-user\n poolMode: transaction\n poolSize: 25\n
"},{"location":"api-reference/connectionpool.html#spec","title":"spec","text":"apiVersion
(string). Value aiven.io/v1alpha1
.kind
(string). Value ConnectionPool
.metadata
(object). Data that identifies the object, including a name
string and optional namespace
.spec
(object). ConnectionPoolSpec defines the desired state of ConnectionPool. See below for nested schema.ConnectionPool
.
databaseName
(string, MaxLength: 40). Name of the database the pool connects to.project
(string, MaxLength: 63, Format: ^[a-zA-Z0-9_-]*$
). Target project.serviceName
(string, MaxLength: 63). Service name.username
(string, MaxLength: 64). Name of the service user used to connect to the database.
"},{"location":"api-reference/connectionpool.html#spec.authSecretRef","title":"authSecretRef","text":"authSecretRef
(object). Authentication reference to Aiven token in a secret. See below for nested schema.connInfoSecretTarget
(object). Information regarding secret creation. Exposed keys: CONNECTIONPOOL_HOST
, CONNECTIONPOOL_PORT
, CONNECTIONPOOL_DATABASE
, CONNECTIONPOOL_USER
, CONNECTIONPOOL_PASSWORD
, CONNECTIONPOOL_SSLMODE
, CONNECTIONPOOL_DATABASE_URI
. See below for nested schema.poolMode
(string, Enum: session
, transaction
, statement
). Mode the pool operates in (session, transaction, statement).poolSize
(integer). Number of connections the pool may create towards the backend server.spec
.
"},{"location":"api-reference/connectionpool.html#spec.connInfoSecretTarget","title":"connInfoSecretTarget","text":"key
(string, MinLength: 1). name
(string, MinLength: 1). spec
.CONNECTIONPOOL_HOST
, CONNECTIONPOOL_PORT
, CONNECTIONPOOL_DATABASE
, CONNECTIONPOOL_USER
, CONNECTIONPOOL_PASSWORD
, CONNECTIONPOOL_SSLMODE
, CONNECTIONPOOL_DATABASE_URI
.
name
(string). Name of the secret resource to be created. By default, it is equal to the resource name.
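A container fragment might pick up the pooled connection string like this (a sketch assuming no key prefix was set on the secret; names are placeholders):
containers:\n - name: app\n image: my-app:latest\n env:\n - name: DATABASE_URL\n valueFrom:\n secretKeyRef:\n name: connection-pool-secret\n # key exposed by the operator in the target secret\n key: CONNECTIONPOOL_DATABASE_URI\n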
"},{"location":"api-reference/database.html","title":"Database","text":""},{"location":"api-reference/database.html#usage-example","title":"Usage example","text":"annotations
(object, AdditionalProperties: string). Annotations added to the secret.labels
(object, AdditionalProperties: string). Labels added to the secret.prefix
(string). Prefix for the secret's keys. Added \"as is\" without any transformations. By default, it is equal to the kind name in uppercase + underscore, e.g. KAFKA_
, REDIS_
, etc.
"},{"location":"api-reference/database.html#Database","title":"Database","text":"apiVersion: aiven.io/v1alpha1\nkind: Database\nmetadata:\n name: my-db\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n project: aiven-project-name\n serviceName: google-europe-west1\n\n lcCtype: en_US.UTF-8\n lcCollate: en_US.UTF-8\n
"},{"location":"api-reference/database.html#spec","title":"spec","text":"apiVersion
(string). Value aiven.io/v1alpha1
.kind
(string). Value Database
.metadata
(object). Data that identifies the object, including a name
string and optional namespace
.spec
(object). DatabaseSpec defines the desired state of Database. See below for nested schema.Database
.
project
(string, MaxLength: 63, Format: ^[a-zA-Z0-9_-]*$
). Project to link the database to.serviceName
(string, MaxLength: 63). PostgreSQL service to link the database to.
"},{"location":"api-reference/database.html#spec.authSecretRef","title":"authSecretRef","text":"authSecretRef
(object). Authentication reference to Aiven token in a secret. See below for nested schema.lcCollate
(string, MaxLength: 128). Default string sort order (LC_COLLATE) of the database. Default value: en_US.UTF-8.lcCtype
(string, MaxLength: 128). Default character classification (LC_CTYPE) of the database. Default value: en_US.UTF-8.terminationProtection
(boolean). A Kubernetes-side deletion protection that prevents the database from being deleted by Kubernetes. It is recommended to enable this for any production database containing critical data.spec
.
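For a database holding critical data, enabling the protection might look like this (a minimal sketch; names are placeholders):
apiVersion: aiven.io/v1alpha1\nkind: Database\nmetadata:\n name: my-prod-db\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n project: aiven-project-name\n serviceName: my-postgresql\n # Kubernetes-side guard against accidental deletion\n terminationProtection: true\n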
"},{"location":"api-reference/grafana.html","title":"Grafana","text":""},{"location":"api-reference/grafana.html#usage-example","title":"Usage example","text":"key
(string, MinLength: 1). name
(string, MinLength: 1).
"},{"location":"api-reference/grafana.html#Grafana","title":"Grafana","text":"apiVersion: aiven.io/v1alpha1\nkind: Grafana\nmetadata:\n name: my-grafana\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n connInfoSecretTarget:\n name: grafana-secret\n prefix: MY_SECRET_PREFIX_\n annotations:\n foo: bar\n labels:\n baz: egg\n\n project: my-aiven-project\n cloudName: google-europe-west1\n plan: startup-1\n\n maintenanceWindowDow: sunday\n maintenanceWindowTime: 11:00:00\n\n userConfig:\n public_access:\n grafana: true\n ip_filter:\n - network: 0.0.0.0\n description: whatever\n - network: 10.20.0.0/16\n
"},{"location":"api-reference/grafana.html#spec","title":"spec","text":"apiVersion
(string). Value aiven.io/v1alpha1
.kind
(string). Value Grafana
.metadata
(object). Data that identifies the object, including a name
string and optional namespace
.spec
(object). GrafanaSpec defines the desired state of Grafana. See below for nested schema.Grafana
.
plan
(string, MaxLength: 128). Subscription plan.project
(string, Immutable, MaxLength: 63, Format: ^[a-zA-Z0-9_-]*$
). Target project.
"},{"location":"api-reference/grafana.html#spec.authSecretRef","title":"authSecretRef","text":"authSecretRef
(object). Authentication reference to Aiven token in a secret. See below for nested schema.cloudName
(string, MaxLength: 256). Cloud the service runs in.connInfoSecretTarget
(object). Information regarding secret creation. Exposed keys: GRAFANA_HOST
, GRAFANA_PORT
, GRAFANA_USER
, GRAFANA_PASSWORD
, GRAFANA_URI
, GRAFANA_HOSTS
. See below for nested schema.disk_space
(string, Format: ^[1-9][0-9]*(GiB|G)*
). The disk space of the service; possible values depend on the service type, the cloud provider and the project. Reducing it will result in the service re-balancing.maintenanceWindowDow
(string, Enum: monday
, tuesday
, wednesday
, thursday
, friday
, saturday
, sunday
). Day of week when maintenance operations should be performed. One of: monday, tuesday, wednesday, etc.maintenanceWindowTime
(string, MaxLength: 8). Time of day when maintenance operations should be performed. UTC time in HH:mm:ss format.projectVPCRef
(object). ProjectVPCRef reference to ProjectVPC resource to use its ID as ProjectVPCID automatically. See below for nested schema.projectVpcId
(string, MaxLength: 36). Identifier of the VPC the service should be in, if any.serviceIntegrations
(array of objects, Immutable, MaxItems: 1). Service integrations to specify when creating a service. Not applied after initial service creation. See below for nested schema.tags
(object, AdditionalProperties: string). Tags are key-value pairs that allow you to categorize services.terminationProtection
(boolean). Prevent service from being deleted. It is recommended to have this enabled for all services.userConfig
(object). Grafana specific user configuration options. See below for nested schema.spec
.
"},{"location":"api-reference/grafana.html#spec.connInfoSecretTarget","title":"connInfoSecretTarget","text":"key
(string, MinLength: 1). name
(string, MinLength: 1). spec
.GRAFANA_HOST
, GRAFANA_PORT
, GRAFANA_USER
, GRAFANA_PASSWORD
, GRAFANA_URI
, GRAFANA_HOSTS
.
name
(string). Name of the secret resource to be created. By default, it is equal to the resource name.
"},{"location":"api-reference/grafana.html#spec.projectVPCRef","title":"projectVPCRef","text":"annotations
(object, AdditionalProperties: string). Annotations added to the secret.labels
(object, AdditionalProperties: string). Labels added to the secret.prefix
(string). Prefix for the secret's keys. Added \"as is\" without any transformations. By default, it is equal to the kind name in uppercase + underscore, e.g. KAFKA_
, REDIS_
, etc.spec
.
name
(string, MinLength: 1).
"},{"location":"api-reference/grafana.html#spec.serviceIntegrations","title":"serviceIntegrations","text":"namespace
(string, MinLength: 1). spec
.
"},{"location":"api-reference/grafana.html#spec.userConfig","title":"userConfig","text":"integrationType
(string, Enum: read_replica
). sourceServiceName
(string, MinLength: 1, MaxLength: 64). spec
.
"},{"location":"api-reference/grafana.html#spec.userConfig.auth_azuread","title":"auth_azuread","text":"additional_backup_regions
(array of strings, MaxItems: 1). Additional Cloud Regions for Backup Replication.alerting_enabled
(boolean). Enable or disable Grafana legacy alerting functionality. This should not be enabled with unified_alerting_enabled.alerting_error_or_timeout
(string, Enum: alerting
, keep_state
). Default error or timeout setting for new alerting rules.alerting_max_annotations_to_keep
(integer, Minimum: 0, Maximum: 1000000). Max number of alert annotations that Grafana stores. 0 (default) keeps all alert annotations.alerting_nodata_or_nullvalues
(string, Enum: alerting
, no_data
, keep_state
, ok
). Default value for 'no data or null values' for new alerting rules.allow_embedding
(boolean). Allow embedding Grafana dashboards with iframe/frame/object/embed tags. Disabled by default to limit impact of clickjacking.auth_azuread
(object). Azure AD OAuth integration. See below for nested schema.auth_basic_enabled
(boolean). Enable or disable basic authentication form, used by Grafana built-in login.auth_generic_oauth
(object). Generic OAuth integration. See below for nested schema.auth_github
(object). GitHub Auth integration. See below for nested schema.auth_gitlab
(object). GitLab Auth integration. See below for nested schema.auth_google
(object). Google Auth integration. See below for nested schema.cookie_samesite
(string, Enum: lax
, strict
, none
). Cookie SameSite attribute: strict
prevents sending cookie for cross-site requests, effectively disabling direct linking from other sites to Grafana. lax
is the default value.custom_domain
(string, MaxLength: 255). Serve the web frontend using a custom CNAME pointing to the Aiven DNS name.dashboard_previews_enabled
(boolean). This feature is new in Grafana 9 and is quite resource intensive. It may cause low-end plans to work more slowly while the dashboard previews are rendering.dashboards_min_refresh_interval
(string, Pattern: ^[0-9]+(ms|s|m|h|d)$
, MaxLength: 16). Signed sequence of decimal numbers, followed by a unit suffix (ms, s, m, h, d), e.g. 30s, 1h.dashboards_versions_to_keep
(integer, Minimum: 1, Maximum: 100). Dashboard versions to keep per dashboard.dataproxy_send_user_header
(boolean). Send X-Grafana-User
header to data source.dataproxy_timeout
(integer, Minimum: 15, Maximum: 90). Timeout for data proxy requests in seconds.date_formats
(object). Grafana date format specifications. See below for nested schema.disable_gravatar
(boolean). Set to true to disable gravatar. Defaults to false (gravatar is enabled).editors_can_admin
(boolean). Editors can manage folders, teams and dashboards created by them.external_image_storage
(object). External image store settings. See below for nested schema.google_analytics_ua_id
(string, Pattern: ^(G|UA|YT|MO)-[a-zA-Z0-9-]+$
, MaxLength: 64). Google Analytics ID.ip_filter
(array of objects, MaxItems: 1024). Allow incoming connections from CIDR address block, e.g. 10.20.0.0/16
. See below for nested schema.metrics_enabled
(boolean). Enable Grafana /metrics endpoint.oauth_allow_insecure_email_lookup
(boolean). Enforce user lookup based on email instead of the unique ID provided by the IdP.private_access
(object). Allow access to selected service ports from private networks. See below for nested schema.privatelink_access
(object). Allow access to selected service components through Privatelink. See below for nested schema.project_to_fork_from
(string, Immutable, MaxLength: 63). Name of another project to fork a service from. This has effect only when a new service is being created.public_access
(object). Allow access to selected service ports from the public Internet. See below for nested schema.recovery_basebackup_name
(string, Pattern: ^[a-zA-Z0-9-_:.]+$
, MaxLength: 128). Name of the basebackup to restore in forked service.service_to_fork_from
(string, Immutable, MaxLength: 64). Name of another service to fork from. This has effect only when a new service is being created.smtp_server
(object). SMTP server settings. See below for nested schema.static_ips
(boolean). Use static public IP addresses.unified_alerting_enabled
(boolean). Enable or disable Grafana unified alerting functionality. By default this is enabled and any legacy alerts will be migrated on upgrade to Grafana 9+. To stay on legacy alerting, set unified_alerting_enabled to false and alerting_enabled to true. See https://grafana.com/docs/grafana/latest/alerting/set-up/migrating-alerts/ for more details.user_auto_assign_org
(boolean). Auto-assign new users on signup to main organization. Defaults to false.user_auto_assign_org_role
(string, Enum: Viewer
, Admin
, Editor
). Set role for new signups. Defaults to Viewer.viewers_can_edit
(boolean). Users with view-only permission can edit but not save dashboards.spec.userConfig
.
auth_url
(string, MaxLength: 2048). Authorization URL.client_id
(string, Pattern: ^[\\040-\\176]+$
, MaxLength: 1024). Client ID from provider.client_secret
(string, Pattern: ^[\\040-\\176]+$
, MaxLength: 1024). Client secret from provider.token_url
(string, MaxLength: 2048). Token URL.
"},{"location":"api-reference/grafana.html#spec.userConfig.auth_generic_oauth","title":"auth_generic_oauth","text":"allow_sign_up
(boolean). Automatically sign up users on successful sign-in.allowed_domains
(array of strings, MaxItems: 50). Allowed domains.allowed_groups
(array of strings, MaxItems: 50). Require users to belong to one of the given groups.spec.userConfig
.
api_url
(string, MaxLength: 2048). API URL.auth_url
(string, MaxLength: 2048). Authorization URL.client_id
(string, Pattern: ^[\\040-\\176]+$
, MaxLength: 1024). Client ID from provider.client_secret
(string, Pattern: ^[\\040-\\176]+$
, MaxLength: 1024). Client secret from provider.token_url
(string, MaxLength: 2048). Token URL.
"},{"location":"api-reference/grafana.html#spec.userConfig.auth_github","title":"auth_github","text":"allow_sign_up
(boolean). Automatically sign up users on successful sign-in.allowed_domains
(array of strings, MaxItems: 50). Allowed domains.allowed_organizations
(array of strings, MaxItems: 50). Require users to be members of one of the listed organizations.auto_login
(boolean). Allow users to bypass the login screen and automatically log in.name
(string, Pattern: ^[a-zA-Z0-9_\\- ]+$
, MaxLength: 128). Name of the OAuth integration.scopes
(array of strings, MaxItems: 50). OAuth scopes.spec.userConfig
.
client_id
(string, Pattern: ^[\\040-\\176]+$
, MaxLength: 1024). Client ID from provider.client_secret
(string, Pattern: ^[\\040-\\176]+$
, MaxLength: 1024). Client secret from provider.
"},{"location":"api-reference/grafana.html#spec.userConfig.auth_gitlab","title":"auth_gitlab","text":"allow_sign_up
(boolean). Automatically sign up users on successful sign-in.allowed_organizations
(array of strings, MaxItems: 50). Require users to belong to one of the given organizations.team_ids
(array of integers, MaxItems: 50). Require users to belong to one of the given team IDs.spec.userConfig
.
allowed_groups
(array of strings, MaxItems: 50). Require users to belong to one of the given groups.client_id
(string, Pattern: ^[\\040-\\176]+$
, MaxLength: 1024). Client ID from provider.client_secret
(string, Pattern: ^[\\040-\\176]+$
, MaxLength: 1024). Client secret from provider.
"},{"location":"api-reference/grafana.html#spec.userConfig.auth_google","title":"auth_google","text":"allow_sign_up
(boolean). Automatically sign up users on successful sign-in.api_url
(string, MaxLength: 2048). API URL. This only needs to be set when using self-hosted GitLab.auth_url
(string, MaxLength: 2048). Authorization URL. This only needs to be set when using self-hosted GitLab.token_url
(string, MaxLength: 2048). Token URL. This only needs to be set when using self-hosted GitLab.spec.userConfig
.
allowed_domains
(array of strings, MaxItems: 64). Domains allowed to sign in to this Grafana.client_id
(string, Pattern: ^[\\040-\\176]+$
, MaxLength: 1024). Client ID from provider.client_secret
(string, Pattern: ^[\\040-\\176]+$
, MaxLength: 1024). Client secret from provider.
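As a sketch, wiring Grafana to Google OAuth could look like this (the client credentials and domain are placeholders):
spec:\n userConfig:\n auth_google:\n allow_sign_up: true\n # placeholders: credentials issued by the Google OAuth consent screen\n client_id: my-client-id.apps.googleusercontent.com\n client_secret: my-client-secret\n allowed_domains:\n - example.com\n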
"},{"location":"api-reference/grafana.html#spec.userConfig.date_formats","title":"date_formats","text":"allow_sign_up
(boolean). Automatically sign up users on successful sign-in.spec.userConfig
.
"},{"location":"api-reference/grafana.html#spec.userConfig.external_image_storage","title":"external_image_storage","text":"default_timezone
(string, MaxLength: 64). Default time zone for user preferences. Value browser
uses browser local time zone.full_date
(string, MaxLength: 128). Moment.js style format string for cases where full date is shown.interval_day
(string, MaxLength: 128). Moment.js style format string used when a time requiring day accuracy is shown.interval_hour
(string, MaxLength: 128). Moment.js style format string used when a time requiring hour accuracy is shown.interval_minute
(string, MaxLength: 128). Moment.js style format string used when a time requiring minute accuracy is shown.interval_month
(string, MaxLength: 128). Moment.js style format string used when a time requiring month accuracy is shown.interval_second
(string, MaxLength: 128). Moment.js style format string used when a time requiring second accuracy is shown.interval_year
(string, MaxLength: 128). Moment.js style format string used when a time requiring year accuracy is shown.spec.userConfig
.
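A minimal date_formats sketch (format strings follow Moment.js syntax; the values shown are illustrative):
spec:\n userConfig:\n date_formats:\n default_timezone: browser\n full_date: \"YYYY-MM-DD HH:mm:ss\"\n interval_minute: \"HH:mm\"\n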
"},{"location":"api-reference/grafana.html#spec.userConfig.ip_filter","title":"ip_filter","text":"access_key
(string, Pattern: ^[A-Z0-9]+$
, MaxLength: 4096). S3 access key. Requires permissions to the S3 bucket for the s3:PutObject and s3:PutObjectAcl actions.bucket_url
(string, MaxLength: 2048). Bucket URL for S3.provider
(string, Enum: s3
). Provider type.secret_key
(string, Pattern: ^[A-Za-z0-9/+=]+$
, MaxLength: 4096). S3 secret key.spec.userConfig
.10.20.0.0/16
.
network
(string, MaxLength: 43). CIDR address block.
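For example, storing rendered images in S3 might look like this (the bucket URL and keys are placeholders):
spec:\n userConfig:\n external_image_storage:\n provider: s3\n bucket_url: https://my-bucket.s3.amazonaws.com/grafana/\n # placeholders: credentials allowed the s3:PutObject and s3:PutObjectAcl actions\n access_key: AKIAEXAMPLE\n secret_key: exampleSecretKey123\n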
"},{"location":"api-reference/grafana.html#spec.userConfig.private_access","title":"private_access","text":"description
(string, MaxLength: 1024). Description for IP filter list entry.spec.userConfig
.
"},{"location":"api-reference/grafana.html#spec.userConfig.privatelink_access","title":"privatelink_access","text":"grafana
(boolean). Allow clients to connect to grafana with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.spec.userConfig
.
"},{"location":"api-reference/grafana.html#spec.userConfig.public_access","title":"public_access","text":"grafana
(boolean). Enable grafana.spec.userConfig
.
"},{"location":"api-reference/grafana.html#spec.userConfig.smtp_server","title":"smtp_server","text":"grafana
(boolean). Allow clients to connect to grafana from the public internet for service nodes that are in a project VPC or another type of private network.spec.userConfig
.
from_address
(string, MaxLength: 319). Address used for sending emails.host
(string, MaxLength: 255). Server hostname or IP.port
(integer, Minimum: 1, Maximum: 65535). SMTP server port.
"},{"location":"api-reference/kafka.html","title":"Kafka","text":""},{"location":"api-reference/kafka.html#usage-example","title":"Usage example","text":"from_name
(string, Pattern: ^[^\\x00-\\x1F]+$
, MaxLength: 128). Name used in outgoing emails, defaults to Grafana.password
(string, Pattern: ^[^\\x00-\\x1F]+$
, MaxLength: 255). Password for SMTP authentication.skip_verify
(boolean). Skip verifying server certificate. Defaults to false.starttls_policy
(string, Enum: OpportunisticStartTLS
, MandatoryStartTLS
, NoStartTLS
). Either OpportunisticStartTLS, MandatoryStartTLS or NoStartTLS. Default is OpportunisticStartTLS.username
(string, Pattern: ^[^\\x00-\\x1F]+$
, MaxLength: 255). Username for SMTP authentication.
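Putting the SMTP options together, a sketch could look like this (the host and credentials are placeholders):
spec:\n userConfig:\n smtp_server:\n host: smtp.example.com\n port: 587\n username: grafana\n password: my-smtp-password\n from_address: grafana@example.com\n from_name: Grafana\n starttls_policy: MandatoryStartTLS\n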
"},{"location":"api-reference/kafka.html#Kafka","title":"Kafka","text":"apiVersion: aiven.io/v1alpha1\nkind: Kafka\nmetadata:\n name: my-kafka\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n connInfoSecretTarget:\n name: kafka-secret\n prefix: MY_SECRET_PREFIX_\n annotations:\n foo: bar\n labels:\n baz: egg\n\n project: my-aiven-project\n cloudName: google-europe-west1\n plan: startup-2\n\n maintenanceWindowDow: friday\n maintenanceWindowTime: 23:00:00\n
"},{"location":"api-reference/kafka.html#spec","title":"spec","text":"apiVersion
(string). Value aiven.io/v1alpha1
.kind
(string). Value Kafka
.metadata
(object). Data that identifies the object, including a name
string and optional namespace
.spec
(object). KafkaSpec defines the desired state of Kafka. See below for nested schema.Kafka
.
plan
(string, MaxLength: 128). Subscription plan.project
(string, Immutable, MaxLength: 63, Format: ^[a-zA-Z0-9_-]*$
). Target project.
"},{"location":"api-reference/kafka.html#spec.authSecretRef","title":"authSecretRef","text":"authSecretRef
(object). Authentication reference to Aiven token in a secret. See below for nested schema.cloudName
(string, MaxLength: 256). Cloud the service runs in.connInfoSecretTarget
(object). Information regarding secret creation. Exposed keys: KAFKA_HOST
, KAFKA_PORT
, KAFKA_USERNAME
, KAFKA_PASSWORD
, KAFKA_ACCESS_CERT
, KAFKA_ACCESS_KEY
, KAFKA_SASL_HOST
, KAFKA_SASL_PORT
, KAFKA_SCHEMA_REGISTRY_HOST
, KAFKA_SCHEMA_REGISTRY_PORT
, KAFKA_CONNECT_HOST
, KAFKA_CONNECT_PORT
, KAFKA_REST_HOST
, KAFKA_REST_PORT
. See below for nested schema.disk_space
(string, Format: ^[1-9][0-9]*(GiB|G)*
). The disk space of the service; possible values depend on the service type, the cloud provider and the project. Reducing it will result in the service re-balancing.karapace
(boolean). Switch the service to use Karapace for schema registry and REST proxy.maintenanceWindowDow
(string, Enum: monday
, tuesday
, wednesday
, thursday
, friday
, saturday
, sunday
). Day of week when maintenance operations should be performed. One of: monday, tuesday, wednesday, etc.maintenanceWindowTime
(string, MaxLength: 8). Time of day when maintenance operations should be performed. UTC time in HH:mm:ss format.projectVPCRef
(object). ProjectVPCRef reference to ProjectVPC resource to use its ID as ProjectVPCID automatically. See below for nested schema.projectVpcId
(string, MaxLength: 36). Identifier of the VPC the service should be in, if any.serviceIntegrations
(array of objects, Immutable, MaxItems: 1). Service integrations to specify when creating a service. Not applied after initial service creation. See below for nested schema.tags
(object, AdditionalProperties: string). Tags are key-value pairs that allow you to categorize services.terminationProtection
(boolean). Prevent service from being deleted. It is recommended to have this enabled for all services.userConfig
(object). Kafka specific user configuration options. See below for nested schema.spec
.
"},{"location":"api-reference/kafka.html#spec.connInfoSecretTarget","title":"connInfoSecretTarget","text":"key
(string, MinLength: 1). name
(string, MinLength: 1). spec
.KAFKA_HOST
, KAFKA_PORT
, KAFKA_USERNAME
, KAFKA_PASSWORD
, KAFKA_ACCESS_CERT
, KAFKA_ACCESS_KEY
, KAFKA_SASL_HOST
, KAFKA_SASL_PORT
, KAFKA_SCHEMA_REGISTRY_HOST
, KAFKA_SCHEMA_REGISTRY_PORT
, KAFKA_CONNECT_HOST
, KAFKA_CONNECT_PORT
, KAFKA_REST_HOST
, KAFKA_REST_PORT
.
name
(string). Name of the secret resource to be created. By default, it is equal to the resource name.
"},{"location":"api-reference/kafka.html#spec.projectVPCRef","title":"projectVPCRef","text":"annotations
(object, AdditionalProperties: string). Annotations added to the secret.labels
(object, AdditionalProperties: string). Labels added to the secret.prefix
(string). Prefix for the secret's keys. Added \"as is\" without any transformations. By default, it is equal to the kind name in uppercase + underscore, e.g. KAFKA_
, REDIS_
, etc.spec
.
name
(string, MinLength: 1).
"},{"location":"api-reference/kafka.html#spec.serviceIntegrations","title":"serviceIntegrations","text":"namespace
(string, MinLength: 1). spec
.
"},{"location":"api-reference/kafka.html#spec.userConfig","title":"userConfig","text":"integrationType
(string, Enum: read_replica
). sourceServiceName
(string, MinLength: 1, MaxLength: 64). spec
.
"},{"location":"api-reference/kafka.html#spec.userConfig.ip_filter","title":"ip_filter","text":"additional_backup_regions
(array of strings, MaxItems: 1). Additional Cloud Regions for Backup Replication.aiven_kafka_topic_messages
(boolean). Allow access to read Kafka topic messages in the Aiven Console and REST API.custom_domain
(string, MaxLength: 255). Serve the web frontend using a custom CNAME pointing to the Aiven DNS name.ip_filter
(array of objects, MaxItems: 1024). Allow incoming connections from CIDR address block, e.g. 10.20.0.0/16
. See below for nested schema.kafka
(object). Kafka broker configuration values. See below for nested schema.kafka_authentication_methods
(object). Kafka authentication methods. See below for nested schema.kafka_connect
(boolean). Enable Kafka Connect service.kafka_connect_config
(object). Kafka Connect configuration values. See below for nested schema.kafka_rest
(boolean). Enable Kafka-REST service.kafka_rest_authorization
(boolean). Enable authorization in Kafka-REST service.kafka_rest_config
(object). Kafka REST configuration. See below for nested schema.kafka_version
(string, Enum: 3.3
, 3.1
, 3.4
, 3.5
, 3.6
). Kafka major version.private_access
(object). Allow access to selected service ports from private networks. See below for nested schema.privatelink_access
(object). Allow access to selected service components through Privatelink. See below for nested schema.public_access
(object). Allow access to selected service ports from the public Internet. See below for nested schema.schema_registry
(boolean). Enable Schema-Registry service.schema_registry_config
(object). Schema Registry configuration. See below for nested schema.static_ips
(boolean). Use static public IP addresses.tiered_storage
(object). Tiered storage configuration. See below for nested schema.spec.userConfig
.10.20.0.0/16
.
network
(string, MaxLength: 43). CIDR address block.
"},{"location":"api-reference/kafka.html#spec.userConfig.kafka","title":"kafka","text":"description
(string, MaxLength: 1024). Description for IP filter list entry.spec.userConfig
.
"},{"location":"api-reference/kafka.html#spec.userConfig.kafka_authentication_methods","title":"kafka_authentication_methods","text":"auto_create_topics_enable
(boolean). Enable auto creation of topics.compression_type
(string, Enum: gzip
, snappy
, lz4
, zstd
, uncompressed
, producer
). Specify the final compression type for a given topic. This configuration accepts the standard compression codecs (gzip
, snappy
, lz4
, zstd
). It additionally accepts uncompressed
which is equivalent to no compression; and producer
which means retain the original compression codec set by the producer.connections_max_idle_ms
(integer, Minimum: 1000, Maximum: 3600000). Idle connections timeout: the server socket processor threads close the connections that idle for longer than this.default_replication_factor
(integer, Minimum: 1, Maximum: 10). Replication factor for autocreated topics.group_initial_rebalance_delay_ms
(integer, Minimum: 0, Maximum: 300000). The amount of time, in milliseconds, the group coordinator will wait for more consumers to join a new group before performing the first rebalance. A longer delay means potentially fewer rebalances, but increases the time until processing begins. The default value for this is 3 seconds. During development and testing it might be desirable to set this to 0 in order to not delay test execution time.group_max_session_timeout_ms
(integer, Minimum: 0, Maximum: 1800000). The maximum allowed session timeout for registered consumers. Longer timeouts give consumers more time to process messages in between heartbeats at the cost of a longer time to detect failures.group_min_session_timeout_ms
(integer, Minimum: 0, Maximum: 60000). The minimum allowed session timeout for registered consumers. Longer timeouts give consumers more time to process messages in between heartbeats at the cost of a longer time to detect failures.log_cleaner_delete_retention_ms
(integer, Minimum: 0, Maximum: 315569260000). How long delete records are retained.log_cleaner_max_compaction_lag_ms
(integer, Minimum: 30000). The maximum amount of time a message will remain uncompacted. Only applicable for logs that are being compacted.log_cleaner_min_cleanable_ratio
(number, Minimum: 0.2, Maximum: 0.9). Controls log compactor frequency. Larger value means more frequent compactions but also more space wasted for logs. Consider setting log.cleaner.max.compaction.lag.ms to enforce compactions sooner, instead of setting a very high value for this option.log_cleaner_min_compaction_lag_ms
(integer, Minimum: 0). The minimum time a message will remain uncompacted in the log. Only applicable for logs that are being compacted.log_cleanup_policy
(string, Enum: delete
, compact
, compact,delete
). The default cleanup policy for segments beyond the retention window.log_flush_interval_messages
(integer, Minimum: 1). The number of messages accumulated on a log partition before messages are flushed to disk.log_flush_interval_ms
(integer, Minimum: 0). The maximum time in ms that a message in any topic is kept in memory before being flushed to disk. If not set, the value in log.flush.scheduler.interval.ms is used.log_index_interval_bytes
(integer, Minimum: 0, Maximum: 104857600). The interval with which Kafka adds an entry to the offset index.log_index_size_max_bytes
(integer, Minimum: 1048576, Maximum: 104857600). The maximum size in bytes of the offset index.log_local_retention_bytes
(integer, Minimum: -2). The maximum size of local log segments that can grow for a partition before it gets eligible for deletion. If set to -2, the value of log.retention.bytes is used. The effective value should always be less than or equal to log.retention.bytes value.log_local_retention_ms
(integer, Minimum: -2). The number of milliseconds to keep the local log segments before it gets eligible for deletion. If set to -2, the value of log.retention.ms is used. The effective value should always be less than or equal to log.retention.ms value.log_message_downconversion_enable
(boolean). This configuration controls whether down-conversion of message formats is enabled to satisfy consume requests.log_message_timestamp_difference_max_ms
(integer, Minimum: 0). The maximum difference allowed between the timestamp when a broker receives a message and the timestamp specified in the message.log_message_timestamp_type
(string, Enum: CreateTime
, LogAppendTime
). Define whether the timestamp in the message is message create time or log append time.log_preallocate
(boolean). Whether to preallocate the file when creating a new segment.log_retention_bytes
(integer, Minimum: -1). The maximum size of the log before deleting messages.log_retention_hours
(integer, Minimum: -1, Maximum: 2147483647). The number of hours to keep a log file before deleting it.log_retention_ms
(integer, Minimum: -1). The number of milliseconds to keep a log file before deleting it. If not set, the value in log.retention.minutes is used. If set to -1, no time limit is applied.log_roll_jitter_ms
(integer, Minimum: 0). The maximum jitter to subtract from logRollTimeMillis (in milliseconds). If not set, the value in log.roll.jitter.hours is used.log_roll_ms
(integer, Minimum: 1). The maximum time before a new log segment is rolled out (in milliseconds).log_segment_bytes
(integer, Minimum: 10485760, Maximum: 1073741824). The maximum size of a single log file.log_segment_delete_delay_ms
(integer, Minimum: 0, Maximum: 3600000). The amount of time to wait before deleting a file from the filesystem.max_connections_per_ip
(integer, Minimum: 256, Maximum: 2147483647). The maximum number of connections allowed from each IP address (defaults to 2147483647).max_incremental_fetch_session_cache_slots
(integer, Minimum: 1000, Maximum: 10000). The maximum number of incremental fetch sessions that the broker will maintain.message_max_bytes
(integer, Minimum: 0, Maximum: 100001200). The maximum size of message that the server can receive.min_insync_replicas
(integer, Minimum: 1, Maximum: 7). When a producer sets acks to all
(or -1
), min.insync.replicas specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful.num_partitions
(integer, Minimum: 1, Maximum: 1000). Number of partitions for autocreated topics.offsets_retention_minutes
(integer, Minimum: 1, Maximum: 2147483647). Log retention window in minutes for offsets topic.producer_purgatory_purge_interval_requests
(integer, Minimum: 10, Maximum: 10000). The purge interval (in number of requests) of the producer request purgatory (defaults to 1000).replica_fetch_max_bytes
(integer, Minimum: 1048576, Maximum: 104857600). The number of bytes of messages to attempt to fetch for each partition (defaults to 1048576). This is not an absolute maximum, if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that progress can be made.replica_fetch_response_max_bytes
(integer, Minimum: 10485760, Maximum: 1048576000). Maximum bytes expected for the entire fetch response (defaults to 10485760). Records are fetched in batches, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that progress can be made. As such, this is not an absolute maximum.sasl_oauthbearer_expected_audience
(string, MaxLength: 128). The (optional) comma-delimited setting for the broker to use to verify that the JWT was issued for one of the expected audiences.sasl_oauthbearer_expected_issuer
(string, MaxLength: 128). Optional setting for the broker to use to verify that the JWT was created by the expected issuer.sasl_oauthbearer_jwks_endpoint_url
(string, MaxLength: 2048). OIDC JWKS endpoint URL. By setting this the SASL SSL OAuth2/OIDC authentication is enabled. See also other options for SASL OAuth2/OIDC.sasl_oauthbearer_sub_claim_name
(string, MaxLength: 128). Name of the scope from which to extract the subject claim from the JWT. Defaults to sub.socket_request_max_bytes
(integer, Minimum: 10485760, Maximum: 209715200). The maximum number of bytes in a socket request (defaults to 104857600).transaction_partition_verification_enable
(boolean). Enable verification that checks that the partition has been added to the transaction before writing transactional records to the partition.transaction_remove_expired_transaction_cleanup_interval_ms
(integer, Minimum: 600000, Maximum: 3600000). The interval at which to remove transactions that have expired due to transactional.id.expiration.ms passing (defaults to 3600000 (1 hour)).transaction_state_log_segment_bytes
(integer, Minimum: 1048576, Maximum: 2147483647). The transaction topic segment bytes should be kept relatively small in order to facilitate faster log compaction and cache loads (defaults to 104857600 (100 mebibytes)).spec.userConfig
.
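To illustrate, a few of these broker options might be tuned together as below (a sketch; the numbers are arbitrary examples, not recommendations):
spec:\n userConfig:\n kafka:\n auto_create_topics_enable: false\n default_replication_factor: 3\n min_insync_replicas: 2\n # keep logs for one week\n log_retention_hours: 168\n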
"},{"location":"api-reference/kafka.html#spec.userConfig.kafka_connect_config","title":"kafka_connect_config","text":"certificate
(boolean). Enable certificate/SSL authentication.sasl
(boolean). Enable SASL authentication.spec.userConfig
.
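For instance, both client authentication methods could be enabled side by side (a minimal sketch):
spec:\n userConfig:\n kafka_authentication_methods:\n certificate: true\n sasl: true\n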
"},{"location":"api-reference/kafka.html#spec.userConfig.kafka_rest_config","title":"kafka_rest_config","text":"connector_client_config_override_policy
(string, Enum: None
, All
). Defines what client configurations can be overridden by the connector. Default is None.consumer_auto_offset_reset
(string, Enum: earliest
, latest
). What to do when there is no initial offset in Kafka or if the current offset does not exist any more on the server. Default is earliest.consumer_fetch_max_bytes
(integer, Minimum: 1048576, Maximum: 104857600). Records are fetched in batches by the consumer, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that the consumer can make progress. As such, this is not an absolute maximum.consumer_isolation_level
(string, Enum: read_uncommitted
, read_committed
). Transaction read isolation level. read_uncommitted is the default, but read_committed can be used if consume-exactly-once behavior is desired.consumer_max_partition_fetch_bytes
(integer, Minimum: 1048576, Maximum: 104857600). Records are fetched in batches by the consumer. If the first record batch in the first non-empty partition of the fetch is larger than this limit, the batch will still be returned to ensure that the consumer can make progress.consumer_max_poll_interval_ms
(integer, Minimum: 1, Maximum: 2147483647). The maximum delay in milliseconds between invocations of poll() when using consumer group management (defaults to 300000).consumer_max_poll_records
(integer, Minimum: 1, Maximum: 10000). The maximum number of records returned in a single call to poll() (defaults to 500).offset_flush_interval_ms
(integer, Minimum: 1, Maximum: 100000000). The interval at which to try committing offsets for tasks (defaults to 60000).offset_flush_timeout_ms
(integer, Minimum: 1, Maximum: 2147483647). Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data to be committed in a future attempt (defaults to 5000).producer_batch_size
(integer, Minimum: 0, Maximum: 5242880). This setting gives the upper bound of the batch size to be sent. If there are fewer than this many bytes accumulated for this partition, the producer will linger
for the linger.ms time waiting for more records to show up. A batch size of zero will disable batching entirely (defaults to 16384).producer_buffer_memory
(integer, Minimum: 5242880, Maximum: 134217728). The total bytes of memory the producer can use to buffer records waiting to be sent to the broker (defaults to 33554432).producer_compression_type
(string, Enum: gzip
, snappy
, lz4
, zstd
, none
). Specify the default compression type for producers. This configuration accepts the standard compression codecs (gzip
, snappy
, lz4
, zstd
). It additionally accepts none
which is the default and equivalent to no compression.producer_linger_ms
(integer, Minimum: 0, Maximum: 5000). This setting gives the upper bound on the delay for batching: once there is batch.size worth of records for a partition it will be sent immediately regardless of this setting, however if there are fewer than this many bytes accumulated for this partition the producer will linger
for the specified time waiting for more records to show up. Defaults to 0.producer_max_request_size
(integer, Minimum: 131072, Maximum: 67108864). This setting will limit the number of record batches the producer will send in a single request to avoid sending huge requests.scheduled_rebalance_max_delay_ms
(integer, Minimum: 0, Maximum: 600000). The maximum delay that is scheduled in order to wait for the return of one or more departed workers before rebalancing and reassigning their connectors and tasks to the group. During this period the connectors and tasks of the departed workers remain unassigned. Defaults to 5 minutes.session_timeout_ms
(integer, Minimum: 1, Maximum: 2147483647). The timeout in milliseconds used to detect failures when using Kafka\u2019s group management facilities (defaults to 10000).spec.userConfig
.
"},{"location":"api-reference/kafka.html#spec.userConfig.private_access","title":"private_access","text":"consumer_enable_auto_commit
(boolean). If true, the consumer's offset will be periodically committed to Kafka in the background.consumer_request_max_bytes
(integer, Minimum: 0, Maximum: 671088640). Maximum number of bytes in unencoded message keys and values returned by a single request.consumer_request_timeout_ms
(integer, Enum: 1000
, 15000
, 30000
, Minimum: 1000, Maximum: 30000). The maximum total time to wait for messages for a request if the maximum number of messages has not yet been reached.producer_acks
(string, Enum: all
, -1
, 0
, 1
). The number of acknowledgments the producer requires the leader to have received before considering a request complete. If set to all
or -1
, the leader will wait for the full set of in-sync replicas to acknowledge the record.producer_compression_type
(string, Enum: gzip
, snappy
, lz4
, zstd
, none
). Specify the default compression type for producers. This configuration accepts the standard compression codecs (gzip
, snappy
, lz4
, zstd
). It additionally accepts none
which is the default and equivalent to no compression.producer_linger_ms
(integer, Minimum: 0, Maximum: 5000). Wait for up to the given delay to allow batching records together.producer_max_request_size
(integer, Minimum: 0, Maximum: 2147483647). The maximum size of a request in bytes. Note that the Kafka broker can also cap the record batch size.simpleconsumer_pool_size_max
(integer, Minimum: 10, Maximum: 250). Maximum number of SimpleConsumers that can be instantiated per broker.spec.userConfig
.
"},{"location":"api-reference/kafka.html#spec.userConfig.privatelink_access","title":"privatelink_access","text":"kafka
(boolean). Allow clients to connect to kafka with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.kafka_connect
(boolean). Allow clients to connect to kafka_connect with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.kafka_rest
(boolean). Allow clients to connect to kafka_rest with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.prometheus
(boolean). Allow clients to connect to prometheus with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.schema_registry
(boolean). Allow clients to connect to schema_registry with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.spec.userConfig
.
"},{"location":"api-reference/kafka.html#spec.userConfig.public_access","title":"public_access","text":"jolokia
(boolean). Enable jolokia.kafka
(boolean). Enable kafka.kafka_connect
(boolean). Enable kafka_connect.kafka_rest
(boolean). Enable kafka_rest.prometheus
(boolean). Enable prometheus.schema_registry
(boolean). Enable schema_registry.spec.userConfig
.
"},{"location":"api-reference/kafka.html#spec.userConfig.schema_registry_config","title":"schema_registry_config","text":"kafka
(boolean). Allow clients to connect to kafka from the public internet for service nodes that are in a project VPC or another type of private network.kafka_connect
(boolean). Allow clients to connect to kafka_connect from the public internet for service nodes that are in a project VPC or another type of private network.kafka_rest
(boolean). Allow clients to connect to kafka_rest from the public internet for service nodes that are in a project VPC or another type of private network.prometheus
(boolean). Allow clients to connect to prometheus from the public internet for service nodes that are in a project VPC or another type of private network.schema_registry
(boolean). Allow clients to connect to schema_registry from the public internet for service nodes that are in a project VPC or another type of private network.spec.userConfig
.
"},{"location":"api-reference/kafka.html#spec.userConfig.tiered_storage","title":"tiered_storage","text":"leader_eligibility
(boolean). If true, Karapace / Schema Registry on the service nodes can participate in leader election. You might need to disable this when the schemas topic is replicated to a secondary cluster and the Karapace / Schema Registry there must not participate in leader election. Defaults to true
.topic_name
(string, MinLength: 1, MaxLength: 249). The single-partition topic that acts as the durable log for the data. This topic must be compacted to avoid losing data due to the retention policy. Please note that changing this configuration in an existing Schema Registry / Karapace setup leads to previous schemas becoming inaccessible, data encoded with them potentially unreadable, and the schema ID sequence being put out of order. It's only possible to do the switch while Schema Registry / Karapace is disabled. Defaults to _schemas
.spec.userConfig
.
"},{"location":"api-reference/kafka.html#spec.userConfig.tiered_storage.local_cache","title":"local_cache","text":"enabled
(boolean). Whether to enable the tiered storage functionality.local_cache
(object). Deprecated. Local cache configuration. See below for nested schema.spec.userConfig.tiered_storage
.
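A minimal tiered storage sketch (the deprecated local_cache block is intentionally omitted):
spec:\n userConfig:\n tiered_storage:\n enabled: true\n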
"},{"location":"api-reference/kafkaacl.html","title":"KafkaACL","text":""},{"location":"api-reference/kafkaacl.html#usage-example","title":"Usage example","text":"size
(integer, Minimum: 1, Maximum: 107374182400). Deprecated. Local cache size in bytes.
"},{"location":"api-reference/kafkaacl.html#KafkaACL","title":"KafkaACL","text":"apiVersion: aiven.io/v1alpha1\nkind: KafkaACL\nmetadata:\n name: my-kafka-acl\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n project: my-aiven-project\n serviceName: my-kafka\n topic: my-topic\n username: my-user\n permission: admin\n
"},{"location":"api-reference/kafkaacl.html#spec","title":"spec","text":"apiVersion
(string). Value aiven.io/v1alpha1
.kind
(string). Value KafkaACL
.metadata
(object). Data that identifies the object, including a name
string and optional namespace
.spec
(object). KafkaACLSpec defines the desired state of KafkaACL. See below for nested schema.KafkaACL
.
permission
(string, Enum: admin
, read
, readwrite
, write
). Kafka permission to grant (admin, read, readwrite, write).project
(string, MaxLength: 63, Format: ^[a-zA-Z0-9_-]*$
). Project to link the Kafka ACL to.serviceName
(string, MaxLength: 63). Service to link the Kafka ACL to.topic
(string). Topic name pattern for the ACL entry.username
(string). Username pattern for the ACL entry.
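As another sketch, a read-only ACL matching a group of consumer users might look like this (the topic and username patterns are placeholders):
apiVersion: aiven.io/v1alpha1\nkind: KafkaACL\nmetadata:\n name: my-read-acl\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n project: my-aiven-project\n serviceName: my-kafka\n topic: my-topic\n # pattern: matches every username starting with consumer-\n username: consumer-*\n permission: read\n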
"},{"location":"api-reference/kafkaacl.html#spec.authSecretRef","title":"authSecretRef","text":"authSecretRef
(object). Authentication reference to Aiven token in a secret. See below for nested schema.spec
.
"},{"location":"api-reference/kafkaconnect.html","title":"KafkaConnect","text":""},{"location":"api-reference/kafkaconnect.html#usage-example","title":"Usage example","text":"key
(string, MinLength: 1). name
(string, MinLength: 1).
"},{"location":"api-reference/kafkaconnect.html#KafkaConnect","title":"KafkaConnect","text":"apiVersion: aiven.io/v1alpha1\nkind: KafkaConnect\nmetadata:\n name: my-kafka-connect\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n project: my-aiven-project\n cloudName: google-europe-west1\n plan: business-4\n\n userConfig:\n kafka_connect:\n consumer_isolation_level: read_committed\n public_access:\n kafka_connect: true\n
"},{"location":"api-reference/kafkaconnect.html#spec","title":"spec","text":"apiVersion
(string). Value aiven.io/v1alpha1
.kind
(string). Value KafkaConnect
.metadata
(object). Data that identifies the object, including a name
string and optional namespace
.spec
(object). KafkaConnectSpec defines the desired state of KafkaConnect. See below for nested schema.KafkaConnect
.
plan
(string, MaxLength: 128). Subscription plan.project
(string, Immutable, MaxLength: 63, Format: ^[a-zA-Z0-9_-]*$
). Target project.
"},{"location":"api-reference/kafkaconnect.html#spec.authSecretRef","title":"authSecretRef","text":"authSecretRef
(object). Authentication reference to Aiven token in a secret. See below for nested schema.cloudName
(string, MaxLength: 256). Cloud the service runs in.maintenanceWindowDow
(string, Enum: monday
, tuesday
, wednesday
, thursday
, friday
, saturday
, sunday
). Day of week when maintenance operations should be performed. One of: monday, tuesday, wednesday, etc.maintenanceWindowTime
(string, MaxLength: 8). Time of day when maintenance operations should be performed. UTC time in HH:mm:ss format.projectVPCRef
(object). ProjectVPCRef reference to ProjectVPC resource to use its ID as ProjectVPCID automatically. See below for nested schema.projectVpcId
(string, MaxLength: 36). Identifier of the VPC the service should be in, if any.serviceIntegrations
(array of objects, Immutable, MaxItems: 1). Service integrations to specify when creating a service. Not applied after initial service creation. See below for nested schema.tags
(object, AdditionalProperties: string). Tags are key-value pairs that allow you to categorize services.terminationProtection
(boolean). Prevent service from being deleted. It is recommended to have this enabled for all services.userConfig
(object). KafkaConnect specific user configuration options. See below for nested schema.spec
.
"},{"location":"api-reference/kafkaconnect.html#spec.projectVPCRef","title":"projectVPCRef","text":"key
(string, MinLength: 1). name
(string, MinLength: 1). spec
.
name
(string, MinLength: 1).
"},{"location":"api-reference/kafkaconnect.html#spec.serviceIntegrations","title":"serviceIntegrations","text":"namespace
(string, MinLength: 1). spec
.
"},{"location":"api-reference/kafkaconnect.html#spec.userConfig","title":"userConfig","text":"integrationType
(string, Enum: read_replica
). sourceServiceName
(string, MinLength: 1, MaxLength: 64). spec
.
"},{"location":"api-reference/kafkaconnect.html#spec.userConfig.ip_filter","title":"ip_filter","text":"additional_backup_regions
(array of strings, MaxItems: 1). Additional Cloud Regions for Backup Replication.ip_filter
(array of objects, MaxItems: 1024). Allow incoming connections from CIDR address block, e.g. 10.20.0.0/16
. See below for nested schema.kafka_connect
(object). Kafka Connect configuration values. See below for nested schema.private_access
(object). Allow access to selected service ports from private networks. See below for nested schema.privatelink_access
(object). Allow access to selected service components through Privatelink. See below for nested schema.public_access
(object). Allow access to selected service ports from the public Internet. See below for nested schema.static_ips
(boolean). Use static public IP addresses.spec.userConfig
.10.20.0.0/16
.
network
(string, MaxLength: 43). CIDR address block.
"},{"location":"api-reference/kafkaconnect.html#spec.userConfig.kafka_connect","title":"kafka_connect","text":"description
(string, MaxLength: 1024). Description for IP filter list entry.spec.userConfig
.
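A minimal sketch of an `ip_filter` list, following the same shape as the MySQL usage example later in this reference; the networks shown are placeholders:

```yaml
spec:
  userConfig:
    ip_filter:
      - network: 10.20.0.0/16
        description: office VPN   # hypothetical entry
      - network: 192.0.2.0/24
```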
"},{"location":"api-reference/kafkaconnect.html#spec.userConfig.private_access","title":"private_access","text":"connector_client_config_override_policy
(string, Enum: None
, All
). Defines what client configurations can be overridden by the connector. Default is None.consumer_auto_offset_reset
(string, Enum: earliest
, latest
). What to do when there is no initial offset in Kafka or if the current offset does not exist any more on the server. Default is earliest.consumer_fetch_max_bytes
(integer, Minimum: 1048576, Maximum: 104857600). Records are fetched in batches by the consumer, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that the consumer can make progress. As such, this is not a absolute maximum.consumer_isolation_level
(string, Enum: read_uncommitted
, read_committed
). Transaction read isolation level. read_uncommitted is the default, but read_committed can be used if consume-exactly-once behavior is desired.consumer_max_partition_fetch_bytes
(integer, Minimum: 1048576, Maximum: 104857600). Records are fetched in batches by the consumer.If the first record batch in the first non-empty partition of the fetch is larger than this limit, the batch will still be returned to ensure that the consumer can make progress.consumer_max_poll_interval_ms
(integer, Minimum: 1, Maximum: 2147483647). The maximum delay in milliseconds between invocations of poll() when using consumer group management (defaults to 300000).consumer_max_poll_records
(integer, Minimum: 1, Maximum: 10000). The maximum number of records returned in a single call to poll() (defaults to 500).offset_flush_interval_ms
(integer, Minimum: 1, Maximum: 100000000). The interval at which to try committing offsets for tasks (defaults to 60000).offset_flush_timeout_ms
(integer, Minimum: 1, Maximum: 2147483647). Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data to be committed in a future attempt (defaults to 5000).producer_batch_size
(integer, Minimum: 0, Maximum: 5242880). This setting gives the upper bound of the batch size to be sent. If there are fewer than this many bytes accumulated for this partition, the producer will linger
for the linger.ms time waiting for more records to show up. A batch size of zero will disable batching entirely (defaults to 16384).producer_buffer_memory
(integer, Minimum: 5242880, Maximum: 134217728). The total bytes of memory the producer can use to buffer records waiting to be sent to the broker (defaults to 33554432).producer_compression_type
(string, Enum: gzip
, snappy
, lz4
, zstd
, none
). Specify the default compression type for producers. This configuration accepts the standard compression codecs (gzip
, snappy
, lz4
, zstd
). It additionally accepts none
which is the default and equivalent to no compression.producer_linger_ms
(integer, Minimum: 0, Maximum: 5000). This setting gives the upper bound on the delay for batching: once there is batch.size worth of records for a partition it will be sent immediately regardless of this setting, however if there are fewer than this many bytes accumulated for this partition the producer will linger
for the specified time waiting for more records to show up. Defaults to 0.producer_max_request_size
(integer, Minimum: 131072, Maximum: 67108864). This setting will limit the number of record batches the producer will send in a single request to avoid sending huge requests.scheduled_rebalance_max_delay_ms
(integer, Minimum: 0, Maximum: 600000). The maximum delay that is scheduled in order to wait for the return of one or more departed workers before rebalancing and reassigning their connectors and tasks to the group. During this period the connectors and tasks of the departed workers remain unassigned. Defaults to 5 minutes.session_timeout_ms
(integer, Minimum: 1, Maximum: 2147483647). The timeout in milliseconds used to detect failures when using Kafka\u2019s group management facilities (defaults to 10000).spec.userConfig
.
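To make the relationship between these keys and the manifest concrete, a hedged sketch of a tuning block; the values are illustrative only, not recommendations:

```yaml
spec:
  userConfig:
    kafka_connect:
      connector_client_config_override_policy: All
      consumer_max_poll_records: 1000     # illustrative value
      producer_compression_type: snappy
```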
"},{"location":"api-reference/kafkaconnect.html#spec.userConfig.privatelink_access","title":"privatelink_access","text":"kafka_connect
(boolean). Allow clients to connect to kafka_connect with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.prometheus
(boolean). Allow clients to connect to prometheus with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.spec.userConfig
.
"},{"location":"api-reference/kafkaconnect.html#spec.userConfig.public_access","title":"public_access","text":"jolokia
(boolean). Enable jolokia.kafka_connect
(boolean). Enable kafka_connect.prometheus
(boolean). Enable prometheus.spec.userConfig
.
"},{"location":"api-reference/kafkaconnector.html","title":"KafkaConnector","text":""},{"location":"api-reference/kafkaconnector.html#KafkaConnector","title":"KafkaConnector","text":"kafka_connect
(boolean). Allow clients to connect to kafka_connect from the public internet for service nodes that are in a project VPC or another type of private network.prometheus
(boolean). Allow clients to connect to prometheus from the public internet for service nodes that are in a project VPC or another type of private network.
"},{"location":"api-reference/kafkaconnector.html#spec","title":"spec","text":"apiVersion
(string). Value aiven.io/v1alpha1
.kind
(string). Value KafkaConnector
.metadata
(object). Data that identifies the object, including a name
string and optional namespace
.spec
(object). KafkaConnectorSpec defines the desired state of KafkaConnector. See below for nested schema.KafkaConnector
.
- `connectorClass` (string, MaxLength: 1024). The Java class of the connector.
- `project` (string, MaxLength: 63, Format: `^[a-zA-Z0-9_-]*$`). Target project.
- `serviceName` (string, MaxLength: 63). Service name.
- `userConfig` (object, AdditionalProperties: string). The connector-specific configuration. To build configuration values from a secret, the template function `{{ fromSecret "name" "key" }}` is provided when interpreting the keys (see the sketch below).
- `authSecretRef` (object). Authentication reference to Aiven token in a secret. See below for nested schema.
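This page has no usage example, so here is a hedged sketch of a KafkaConnector manifest; the connector class, topic, and secret names are hypothetical, and `fromSecret` pulls the password from a Kubernetes secret when the keys are interpreted:

```yaml
apiVersion: aiven.io/v1alpha1
kind: KafkaConnector
metadata:
  name: my-connector            # hypothetical
spec:
  authSecretRef:
    name: aiven-token
    key: token

  project: my-aiven-project
  serviceName: my-kafka-connect
  connectorClass: io.aiven.connect.jdbc.JdbcSinkConnector  # hypothetical class
  userConfig:
    topics: my-topic                                        # hypothetical topic
    connection.password: '{{ fromSecret "db-secret" "password" }}'  # hypothetical secret
```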
"},{"location":"api-reference/kafkaconnector.html#spec.authSecretRef","title":"authSecretRef","text":"authSecretRef
(object). Authentication reference to Aiven token in a secret. See below for nested schema.spec
.
"},{"location":"api-reference/kafkaschema.html","title":"KafkaSchema","text":""},{"location":"api-reference/kafkaschema.html#usage-example","title":"Usage example","text":"key
(string, MinLength: 1). name
(string, MinLength: 1).
"},{"location":"api-reference/kafkaschema.html#KafkaSchema","title":"KafkaSchema","text":"apiVersion: aiven.io/v1alpha1\nkind: KafkaSchema\nmetadata:\n name: my-schema\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n project: my-aiven-project\n serviceName: my-kafka\n subjectName: mny-subject\n compatibilityLevel: BACKWARD\n schema: |\n {\n \"doc\": \"example_doc\",\n \"fields\": [{\n \"default\": 5,\n \"doc\": \"field_doc\",\n \"name\": \"field_name\",\n \"namespace\": \"field_namespace\",\n \"type\": \"int\"\n }],\n \"name\": \"example_name\",\n \"namespace\": \"example_namespace\",\n \"type\": \"record\"\n }\n
"},{"location":"api-reference/kafkaschema.html#spec","title":"spec","text":"apiVersion
(string). Value aiven.io/v1alpha1
.kind
(string). Value KafkaSchema
.metadata
(object). Data that identifies the object, including a name
string and optional namespace
.spec
(object). KafkaSchemaSpec defines the desired state of KafkaSchema. See below for nested schema.KafkaSchema
.
project
(string, MaxLength: 63, Format: ^[a-zA-Z0-9_-]*$
). Project to link the Kafka Schema to.schema
(string). Kafka Schema configuration should be a valid Avro Schema JSON format.serviceName
(string, MaxLength: 63). Service to link the Kafka Schema to.subjectName
(string, MaxLength: 63). Kafka Schema Subject name.
"},{"location":"api-reference/kafkaschema.html#spec.authSecretRef","title":"authSecretRef","text":"authSecretRef
(object). Authentication reference to Aiven token in a secret. See below for nested schema.compatibilityLevel
(string, Enum: BACKWARD
, BACKWARD_TRANSITIVE
, FORWARD
, FORWARD_TRANSITIVE
, FULL
, FULL_TRANSITIVE
, NONE
). Kafka Schemas compatibility level.spec
.
"},{"location":"api-reference/kafkatopic.html","title":"KafkaTopic","text":""},{"location":"api-reference/kafkatopic.html#usage-example","title":"Usage example","text":"key
(string, MinLength: 1). name
(string, MinLength: 1).
"},{"location":"api-reference/kafkatopic.html#KafkaTopic","title":"KafkaTopic","text":"apiVersion: aiven.io/v1alpha1\nkind: KafkaTopic\nmetadata:\n name: kafka-topic\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n project: my-aiven-project\n serviceName: my-kafka\n topicName: my-kafka-topic\n\n replication: 2\n partitions: 1\n\n config:\n min_cleanable_dirty_ratio: 0.2\n
"},{"location":"api-reference/kafkatopic.html#spec","title":"spec","text":"apiVersion
(string). Value aiven.io/v1alpha1
.kind
(string). Value KafkaTopic
.metadata
(object). Data that identifies the object, including a name
string and optional namespace
.spec
(object). KafkaTopicSpec defines the desired state of KafkaTopic. See below for nested schema.KafkaTopic
.
partitions
(integer, Minimum: 1, Maximum: 1000000). Number of partitions to create in the topic.project
(string, MaxLength: 63, Format: ^[a-zA-Z0-9_-]*$
). Target project.replication
(integer, Minimum: 2). Replication factor for the topic.serviceName
(string, MaxLength: 63). Service name.
"},{"location":"api-reference/kafkatopic.html#spec.authSecretRef","title":"authSecretRef","text":"authSecretRef
(object). Authentication reference to Aiven token in a secret. See below for nested schema.config
(object). Kafka topic configuration. See below for nested schema.tags
(array of objects). Kafka topic tags. See below for nested schema.termination_protection
(boolean). It is a Kubernetes side deletion protections, which prevents the kafka topic from being deleted by Kubernetes. It is recommended to enable this for any production databases containing critical data.topicName
(string, Immutable, MinLength: 1, MaxLength: 249). Topic name. If provided, is used instead of metadata.name. This field supports additional characters, has a longer length, and will replace metadata.name in future releases.spec
.
"},{"location":"api-reference/kafkatopic.html#spec.config","title":"config","text":"key
(string, MinLength: 1). name
(string, MinLength: 1). spec
.
"},{"location":"api-reference/kafkatopic.html#spec.tags","title":"tags","text":"cleanup_policy
(string). cleanup.policy value.compression_type
(string). compression.type value.delete_retention_ms
(integer). delete.retention.ms value.file_delete_delay_ms
(integer). file.delete.delay.ms value.flush_messages
(integer). flush.messages value.flush_ms
(integer). flush.ms value.index_interval_bytes
(integer). index.interval.bytes value.max_compaction_lag_ms
(integer). max.compaction.lag.ms value.max_message_bytes
(integer). max.message.bytes value.message_downconversion_enable
(boolean). message.downconversion.enable value.message_format_version
(string). message.format.version value.message_timestamp_difference_max_ms
(integer). message.timestamp.difference.max.ms value.message_timestamp_type
(string). message.timestamp.type value.min_cleanable_dirty_ratio
(number). min.cleanable.dirty.ratio value.min_compaction_lag_ms
(integer). min.compaction.lag.ms value.min_insync_replicas
(integer). min.insync.replicas value.preallocate
(boolean). preallocate value.retention_bytes
(integer). retention.bytes value.retention_ms
(integer). retention.ms value.segment_bytes
(integer). segment.bytes value.segment_index_bytes
(integer). segment.index.bytes value.segment_jitter_ms
(integer). segment.jitter.ms value.segment_ms
(integer). segment.ms value.spec
.
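As a sketch, a retention policy expressed through these config keys; the retention values are illustrative, not recommendations:

```yaml
spec:
  config:
    cleanup_policy: delete
    retention_ms: 604800000      # 7 days, illustrative
    retention_bytes: 1073741824  # 1 GiB, illustrative
```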
## tags

Appears on `spec`.

Kafka topic tags.

- `key` (string, MinLength: 1, MaxLength: 64, Format: `^[a-zA-Z0-9_-]*$`).
"},{"location":"api-reference/mysql.html","title":"MySQL","text":""},{"location":"api-reference/mysql.html#usage-example","title":"Usage example","text":"value
(string, MaxLength: 256, Format: ^[a-zA-Z0-9_-]*$
).
"},{"location":"api-reference/mysql.html#MySQL","title":"MySQL","text":"apiVersion: aiven.io/v1alpha1\nkind: MySQL\nmetadata:\n name: my-mysql\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n connInfoSecretTarget:\n name: mysql-secret\n prefix: MY_SECRET_PREFIX_\n annotations:\n foo: bar\n labels:\n baz: egg\n\n project: my-aiven-project\n cloudName: google-europe-west1\n plan: business-4\n\n maintenanceWindowDow: sunday\n maintenanceWindowTime: 11:00:00\n\n userConfig:\n backup_hour: 17\n backup_minute: 11\n ip_filter:\n - network: 0.0.0.0\n description: whatever\n - network: 10.20.0.0/16\n
"},{"location":"api-reference/mysql.html#spec","title":"spec","text":"apiVersion
(string). Value aiven.io/v1alpha1
.kind
(string). Value MySQL
.metadata
(object). Data that identifies the object, including a name
string and optional namespace
.spec
(object). MySQLSpec defines the desired state of MySQL. See below for nested schema.MySQL
.
plan
(string, MaxLength: 128). Subscription plan.project
(string, Immutable, MaxLength: 63, Format: ^[a-zA-Z0-9_-]*$
). Target project.
"},{"location":"api-reference/mysql.html#spec.authSecretRef","title":"authSecretRef","text":"authSecretRef
(object). Authentication reference to Aiven token in a secret. See below for nested schema.cloudName
(string, MaxLength: 256). Cloud the service runs in.connInfoSecretTarget
(object). Information regarding secret creation. Exposed keys: MYSQL_HOST
, MYSQL_PORT
, MYSQL_DATABASE
, MYSQL_USER
, MYSQL_PASSWORD
, MYSQL_SSL_MODE
, MYSQL_URI
, MYSQL_REPLICA_URI
. See below for nested schema.disk_space
(string, Format: ^[1-9][0-9]*(GiB|G)*
). The disk space of the service, possible values depend on the service type, the cloud provider and the project. Reducing will result in the service re-balancing.maintenanceWindowDow
(string, Enum: monday
, tuesday
, wednesday
, thursday
, friday
, saturday
, sunday
). Day of week when maintenance operations should be performed. One monday, tuesday, wednesday, etc.maintenanceWindowTime
(string, MaxLength: 8). Time of day when maintenance operations should be performed. UTC time in HH:mm:ss format.projectVPCRef
(object). ProjectVPCRef reference to ProjectVPC resource to use its ID as ProjectVPCID automatically. See below for nested schema.projectVpcId
(string, MaxLength: 36). Identifier of the VPC the service should be in, if any.serviceIntegrations
(array of objects, Immutable, MaxItems: 1). Service integrations to specify when creating a service. Not applied after initial service creation. See below for nested schema.tags
(object, AdditionalProperties: string). Tags are key-value pairs that allow you to categorize services.terminationProtection
(boolean). Prevent service from being deleted. It is recommended to have this enabled for all services.userConfig
(object). MySQL specific user configuration options. See below for nested schema.spec
.
"},{"location":"api-reference/mysql.html#spec.connInfoSecretTarget","title":"connInfoSecretTarget","text":"key
(string, MinLength: 1). name
(string, MinLength: 1). spec
.MYSQL_HOST
, MYSQL_PORT
, MYSQL_DATABASE
, MYSQL_USER
, MYSQL_PASSWORD
, MYSQL_SSL_MODE
, MYSQL_URI
, MYSQL_REPLICA_URI
.
name
(string). Name of the secret resource to be created. By default, is equal to the resource name.
"},{"location":"api-reference/mysql.html#spec.projectVPCRef","title":"projectVPCRef","text":"annotations
(object, AdditionalProperties: string). Annotations added to the secret.labels
(object, AdditionalProperties: string). Labels added to the secret.prefix
(string). Prefix for the secret's keys. Added \"as is\" without any transformations. By default, is equal to the kind name in uppercase + underscore, e.g. KAFKA_
, REDIS_
, etc.spec
.
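To show how an application might consume the generated connection-info secret, a hedged sketch of a Pod importing it via `envFrom`; the Pod name and image are placeholders, and the secret name matches the usage example above:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mysql-client            # hypothetical
spec:
  containers:
    - name: app
      image: alpine:3           # placeholder image
      envFrom:
        - secretRef:
            name: mysql-secret  # created via connInfoSecretTarget
      # with prefix MY_SECRET_PREFIX_, keys arrive as
      # MY_SECRET_PREFIX_MYSQL_HOST, MY_SECRET_PREFIX_MYSQL_URI, etc.
```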
## projectVPCRef

Appears on `spec`.

ProjectVPCRef reference to ProjectVPC resource to use its ID as ProjectVPCID automatically.

- `name` (string, MinLength: 1).
- `namespace` (string, MinLength: 1).
"},{"location":"api-reference/mysql.html#spec.userConfig","title":"userConfig","text":"integrationType
(string, Enum: read_replica
). sourceServiceName
(string, MinLength: 1, MaxLength: 64). spec
.
"},{"location":"api-reference/mysql.html#spec.userConfig.ip_filter","title":"ip_filter","text":"additional_backup_regions
(array of strings, MaxItems: 1). Additional Cloud Regions for Backup Replication.admin_password
(string, Immutable, Pattern: ^[a-zA-Z0-9-_]+$
, MinLength: 8, MaxLength: 256). Custom password for admin user. Defaults to random string. This must be set only when a new service is being created.admin_username
(string, Immutable, Pattern: ^[_A-Za-z0-9][-._A-Za-z0-9]{0,63}$
, MaxLength: 64). Custom username for admin user. This must be set only when a new service is being created.backup_hour
(integer, Minimum: 0, Maximum: 23). The hour of day (in UTC) when backup for the service is started. New backup is only started if previous backup has already completed.backup_minute
(integer, Minimum: 0, Maximum: 59). The minute of an hour when backup for the service is started. New backup is only started if previous backup has already completed.binlog_retention_period
(integer, Minimum: 600, Maximum: 86400). The minimum amount of time in seconds to keep binlog entries before deletion. This may be extended for services that require binlog entries for longer than the default for example if using the MySQL Debezium Kafka connector.ip_filter
(array of objects, MaxItems: 1024). Allow incoming connections from CIDR address block, e.g. 10.20.0.0/16
. See below for nested schema.migration
(object). Migrate data from existing server. See below for nested schema.mysql
(object). mysql.conf configuration values. See below for nested schema.mysql_version
(string, Enum: 8
). MySQL major version.private_access
(object). Allow access to selected service ports from private networks. See below for nested schema.privatelink_access
(object). Allow access to selected service components through Privatelink. See below for nested schema.project_to_fork_from
(string, Immutable, MaxLength: 63). Name of another project to fork a service from. This has effect only when a new service is being created.public_access
(object). Allow access to selected service ports from the public Internet. See below for nested schema.recovery_target_time
(string, Immutable, MaxLength: 32). Recovery target time when forking a service. This has effect only when a new service is being created.service_to_fork_from
(string, Immutable, MaxLength: 64). Name of another service to fork from. This has effect only when a new service is being created.static_ips
(boolean). Use static public IP addresses.spec.userConfig
.10.20.0.0/16
.
network
(string, MaxLength: 43). CIDR address block.
"},{"location":"api-reference/mysql.html#spec.userConfig.migration","title":"migration","text":"description
(string, MaxLength: 1024). Description for IP filter list entry.spec.userConfig
.
- `host` (string, MaxLength: 255). Hostname or IP address of the server from which to migrate data.
- `port` (integer, Minimum: 1, Maximum: 65535). Port number of the server from which to migrate data.
- `dbname` (string, MaxLength: 63). Database name for bootstrapping the initial connection.
- `ignore_dbs` (string, MaxLength: 2048). Comma-separated list of databases which should be ignored during migration (supported by MySQL and PostgreSQL only at the moment).
- `method` (string, Enum: `dump`, `replication`). The migration method to be used (currently supported only by Redis, Dragonfly, MySQL and PostgreSQL service types).
- `password` (string, MaxLength: 256). Password for authentication with the server from which to migrate data.
- `ssl` (boolean). The server from which to migrate data is secured with SSL.
- `username` (string, MaxLength: 256). Username for authentication with the server from which to migrate data.
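A hedged sketch of a migration block; the source host, credentials, and ignored databases are placeholders:

```yaml
spec:
  userConfig:
    migration:
      host: old-mysql.example.com   # placeholder source server
      port: 3306
      ssl: true
      username: migrator            # placeholder credentials
      password: change-me
      method: dump
      ignore_dbs: test,staging      # placeholder list
```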
"},{"location":"api-reference/mysql.html#spec.userConfig.private_access","title":"private_access","text":"connect_timeout
(integer, Minimum: 2, Maximum: 3600). The number of seconds that the mysqld server waits for a connect packet before responding with Bad handshake.default_time_zone
(string, MinLength: 2, MaxLength: 100). Default server time zone as an offset from UTC (from -12:00 to +12:00), a time zone name, or SYSTEM
to use the MySQL server default.group_concat_max_len
(integer, Minimum: 4). The maximum permitted result length in bytes for the GROUP_CONCAT() function.information_schema_stats_expiry
(integer, Minimum: 900, Maximum: 31536000). The time, in seconds, before cached statistics expire.innodb_change_buffer_max_size
(integer, Minimum: 0, Maximum: 50). Maximum size for the InnoDB change buffer, as a percentage of the total size of the buffer pool. Default is 25.innodb_flush_neighbors
(integer, Minimum: 0, Maximum: 2). Specifies whether flushing a page from the InnoDB buffer pool also flushes other dirty pages in the same extent (default is 1): 0 - dirty pages in the same extent are not flushed, 1 - flush contiguous dirty pages in the same extent, 2 - flush dirty pages in the same extent.innodb_ft_min_token_size
(integer, Minimum: 0, Maximum: 16). Minimum length of words that are stored in an InnoDB FULLTEXT index. Changing this parameter will lead to a restart of the MySQL service.innodb_ft_server_stopword_table
(string, Pattern: ^.+/.+$
, MaxLength: 1024). This option is used to specify your own InnoDB FULLTEXT index stopword list for all InnoDB tables.innodb_lock_wait_timeout
(integer, Minimum: 1, Maximum: 3600). The length of time in seconds an InnoDB transaction waits for a row lock before giving up. Default is 120.innodb_log_buffer_size
(integer, Minimum: 1048576, Maximum: 4294967295). The size in bytes of the buffer that InnoDB uses to write to the log files on disk.innodb_online_alter_log_max_size
(integer, Minimum: 65536, Maximum: 1099511627776). The upper limit in bytes on the size of the temporary log files used during online DDL operations for InnoDB tables.innodb_print_all_deadlocks
(boolean). When enabled, information about all deadlocks in InnoDB user transactions is recorded in the error log. Disabled by default.innodb_read_io_threads
(integer, Minimum: 1, Maximum: 64). The number of I/O threads for read operations in InnoDB. Default is 4. Changing this parameter will lead to a restart of the MySQL service.innodb_rollback_on_timeout
(boolean). When enabled a transaction timeout causes InnoDB to abort and roll back the entire transaction. Changing this parameter will lead to a restart of the MySQL service.innodb_thread_concurrency
(integer, Minimum: 0, Maximum: 1000). Defines the maximum number of threads permitted inside of InnoDB. Default is 0 (infinite concurrency - no limit).innodb_write_io_threads
(integer, Minimum: 1, Maximum: 64). The number of I/O threads for write operations in InnoDB. Default is 4. Changing this parameter will lead to a restart of the MySQL service.interactive_timeout
(integer, Minimum: 30, Maximum: 604800). The number of seconds the server waits for activity on an interactive connection before closing it.internal_tmp_mem_storage_engine
(string, Enum: TempTable
, MEMORY
). The storage engine for in-memory internal temporary tables.long_query_time
(number, Minimum: 0, Maximum: 3600). The slow_query_logs work as SQL statements that take more than long_query_time seconds to execute. Default is 10s.max_allowed_packet
(integer, Minimum: 102400, Maximum: 1073741824). Size of the largest message in bytes that can be received by the server. Default is 67108864 (64M).max_heap_table_size
(integer, Minimum: 1048576, Maximum: 1073741824). Limits the size of internal in-memory tables. Also set tmp_table_size. Default is 16777216 (16M).net_buffer_length
(integer, Minimum: 1024, Maximum: 1048576). Start sizes of connection buffer and result buffer. Default is 16384 (16K). Changing this parameter will lead to a restart of the MySQL service.net_read_timeout
(integer, Minimum: 1, Maximum: 3600). The number of seconds to wait for more data from a connection before aborting the read.net_write_timeout
(integer, Minimum: 1, Maximum: 3600). The number of seconds to wait for a block to be written to a connection before aborting the write.slow_query_log
(boolean). Slow query log enables capturing of slow queries. Setting slow_query_log to false also truncates the mysql.slow_log table. Default is off.sort_buffer_size
(integer, Minimum: 32768, Maximum: 1073741824). Sort buffer size in bytes for ORDER BY optimization. Default is 262144 (256K).sql_mode
(string, Pattern: ^[A-Z_]*(,[A-Z_]+)*$
, MaxLength: 1024). Global SQL mode. Set to empty to use MySQL server defaults. When creating a new service and not setting this field Aiven default SQL mode (strict, SQL standard compliant) will be assigned.sql_require_primary_key
(boolean). Require primary key to be defined for new tables or old tables modified with ALTER TABLE and fail if missing. It is recommended to always have primary keys because various functionality may break if any large table is missing them.tmp_table_size
(integer, Minimum: 1048576, Maximum: 1073741824). Limits the size of internal in-memory tables. Also set max_heap_table_size. Default is 16777216 (16M).wait_timeout
(integer, Minimum: 1, Maximum: 2147483). The number of seconds the server waits for activity on a noninteractive connection before closing it.spec.userConfig
.
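For orientation, a hedged sketch combining a few of these mysql.conf keys in a manifest; the values are illustrative only:

```yaml
spec:
  userConfig:
    mysql:
      sql_require_primary_key: true
      slow_query_log: true
      long_query_time: 2   # illustrative threshold, in seconds
```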
"},{"location":"api-reference/mysql.html#spec.userConfig.privatelink_access","title":"privatelink_access","text":"mysql
(boolean). Allow clients to connect to mysql with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.mysqlx
(boolean). Allow clients to connect to mysqlx with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.prometheus
(boolean). Allow clients to connect to prometheus with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.spec.userConfig
.
"},{"location":"api-reference/mysql.html#spec.userConfig.public_access","title":"public_access","text":"mysql
(boolean). Enable mysql.mysqlx
(boolean). Enable mysqlx.prometheus
(boolean). Enable prometheus.spec.userConfig
.
"},{"location":"api-reference/opensearch.html","title":"OpenSearch","text":""},{"location":"api-reference/opensearch.html#usage-example","title":"Usage example","text":"mysql
(boolean). Allow clients to connect to mysql from the public internet for service nodes that are in a project VPC or another type of private network.mysqlx
(boolean). Allow clients to connect to mysqlx from the public internet for service nodes that are in a project VPC or another type of private network.prometheus
(boolean). Allow clients to connect to prometheus from the public internet for service nodes that are in a project VPC or another type of private network.
"},{"location":"api-reference/opensearch.html#OpenSearch","title":"OpenSearch","text":"apiVersion: aiven.io/v1alpha1\nkind: OpenSearch\nmetadata:\n name: my-os\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n connInfoSecretTarget:\n name: os-secret\n prefix: MY_SECRET_PREFIX_\n annotations:\n foo: bar\n labels:\n baz: egg\n\n project: my-aiven-project\n cloudName: google-europe-west1\n plan: startup-4\n disk_space: 80Gib\n\n maintenanceWindowDow: friday\n maintenanceWindowTime: 23:00:00\n
"},{"location":"api-reference/opensearch.html#spec","title":"spec","text":"apiVersion
(string). Value aiven.io/v1alpha1
.kind
(string). Value OpenSearch
.metadata
(object). Data that identifies the object, including a name
string and optional namespace
.spec
(object). OpenSearchSpec defines the desired state of OpenSearch. See below for nested schema.OpenSearch
.
plan
(string, MaxLength: 128). Subscription plan.project
(string, Immutable, MaxLength: 63, Format: ^[a-zA-Z0-9_-]*$
). Target project.
"},{"location":"api-reference/opensearch.html#spec.authSecretRef","title":"authSecretRef","text":"authSecretRef
(object). Authentication reference to Aiven token in a secret. See below for nested schema.cloudName
(string, MaxLength: 256). Cloud the service runs in.connInfoSecretTarget
(object). Information regarding secret creation. Exposed keys: OPENSEARCH_HOST
, OPENSEARCH_PORT
, OPENSEARCH_USER
, OPENSEARCH_PASSWORD
. See below for nested schema.disk_space
(string, Format: ^[1-9][0-9]*(GiB|G)*
). The disk space of the service, possible values depend on the service type, the cloud provider and the project. Reducing will result in the service re-balancing.maintenanceWindowDow
(string, Enum: monday
, tuesday
, wednesday
, thursday
, friday
, saturday
, sunday
). Day of week when maintenance operations should be performed. One monday, tuesday, wednesday, etc.maintenanceWindowTime
(string, MaxLength: 8). Time of day when maintenance operations should be performed. UTC time in HH:mm:ss format.projectVPCRef
(object). ProjectVPCRef reference to ProjectVPC resource to use its ID as ProjectVPCID automatically. See below for nested schema.projectVpcId
(string, MaxLength: 36). Identifier of the VPC the service should be in, if any.serviceIntegrations
(array of objects, Immutable, MaxItems: 1). Service integrations to specify when creating a service. Not applied after initial service creation. See below for nested schema.tags
(object, AdditionalProperties: string). Tags are key-value pairs that allow you to categorize services.terminationProtection
(boolean). Prevent service from being deleted. It is recommended to have this enabled for all services.userConfig
(object). OpenSearch specific user configuration options. See below for nested schema.spec
.
"},{"location":"api-reference/opensearch.html#spec.connInfoSecretTarget","title":"connInfoSecretTarget","text":"key
(string, MinLength: 1). name
(string, MinLength: 1). spec
.OPENSEARCH_HOST
, OPENSEARCH_PORT
, OPENSEARCH_USER
, OPENSEARCH_PASSWORD
.
name
(string). Name of the secret resource to be created. By default, is equal to the resource name.
"},{"location":"api-reference/opensearch.html#spec.projectVPCRef","title":"projectVPCRef","text":"annotations
(object, AdditionalProperties: string). Annotations added to the secret.labels
(object, AdditionalProperties: string). Labels added to the secret.prefix
(string). Prefix for the secret's keys. Added \"as is\" without any transformations. By default, is equal to the kind name in uppercase + underscore, e.g. KAFKA_
, REDIS_
, etc.spec
.
name
(string, MinLength: 1).
"},{"location":"api-reference/opensearch.html#spec.serviceIntegrations","title":"serviceIntegrations","text":"namespace
(string, MinLength: 1). spec
.
"},{"location":"api-reference/opensearch.html#spec.userConfig","title":"userConfig","text":"integrationType
(string, Enum: read_replica
). sourceServiceName
(string, MinLength: 1, MaxLength: 64). spec
.
"},{"location":"api-reference/opensearch.html#spec.userConfig.index_patterns","title":"index_patterns","text":"additional_backup_regions
(array of strings, MaxItems: 1). Additional Cloud Regions for Backup Replication.custom_domain
(string, MaxLength: 255). Serve the web frontend using a custom CNAME pointing to the Aiven DNS name.disable_replication_factor_adjustment
(boolean). DEPRECATED: Disable automatic replication factor adjustment for multi-node services. By default, Aiven ensures all indexes are replicated at least to two nodes. Note: Due to potential data loss in case of losing a service node, this setting can no longer be activated.index_patterns
(array of objects, MaxItems: 512). Index patterns. See below for nested schema.index_template
(object). Template settings for all new indexes. See below for nested schema.ip_filter
(array of objects, MaxItems: 1024). Allow incoming connections from CIDR address block, e.g. 10.20.0.0/16
. See below for nested schema.keep_index_refresh_interval
(boolean). Aiven automation resets index.refresh_interval to default value for every index to be sure that indices are always visible to search. If it doesn't fit your case, you can disable this by setting up this flag to true.max_index_count
(integer, Minimum: 0). DEPRECATED: use index_patterns instead.openid
(object). OpenSearch OpenID Connect Configuration. See below for nested schema.opensearch
(object). OpenSearch settings. See below for nested schema.opensearch_dashboards
(object). OpenSearch Dashboards settings. See below for nested schema.opensearch_version
(string, Enum: 1
, 2
). OpenSearch major version.private_access
(object). Allow access to selected service ports from private networks. See below for nested schema.privatelink_access
(object). Allow access to selected service components through Privatelink. See below for nested schema.project_to_fork_from
(string, Immutable, MaxLength: 63). Name of another project to fork a service from. This has effect only when a new service is being created.public_access
(object). Allow access to selected service ports from the public Internet. See below for nested schema.recovery_basebackup_name
(string, Pattern: ^[a-zA-Z0-9-_:.]+$
, MaxLength: 128). Name of the basebackup to restore in forked service.saml
(object). OpenSearch SAML configuration. See below for nested schema.service_to_fork_from
(string, Immutable, MaxLength: 64). Name of another service to fork from. This has effect only when a new service is being created.static_ips
(boolean). Use static public IP addresses.spec.userConfig
.
max_index_count
(integer, Minimum: 0). Maximum number of indexes to keep.pattern
(string, Pattern: ^[A-Za-z0-9-_.*?]+$
, MaxLength: 1024). fnmatch pattern.
"},{"location":"api-reference/opensearch.html#spec.userConfig.index_template","title":"index_template","text":"sorting_algorithm
(string, Enum: alphabetical
, creation_date
). Deletion sorting algorithm.spec.userConfig
.
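A hedged sketch of an index_patterns rule that caps hypothetical `logs-*` indexes; the pattern and count are placeholders:

```yaml
spec:
  userConfig:
    index_patterns:
      - pattern: logs-*              # hypothetical fnmatch pattern
        max_index_count: 30          # illustrative cap
        sorting_algorithm: creation_date
```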
"},{"location":"api-reference/opensearch.html#spec.userConfig.ip_filter","title":"ip_filter","text":"mapping_nested_objects_limit
(integer, Minimum: 0, Maximum: 100000). The maximum number of nested JSON objects that a single document can contain across all nested types. This limit helps to prevent out of memory errors when a document contains too many nested objects. Default is 10000.number_of_replicas
(integer, Minimum: 0, Maximum: 29). The number of replicas each primary shard has.number_of_shards
(integer, Minimum: 1, Maximum: 1024). The number of primary shards that an index should have.spec.userConfig
.10.20.0.0/16
.
network
(string, MaxLength: 43). CIDR address block.
"},{"location":"api-reference/opensearch.html#spec.userConfig.openid","title":"openid","text":"description
(string, MaxLength: 1024). Description for IP filter list entry.spec.userConfig
.
## openid

Appears on `spec.userConfig`.

OpenSearch OpenID Connect Configuration.

- `client_id` (string, MinLength: 1, MaxLength: 1024). The ID of the OpenID Connect client configured in your IdP. Required.
- `client_secret` (string, MinLength: 1, MaxLength: 1024). The client secret of the OpenID Connect client configured in your IdP. Required.
- `connect_url` (string, MaxLength: 2048). The URL of your IdP where the Security plugin can find the OpenID Connect metadata/configuration settings.
- `enabled` (boolean). Enables or disables OpenID Connect authentication for OpenSearch. When enabled, users can authenticate using OpenID Connect with an Identity Provider.
- `header` (string, MinLength: 1, MaxLength: 1024). HTTP header name of the JWT token. Optional. Default is Authorization.
- `jwt_header` (string, MinLength: 1, MaxLength: 1024). The HTTP header that stores the token. Typically the Authorization header with the Bearer schema: `Authorization: Bearer <token>`. Optional. Default is Authorization.
- `jwt_url_parameter` (string, MinLength: 1, MaxLength: 1024). If the token is not transmitted in the HTTP header, but as a URL parameter, define the name of the parameter here. Optional.
- `refresh_rate_limit_count` (integer, Minimum: 10). The maximum number of unknown key IDs in the time frame. Default is 10. Optional.
- `refresh_rate_limit_time_window_ms` (integer, Minimum: 10000). The time frame to use when checking the maximum number of unknown key IDs, in milliseconds. Optional. Default is 10000 (10 seconds).
- `roles_key` (string, MinLength: 1, MaxLength: 1024). The key in the JSON payload that stores the user's roles. The value of this key must be a comma-separated list of roles. Required only if you want to use roles in the JWT.
- `scope` (string, MinLength: 1, MaxLength: 1024). The scope of the identity token issued by the IdP. Optional. Default is openid profile email address phone.
- `subject_key` (string, MinLength: 1, MaxLength: 1024). The key in the JSON payload that stores the user's name. If not defined, the subject registered claim is used. Most IdP providers use the preferred_username claim. Optional.
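A hedged sketch of an OpenID Connect block; the IdP URL and client credentials are placeholders, and in practice the client secret is better kept out of the manifest:

```yaml
spec:
  userConfig:
    openid:
      enabled: true
      connect_url: https://idp.example.com/.well-known/openid-configuration  # placeholder IdP
      client_id: opensearch        # placeholder
      client_secret: change-me     # placeholder; avoid committing real secrets
      roles_key: roles
```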
"},{"location":"api-reference/opensearch.html#spec.userConfig.opensearch.auth_failure_listeners","title":"auth_failure_listeners","text":"action_auto_create_index_enabled
(boolean). Explicitly allow or block automatic creation of indices. Defaults to true.action_destructive_requires_name
(boolean). Require explicit index names when deleting.auth_failure_listeners
(object). Opensearch Security Plugin Settings. See below for nested schema.cluster_max_shards_per_node
(integer, Minimum: 100, Maximum: 10000). Controls the number of shards allowed in the cluster per data node.cluster_routing_allocation_node_concurrent_recoveries
(integer, Minimum: 2, Maximum: 16). How many concurrent incoming/outgoing shard recoveries (normally replicas) are allowed to happen on a node. Defaults to 2.email_sender_name
(string, Pattern: ^[a-zA-Z0-9-_]+$
, MaxLength: 40). Sender name placeholder to be used in Opensearch Dashboards and Opensearch keystore.email_sender_password
(string, Pattern: ^[^\\x00-\\x1F]+$
, MaxLength: 1024). Sender password for Opensearch alerts to authenticate with SMTP server.email_sender_username
(string, Pattern: ^[^\\x00-\\x1F]+$
, MaxLength: 320). Sender username for Opensearch alerts.http_max_content_length
(integer, Minimum: 1, Maximum: 2147483647). Maximum content length for HTTP requests to the OpenSearch HTTP API, in bytes.http_max_header_size
(integer, Minimum: 1024, Maximum: 262144). The max size of allowed headers, in bytes.http_max_initial_line_length
(integer, Minimum: 1024, Maximum: 65536). The max length of an HTTP URL, in bytes.indices_fielddata_cache_size
(integer, Minimum: 3, Maximum: 100). Relative amount. Maximum amount of heap memory used for field data cache. This is an expert setting; decreasing the value too much will increase overhead of loading field data; too much memory used for field data cache will decrease amount of heap available for other operations.indices_memory_index_buffer_size
(integer, Minimum: 3, Maximum: 40). Percentage value. Default is 10%. Total amount of heap used for indexing buffer, before writing segments to disk. This is an expert setting. Too low value will slow down indexing; too high value will increase indexing performance but causes performance issues for query performance.indices_memory_max_index_buffer_size
(integer, Minimum: 3, Maximum: 2048). Absolute value. Default is unbound. Doesn't work without indices.memory.index_buffer_size. Maximum amount of heap used for query cache, an absolute indices.memory.index_buffer_size maximum hard limit.indices_memory_min_index_buffer_size
(integer, Minimum: 3, Maximum: 2048). Absolute value. Default is 48mb. Doesn't work without indices.memory.index_buffer_size. Minimum amount of heap used for query cache, an absolute indices.memory.index_buffer_size minimal hard limit.indices_queries_cache_size
(integer, Minimum: 3, Maximum: 40). Percentage value. Default is 10%. Maximum amount of heap used for query cache. This is an expert setting. Too low value will decrease query performance and increase performance for other operations; too high value will cause issues with other OpenSearch functionality.indices_query_bool_max_clause_count
(integer, Minimum: 64, Maximum: 4096). Maximum number of clauses Lucene BooleanQuery can have. The default value (1024) is relatively high, and increasing it may cause performance issues. Investigate other approaches first before increasing this value.indices_recovery_max_bytes_per_sec
(integer, Minimum: 40, Maximum: 400). Limits total inbound and outbound recovery traffic for each node. Applies to both peer recoveries as well as snapshot recoveries (i.e., restores from a snapshot). Defaults to 40mb.indices_recovery_max_concurrent_file_chunks
(integer, Minimum: 2, Maximum: 5). Number of file chunks sent in parallel for each recovery. Defaults to 2.ism_enabled
(boolean). Specifies whether ISM is enabled or not.ism_history_enabled
(boolean). Specifies whether audit history is enabled or not. The logs from ISM are automatically indexed to a logs document.ism_history_max_age
(integer, Minimum: 1, Maximum: 2147483647). The maximum age before rolling over the audit history index in hours.ism_history_max_docs
(integer, Minimum: 1). The maximum number of documents before rolling over the audit history index.ism_history_rollover_check_period
(integer, Minimum: 1, Maximum: 2147483647). The time between rollover checks for the audit history index in hours.ism_history_rollover_retention_period
(integer, Minimum: 1, Maximum: 2147483647). How long audit history indices are kept in days.override_main_response_version
(boolean). Compatibility mode sets OpenSearch to report its version as 7.10 so clients continue to work. Default is false.reindex_remote_whitelist
(array of strings, MaxItems: 32). Whitelisted addresses for reindexing. Changing this value will cause all OpenSearch instances to restart.script_max_compilations_rate
(string, MaxLength: 1024). Script compilation circuit breaker limits the number of inline script compilations within a period of time. Default is use-context.search_max_buckets
(integer, Minimum: 1, Maximum: 1000000). Maximum number of aggregation buckets allowed in a single response. OpenSearch default value is used when this is not defined.thread_pool_analyze_queue_size
(integer, Minimum: 10, Maximum: 2000). Size for the thread pool queue. See documentation for exact details.thread_pool_analyze_size
(integer, Minimum: 1, Maximum: 128). Size for the thread pool. See documentation for exact details. Do note this may have maximum value depending on CPU count - value is automatically lowered if set to higher than maximum value.thread_pool_force_merge_size
(integer, Minimum: 1, Maximum: 128). Size for the thread pool. See documentation for exact details. Do note this may have maximum value depending on CPU count - value is automatically lowered if set to higher than maximum value.thread_pool_get_queue_size
(integer, Minimum: 10, Maximum: 2000). Size for the thread pool queue. See documentation for exact details.thread_pool_get_size
(integer, Minimum: 1, Maximum: 128). Size for the thread pool. See documentation for exact details. Do note this may have maximum value depending on CPU count - value is automatically lowered if set to higher than maximum value.thread_pool_search_queue_size
(integer, Minimum: 10, Maximum: 2000). Size for the thread pool queue. See documentation for exact details.thread_pool_search_size
(integer, Minimum: 1, Maximum: 128). Size for the thread pool. See documentation for exact details. Do note this may have maximum value depending on CPU count - value is automatically lowered if set to higher than maximum value.thread_pool_search_throttled_queue_size
(integer, Minimum: 10, Maximum: 2000). Size for the thread pool queue. See documentation for exact details.thread_pool_search_throttled_size
(integer, Minimum: 1, Maximum: 128). Size for the thread pool. See documentation for exact details. Do note this may have maximum value depending on CPU count - value is automatically lowered if set to higher than maximum value.thread_pool_write_queue_size
(integer, Minimum: 10, Maximum: 2000). Size for the thread pool queue. See documentation for exact details.thread_pool_write_size
(integer, Minimum: 1, Maximum: 128). Size for the thread pool. See documentation for exact details. Do note this may have maximum value depending on CPU count - value is automatically lowered if set to higher than maximum value.spec.userConfig.opensearch
.
"},{"location":"api-reference/opensearch.html#spec.userConfig.opensearch.auth_failure_listeners.internal_authentication_backend_limiting","title":"internal_authentication_backend_limiting","text":"internal_authentication_backend_limiting
(object). See below for nested schema.ip_rate_limiting
(object). IP address rate limiting settings. See below for nested schema.spec.userConfig.opensearch.auth_failure_listeners
.
"},{"location":"api-reference/opensearch.html#spec.userConfig.opensearch.auth_failure_listeners.ip_rate_limiting","title":"ip_rate_limiting","text":"allowed_tries
(integer, Minimum: 0, Maximum: 2147483647). The number of login attempts allowed before login is blocked.authentication_backend
(string, Enum: internal
, MaxLength: 1024). internal_authentication_backend_limiting.authentication_backend.block_expiry_seconds
(integer, Minimum: 0, Maximum: 2147483647). The duration of time that login remains blocked after a failed login.max_blocked_clients
(integer, Minimum: 0, Maximum: 2147483647). internal_authentication_backend_limiting.max_blocked_clients.max_tracked_clients
(integer, Minimum: 0, Maximum: 2147483647). The maximum number of tracked IP addresses that have failed login.time_window_seconds
(integer, Minimum: 0, Maximum: 2147483647). The window of time in which the value for allowed_tries
is enforced.type
(string, Enum: username
, MaxLength: 1024). internal_authentication_backend_limiting.type.spec.userConfig.opensearch.auth_failure_listeners
.
"},{"location":"api-reference/opensearch.html#spec.userConfig.opensearch_dashboards","title":"opensearch_dashboards","text":"allowed_tries
(integer, Minimum: 1, Maximum: 2147483647). The number of login attempts allowed before login is blocked.block_expiry_seconds
(integer, Minimum: 1, Maximum: 36000). The duration of time that login remains blocked after a failed login.max_blocked_clients
(integer, Minimum: 0, Maximum: 2147483647). The maximum number of blocked IP addresses.max_tracked_clients
(integer, Minimum: 0, Maximum: 2147483647). The maximum number of tracked IP addresses that have failed login.time_window_seconds
(integer, Minimum: 1, Maximum: 36000). The window of time in which the value for allowed_tries
is enforced.type
(string, Enum: ip
, MaxLength: 1024). The type of rate limiting.spec.userConfig
.
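To tie these settings together, a hedged sketch that blocks an IP for ten minutes after five failed logins within an hour; all thresholds are illustrative:

```yaml
spec:
  userConfig:
    opensearch:
      auth_failure_listeners:
        ip_rate_limiting:
          type: ip
          allowed_tries: 5            # illustrative thresholds
          time_window_seconds: 3600
          block_expiry_seconds: 600
```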
"},{"location":"api-reference/opensearch.html#spec.userConfig.private_access","title":"private_access","text":"enabled
(boolean). Enable or disable OpenSearch Dashboards.max_old_space_size
(integer, Minimum: 64, Maximum: 2048). Limits the maximum amount of memory (in MiB) the OpenSearch Dashboards process can use. This sets the max_old_space_size option of the nodejs running the OpenSearch Dashboards. Note: the memory reserved by OpenSearch Dashboards is not available for OpenSearch.opensearch_request_timeout
(integer, Minimum: 5000, Maximum: 120000). Timeout in milliseconds for requests made by OpenSearch Dashboards towards OpenSearch.spec.userConfig
.
"},{"location":"api-reference/opensearch.html#spec.userConfig.privatelink_access","title":"privatelink_access","text":"opensearch
(boolean). Allow clients to connect to opensearch with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.opensearch_dashboards
(boolean). Allow clients to connect to opensearch_dashboards with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.prometheus
(boolean). Allow clients to connect to prometheus with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.spec.userConfig
.
"},{"location":"api-reference/opensearch.html#spec.userConfig.public_access","title":"public_access","text":"opensearch
(boolean). Enable opensearch.opensearch_dashboards
(boolean). Enable opensearch_dashboards.prometheus
(boolean). Enable prometheus.spec.userConfig
.
"},{"location":"api-reference/opensearch.html#spec.userConfig.saml","title":"saml","text":"opensearch
(boolean). Allow clients to connect to opensearch from the public internet for service nodes that are in a project VPC or another type of private network.opensearch_dashboards
(boolean). Allow clients to connect to opensearch_dashboards from the public internet for service nodes that are in a project VPC or another type of private network.prometheus
(boolean). Allow clients to connect to prometheus from the public internet for service nodes that are in a project VPC or another type of private network.spec.userConfig
.
- `enabled` (boolean). Enables or disables SAML-based authentication for OpenSearch. When enabled, users can authenticate using SAML with an Identity Provider.
- `idp_entity_id` (string, MinLength: 1, MaxLength: 1024). The unique identifier for the Identity Provider (IdP) entity that is used for SAML authentication. This value is typically provided by the IdP.
- `idp_metadata_url` (string, MinLength: 1, MaxLength: 2048). The URL of the SAML metadata for the Identity Provider (IdP). This is used to configure SAML-based authentication with the IdP.
- `sp_entity_id` (string, MinLength: 1, MaxLength: 1024). The unique identifier for the Service Provider (SP) entity that is used for SAML authentication. This value is typically provided by the SP.
"},{"location":"api-reference/postgresql.html","title":"PostgreSQL","text":""},{"location":"api-reference/postgresql.html#usage-example","title":"Usage example","text":"idp_pemtrustedcas_content
(string, MaxLength: 16384). This parameter specifies the PEM-encoded root certificate authority (CA) content for the SAML identity provider (IdP) server verification. The root CA content is used to verify the SSL/TLS certificate presented by the server.roles_key
(string, MinLength: 1, MaxLength: 256). Optional. Specifies the attribute in the SAML response where role information is stored, if available. Role attributes are not required for SAML authentication, but can be included in SAML assertions by most Identity Providers (IdPs) to determine user access levels or permissions.subject_key
(string, MinLength: 1, MaxLength: 256). Optional. Specifies the attribute in the SAML response where the subject identifier is stored. If not configured, the NameID attribute is used by default.
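A hedged sketch of a SAML block; the entity IDs and metadata URL are placeholders from a hypothetical IdP:

```yaml
spec:
  userConfig:
    saml:
      enabled: true
      idp_entity_id: https://idp.example.com/metadata          # placeholder
      idp_metadata_url: https://idp.example.com/saml/metadata  # placeholder
      sp_entity_id: https://my-os.example.com                  # placeholder
      roles_key: roles
```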
"},{"location":"api-reference/postgresql.html#PostgreSQL","title":"PostgreSQL","text":"apiVersion: aiven.io/v1alpha1\nkind: PostgreSQL\nmetadata:\n name: my-postgresql\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n connInfoSecretTarget:\n name: postgresql-secret\n prefix: MY_SECRET_PREFIX_\n annotations:\n foo: bar\n labels:\n baz: egg\n\n project: aiven-project-name\n cloudName: google-europe-west1\n plan: startup-4\n\n maintenanceWindowDow: sunday\n maintenanceWindowTime: 11:00:00\n\n userConfig:\n pg_version: \"15\"\n
"},{"location":"api-reference/postgresql.html#spec","title":"spec","text":"apiVersion
(string). Value aiven.io/v1alpha1
.kind
(string). Value PostgreSQL
.metadata
(object). Data that identifies the object, including a name
string and optional namespace
.spec
(object). PostgreSQLSpec defines the desired state of postgres instance. See below for nested schema.PostgreSQL
.
plan
(string, MaxLength: 128). Subscription plan.project
(string, Immutable, MaxLength: 63, Format: ^[a-zA-Z0-9_-]*$
). Target project.
"},{"location":"api-reference/postgresql.html#spec.authSecretRef","title":"authSecretRef","text":"authSecretRef
(object). Authentication reference to Aiven token in a secret. See below for nested schema.cloudName
(string, MaxLength: 256). Cloud the service runs in.connInfoSecretTarget
(object). Information regarding secret creation. Exposed keys: POSTGRESQL_HOST
, POSTGRESQL_PORT
, POSTGRESQL_DATABASE
, POSTGRESQL_USER
, POSTGRESQL_PASSWORD
, POSTGRESQL_SSLMODE
, POSTGRESQL_DATABASE_URI
. See below for nested schema.disk_space
(string, Format: ^[1-9][0-9]*(GiB|G)*
). The disk space of the service, possible values depend on the service type, the cloud provider and the project. Reducing will result in the service re-balancing.maintenanceWindowDow
(string, Enum: monday
, tuesday
, wednesday
, thursday
, friday
, saturday
, sunday
). Day of week when maintenance operations should be performed. One monday, tuesday, wednesday, etc.maintenanceWindowTime
(string, MaxLength: 8). Time of day when maintenance operations should be performed. UTC time in HH:mm:ss format.projectVPCRef
(object). ProjectVPCRef reference to ProjectVPC resource to use its ID as ProjectVPCID automatically. See below for nested schema.projectVpcId
(string, MaxLength: 36). Identifier of the VPC the service should be in, if any.serviceIntegrations
(array of objects, Immutable, MaxItems: 1). Service integrations to specify when creating a service. Not applied after initial service creation. See below for nested schema.tags
(object, AdditionalProperties: string). Tags are key-value pairs that allow you to categorize services.terminationProtection
(boolean). Prevent service from being deleted. It is recommended to have this enabled for all services.userConfig
(object). PostgreSQL specific user configuration options. See below for nested schema.spec
.
"},{"location":"api-reference/postgresql.html#spec.connInfoSecretTarget","title":"connInfoSecretTarget","text":"key
(string, MinLength: 1). name
(string, MinLength: 1). spec
.POSTGRESQL_HOST
, POSTGRESQL_PORT
, POSTGRESQL_DATABASE
, POSTGRESQL_USER
, POSTGRESQL_PASSWORD
, POSTGRESQL_SSLMODE
, POSTGRESQL_DATABASE_URI
.
name
(string). Name of the secret resource to be created. By default, is equal to the resource name.
"},{"location":"api-reference/postgresql.html#spec.projectVPCRef","title":"projectVPCRef","text":"annotations
(object, AdditionalProperties: string). Annotations added to the secret.labels
(object, AdditionalProperties: string). Labels added to the secret.prefix
(string). Prefix for the secret's keys. Added \"as is\" without any transformations. By default, is equal to the kind name in uppercase + underscore, e.g. KAFKA_
, REDIS_
, etc.spec
.
name
(string, MinLength: 1).
"},{"location":"api-reference/postgresql.html#spec.serviceIntegrations","title":"serviceIntegrations","text":"namespace
(string, MinLength: 1). spec
.
"},{"location":"api-reference/postgresql.html#spec.userConfig","title":"userConfig","text":"integrationType
(string, Enum: read_replica
). sourceServiceName
(string, MinLength: 1, MaxLength: 64). spec
.
"},{"location":"api-reference/postgresql.html#spec.userConfig.ip_filter","title":"ip_filter","text":"additional_backup_regions
(array of strings, MaxItems: 1). Additional Cloud Regions for Backup Replication.admin_password
(string, Immutable, Pattern: ^[a-zA-Z0-9-_]+$
, MinLength: 8, MaxLength: 256). Custom password for admin user. Defaults to random string. This must be set only when a new service is being created.admin_username
(string, Immutable, Pattern: ^[_A-Za-z0-9][-._A-Za-z0-9]{0,63}$
, MaxLength: 64). Custom username for admin user. This must be set only when a new service is being created.backup_hour
(integer, Minimum: 0, Maximum: 23). The hour of day (in UTC) when backup for the service is started. New backup is only started if previous backup has already completed.backup_minute
(integer, Minimum: 0, Maximum: 59). The minute of an hour when backup for the service is started. New backup is only started if previous backup has already completed.enable_ipv6
(boolean). Register AAAA DNS records for the service, and allow IPv6 packets to service ports.ip_filter
(array of objects, MaxItems: 1024). Allow incoming connections from CIDR address block, e.g. 10.20.0.0/16
. See below for nested schema.migration
(object). Migrate data from existing server. See below for nested schema.pg
(object). postgresql.conf configuration values. See below for nested schema.pg_read_replica
(boolean). Should the service which is being forked be a read replica (deprecated, use read_replica service integration instead).pg_service_to_fork_from
(string, Immutable, MaxLength: 64). Name of the PG Service from which to fork (deprecated, use service_to_fork_from). This has effect only when a new service is being created.pg_stat_monitor_enable
(boolean). Enable the pg_stat_monitor extension. Enabling this extension will cause the cluster to be restarted. When this extension is enabled, pg_stat_statements results for utility commands are unreliable.pg_version
(string, Enum: 11
, 12
, 13
, 14
, 15
). PostgreSQL major version.pgbouncer
(object). PGBouncer connection pooling settings. See below for nested schema.pglookout
(object). PGLookout settings. See below for nested schema.private_access
(object). Allow access to selected service ports from private networks. See below for nested schema.privatelink_access
(object). Allow access to selected service components through Privatelink. See below for nested schema.project_to_fork_from
(string, Immutable, MaxLength: 63). Name of another project to fork a service from. This has effect only when a new service is being created.public_access
(object). Allow access to selected service ports from the public Internet. See below for nested schema.recovery_target_time
(string, Immutable, MaxLength: 32). Recovery target time when forking a service. This has effect only when a new service is being created.service_to_fork_from
(string, Immutable, MaxLength: 64). Name of another service to fork from. This has effect only when a new service is being created.shared_buffers_percentage
(number, Minimum: 20, Maximum: 60). Percentage of total RAM that the database server uses for shared memory buffers. Valid range is 20-60 (float), which corresponds to 20% - 60%. This setting adjusts the shared_buffers configuration value.static_ips
(boolean). Use static public IP addresses.synchronous_replication
(string, Enum: quorum
, off
). Synchronous replication type. Note that the service plan also needs to support synchronous replication.timescaledb
(object). TimescaleDB extension configuration values. See below for nested schema.variant
(string, Enum: aiven
, timescale
). Variant of the PostgreSQL service, may affect the features that are exposed by default.work_mem
(integer, Minimum: 1, Maximum: 1024). Sets the maximum amount of memory to be used by a query operation (such as a sort or hash table) before writing to temporary disk files, in MB. Default is 1MB + 0.075% of total RAM (up to 32MB).spec.userConfig
.10.20.0.0/16
.
network
(string, MaxLength: 43). CIDR address block.
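A sketch of an ip_filter entry under userConfig, using the documented network and description fields; the values are illustrative:
userConfig:\n  ip_filter:\n    - network: 10.20.0.0/16  # CIDR block to allow\n      description: office network  # optional label for the entry\n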
"},{"location":"api-reference/postgresql.html#spec.userConfig.migration","title":"migration","text":"description
(string, MaxLength: 1024). Description for IP filter list entry.spec.userConfig
.
host
(string, MaxLength: 255). Hostname or IP address of the server where to migrate data from.port
(integer, Minimum: 1, Maximum: 65535). Port number of the server where to migrate data from.
"},{"location":"api-reference/postgresql.html#spec.userConfig.pg","title":"pg","text":"dbname
(string, MaxLength: 63). Database name for bootstrapping the initial connection.ignore_dbs
(string, MaxLength: 2048). Comma-separated list of databases that should be ignored during migration (supported by MySQL and PostgreSQL only at the moment).method
(string, Enum: dump
, replication
). The migration method to be used (currently supported only by Redis, Dragonfly, MySQL and PostgreSQL service types).password
(string, MaxLength: 256). Password for authentication with the server where to migrate data from.ssl
(boolean). The server where to migrate data from is secured with SSL.username
(string, MaxLength: 256). User name for authentication with the server where to migrate data from.spec.userConfig
.
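A hedged sketch of the migration block using the fields above; the host, credentials, and database names are placeholders:
userConfig:\n  migration:\n    host: source.example.com  # placeholder source server\n    port: 5432\n    dbname: defaultdb\n    username: avnadmin\n    password: source-password  # placeholder\n    ssl: true\n    method: dump  # or replication\n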
"},{"location":"api-reference/postgresql.html#spec.userConfig.pgbouncer","title":"pgbouncer","text":"autovacuum_analyze_scale_factor
(number, Minimum: 0, Maximum: 1). Specifies a fraction of the table size to add to autovacuum_analyze_threshold when deciding whether to trigger an ANALYZE. The default is 0.2 (20% of table size).autovacuum_analyze_threshold
(integer, Minimum: 0, Maximum: 2147483647). Specifies the minimum number of inserted, updated or deleted tuples needed to trigger an ANALYZE in any one table. The default is 50 tuples.autovacuum_freeze_max_age
(integer, Minimum: 200000000, Maximum: 1500000000). Specifies the maximum age (in transactions) that a table's pg_class.relfrozenxid field can attain before a VACUUM operation is forced to prevent transaction ID wraparound within the table. Note that the system will launch autovacuum processes to prevent wraparound even when autovacuum is otherwise disabled. This parameter will cause the server to be restarted.autovacuum_max_workers
(integer, Minimum: 1, Maximum: 20). Specifies the maximum number of autovacuum processes (other than the autovacuum launcher) that may be running at any one time. The default is three. This parameter can only be set at server start.autovacuum_naptime
(integer, Minimum: 1, Maximum: 86400). Specifies the minimum delay between autovacuum runs on any given database. The delay is measured in seconds, and the default is one minute.autovacuum_vacuum_cost_delay
(integer, Minimum: -1, Maximum: 100). Specifies the cost delay value that will be used in automatic VACUUM operations. If -1 is specified, the regular vacuum_cost_delay value will be used. The default value is 20 milliseconds.autovacuum_vacuum_cost_limit
(integer, Minimum: -1, Maximum: 10000). Specifies the cost limit value that will be used in automatic VACUUM operations. If -1 is specified (which is the default), the regular vacuum_cost_limit value will be used.autovacuum_vacuum_scale_factor
(number, Minimum: 0, Maximum: 1). Specifies a fraction of the table size to add to autovacuum_vacuum_threshold when deciding whether to trigger a VACUUM. The default is 0.2 (20% of table size).autovacuum_vacuum_threshold
(integer, Minimum: 0, Maximum: 2147483647). Specifies the minimum number of updated or deleted tuples needed to trigger a VACUUM in any one table. The default is 50 tuples.bgwriter_delay
(integer, Minimum: 10, Maximum: 10000). Specifies the delay between activity rounds for the background writer in milliseconds. Default is 200.bgwriter_flush_after
(integer, Minimum: 0, Maximum: 2048). Whenever more than bgwriter_flush_after bytes have been written by the background writer, attempt to force the OS to issue these writes to the underlying storage. Specified in kilobytes, default is 512. Setting of 0 disables forced writeback.bgwriter_lru_maxpages
(integer, Minimum: 0, Maximum: 1073741823). In each round, no more than this many buffers will be written by the background writer. Setting this to zero disables background writing. Default is 100.bgwriter_lru_multiplier
(number, Minimum: 0, Maximum: 10). The average recent need for new buffers is multiplied by bgwriter_lru_multiplier to arrive at an estimate of the number that will be needed during the next round (up to bgwriter_lru_maxpages). 1.0 represents a \"just in time\" policy of writing exactly the number of buffers predicted to be needed. Larger values provide some cushion against spikes in demand, while smaller values intentionally leave writes to be done by server processes. The default is 2.0.deadlock_timeout
(integer, Minimum: 500, Maximum: 1800000). This is the amount of time, in milliseconds, to wait on a lock before checking to see if there is a deadlock condition.default_toast_compression
(string, Enum: lz4
, pglz
). Specifies the default TOAST compression method for values of compressible columns (the default is lz4).idle_in_transaction_session_timeout
(integer, Minimum: 0, Maximum: 604800000). Time out sessions with open transactions after this number of milliseconds.jit
(boolean). Controls system-wide use of Just-in-Time Compilation (JIT).log_autovacuum_min_duration
(integer, Minimum: -1, Maximum: 2147483647). Causes each action executed by autovacuum to be logged if it ran for at least the specified number of milliseconds. Setting this to zero logs all autovacuum actions. Minus-one (the default) disables logging autovacuum actions.log_error_verbosity
(string, Enum: TERSE
, DEFAULT
, VERBOSE
). Controls the amount of detail written in the server log for each message that is logged.log_line_prefix
(string, Enum: 'pid=%p,user=%u,db=%d,app=%a,client=%h '
, '%t [%p]: [%l-1] user=%u,db=%d,app=%a,client=%h '
, '%m [%p] %q[user=%u,db=%d,app=%a] '
). Choose from one of the available log-formats. These can support popular log analyzers like pgbadger, pganalyze etc.log_min_duration_statement
(integer, Minimum: -1, Maximum: 86400000). Log statements that take more than this number of milliseconds to run, -1 disables.log_temp_files
(integer, Minimum: -1, Maximum: 2147483647). Log statements for each temporary file created larger than this number of kilobytes, -1 disables.max_files_per_process
(integer, Minimum: 1000, Maximum: 4096). PostgreSQL maximum number of files that can be open per process.max_locks_per_transaction
(integer, Minimum: 64, Maximum: 6400). PostgreSQL maximum locks per transaction.max_logical_replication_workers
(integer, Minimum: 4, Maximum: 64). PostgreSQL maximum logical replication workers (taken from the pool of max_parallel_workers).max_parallel_workers
(integer, Minimum: 0, Maximum: 96). Sets the maximum number of workers that the system can support for parallel queries.max_parallel_workers_per_gather
(integer, Minimum: 0, Maximum: 96). Sets the maximum number of workers that can be started by a single Gather or Gather Merge node.max_pred_locks_per_transaction
(integer, Minimum: 64, Maximum: 5120). PostgreSQL maximum predicate locks per transaction.max_prepared_transactions
(integer, Minimum: 0, Maximum: 10000). PostgreSQL maximum prepared transactions.max_replication_slots
(integer, Minimum: 8, Maximum: 64). PostgreSQL maximum replication slots.max_slot_wal_keep_size
(integer, Minimum: -1, Maximum: 2147483647). PostgreSQL maximum WAL size (MB) reserved for replication slots. Default is -1 (unlimited). The wal_keep_size minimum WAL size setting takes precedence over this.max_stack_depth
(integer, Minimum: 2097152, Maximum: 6291456). Maximum depth of the stack in bytes.max_standby_archive_delay
(integer, Minimum: 1, Maximum: 43200000). Max standby archive delay in milliseconds.max_standby_streaming_delay
(integer, Minimum: 1, Maximum: 43200000). Max standby streaming delay in milliseconds.max_wal_senders
(integer, Minimum: 20, Maximum: 64). PostgreSQL maximum WAL senders.max_worker_processes
(integer, Minimum: 8, Maximum: 96). Sets the maximum number of background processes that the system can support.pg_partman_bgw.interval
(integer, Minimum: 3600, Maximum: 604800). Sets the time interval to run pg_partman's scheduled tasks.pg_partman_bgw.role
(string, Pattern: ^[_A-Za-z0-9][-._A-Za-z0-9]{0,63}$
, MaxLength: 64). Controls which role to use for pg_partman's scheduled background tasks.pg_stat_monitor.pgsm_enable_query_plan
(boolean). Enables or disables query plan monitoring.pg_stat_monitor.pgsm_max_buckets
(integer, Minimum: 1, Maximum: 10). Sets the maximum number of buckets.pg_stat_statements.track
(string, Enum: all
, top
, none
). Controls which statements are counted. Specify top to track top-level statements (those issued directly by clients), all to also track nested statements (such as statements invoked within functions), or none to disable statement statistics collection. The default value is top.temp_file_limit
(integer, Minimum: -1, Maximum: 2147483647). PostgreSQL temporary file limit in KiB, -1 for unlimited.timezone
(string, MaxLength: 64). PostgreSQL service timezone.track_activity_query_size
(integer, Minimum: 1024, Maximum: 10240). Specifies the number of bytes reserved to track the currently executing command for each active session.track_commit_timestamp
(string, Enum: off
, on
). Record commit time of transactions.track_functions
(string, Enum: all
, pl
, none
). Enables tracking of function call counts and time used.track_io_timing
(string, Enum: off
, on
). Enables timing of database I/O calls. This parameter is off by default, because it will repeatedly query the operating system for the current time, which may cause significant overhead on some platforms.wal_sender_timeout
(integer). Terminate replication connections that are inactive for longer than this amount of time, in milliseconds. Setting this value to zero disables the timeout.wal_writer_delay
(integer, Minimum: 10, Maximum: 200). WAL flush interval in milliseconds. Note that setting this value to lower than the default 200ms may negatively impact performance.spec.userConfig
.
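The postgresql.conf-style options above are set under userConfig.pg; a sketch with illustrative values within the documented ranges:
userConfig:\n  pg:\n    idle_in_transaction_session_timeout: 900000  # ms\n    log_min_duration_statement: 1000  # log statements slower than 1 second\n    max_parallel_workers: 8\n    track_io_timing: \"on\"  # quoted so YAML keeps it a string\n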
"},{"location":"api-reference/postgresql.html#spec.userConfig.pglookout","title":"pglookout","text":"autodb_idle_timeout
(integer, Minimum: 0, Maximum: 86400). If the automatically created database pools have been unused for this many seconds, they are freed. If 0 then timeout is disabled. [seconds].autodb_max_db_connections
(integer, Minimum: 0, Maximum: 2147483647). Do not allow more than this many server connections per database (regardless of user). Setting it to 0 means unlimited.autodb_pool_mode
(string, Enum: session
, transaction
, statement
). PGBouncer pool mode.autodb_pool_size
(integer, Minimum: 0, Maximum: 10000). If non-zero, automatically create a pool of that size per user when a pool doesn't exist.ignore_startup_parameters
(array of strings, MaxItems: 32). List of parameters to ignore when given in startup packet.min_pool_size
(integer, Minimum: 0, Maximum: 10000). Add more server connections to the pool if below this number. Improves behavior when the usual load suddenly comes back after a period of total inactivity. The value is effectively capped at the pool size.server_idle_timeout
(integer, Minimum: 0, Maximum: 86400). If a server connection has been idle for more than this many seconds, it will be dropped. If 0 then timeout is disabled. [seconds].server_lifetime
(integer, Minimum: 60, Maximum: 86400). The pooler will close an unused server connection that has been connected longer than this. [seconds].server_reset_query_always
(boolean). Run server_reset_query (DISCARD ALL) in all pooling modes.spec.userConfig
.
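The connection-pooler options above (autodb_*, server_*, etc.) live under userConfig.pgbouncer; a sketch with illustrative values:
userConfig:\n  pgbouncer:\n    autodb_pool_mode: transaction\n    autodb_pool_size: 50\n    autodb_idle_timeout: 600  # seconds\n    server_reset_query_always: false\n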
"},{"location":"api-reference/postgresql.html#spec.userConfig.private_access","title":"private_access","text":"max_failover_replication_time_lag
(integer, Minimum: 10). Number of seconds of master unavailability before triggering database failover to standby.spec.userConfig
.
"},{"location":"api-reference/postgresql.html#spec.userConfig.privatelink_access","title":"privatelink_access","text":"pg
(boolean). Allow clients to connect to pg with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.pgbouncer
(boolean). Allow clients to connect to pgbouncer with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.prometheus
(boolean). Allow clients to connect to prometheus with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.spec.userConfig
.
"},{"location":"api-reference/postgresql.html#spec.userConfig.public_access","title":"public_access","text":"pg
(boolean). Enable pg.pgbouncer
(boolean). Enable pgbouncer.prometheus
(boolean). Enable prometheus.spec.userConfig
.
"},{"location":"api-reference/postgresql.html#spec.userConfig.timescaledb","title":"timescaledb","text":"pg
(boolean). Allow clients to connect to pg from the public internet for service nodes that are in a project VPC or another type of private network.pgbouncer
(boolean). Allow clients to connect to pgbouncer from the public internet for service nodes that are in a project VPC or another type of private network.prometheus
(boolean). Allow clients to connect to prometheus from the public internet for service nodes that are in a project VPC or another type of private network.spec.userConfig
.
"},{"location":"api-reference/project.html","title":"Project","text":""},{"location":"api-reference/project.html#usage-example","title":"Usage example","text":"max_background_workers
(integer, Minimum: 1, Maximum: 4096). The number of background workers for timescaledb operations. You should configure this setting to the sum of your number of databases and the total number of concurrent background workers you want running at any given point in time.
"},{"location":"api-reference/project.html#Project","title":"Project","text":"apiVersion: aiven.io/v1alpha1\nkind: Project\nmetadata:\n name: my-project\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n connInfoSecretTarget:\n name: project-secret\n prefix: MY_SECRET_PREFIX_\n annotations:\n foo: bar\n labels:\n baz: egg\n\n tags:\n env: prod\n\n billingAddress: NYC\n cloud: aws-eu-west-1\n
"},{"location":"api-reference/project.html#spec","title":"spec","text":"apiVersion
(string). Value aiven.io/v1alpha1
.kind
(string). Value Project
.metadata
(object). Data that identifies the object, including a name
string and optional namespace
.spec
(object). ProjectSpec defines the desired state of Project. See below for nested schema.Project
.
"},{"location":"api-reference/project.html#spec.authSecretRef","title":"authSecretRef","text":"accountId
(string, MaxLength: 32). Account ID.authSecretRef
(object). Authentication reference to Aiven token in a secret. See below for nested schema.billingAddress
(string, MaxLength: 1000). Billing name and address of the project.billingCurrency
(string, Enum: AUD
, CAD
, CHF
, DKK
, EUR
, GBP
, NOK
, SEK
, USD
). Billing currency.billingEmails
(array of strings, MaxItems: 10). Billing contact emails of the project.billingExtraText
(string, MaxLength: 1000). Extra text to be included in all project invoices, e.g. purchase order or cost center number.billingGroupId
(string, MinLength: 36, MaxLength: 36). BillingGroup ID.cardId
(string, MaxLength: 64). Credit card ID; the ID may be either the last 4 digits of the card or the actual ID.cloud
(string, MaxLength: 256). Target cloud, example: aws-eu-central-1.connInfoSecretTarget
(object). Information regarding secret creation. Exposed keys: PROJECT_CA_CERT
. See below for nested schema.copyFromProject
(string, MaxLength: 63). Project name from which to copy settings to the new project.countryCode
(string, MinLength: 2, MaxLength: 2). Billing country code of the project.tags
(object, AdditionalProperties: string). Tags are key-value pairs that allow you to categorize projects.technicalEmails
(array of strings, MaxItems: 10). Technical contact emails of the project.spec
.
"},{"location":"api-reference/project.html#spec.connInfoSecretTarget","title":"connInfoSecretTarget","text":"key
(string, MinLength: 1). name
(string, MinLength: 1). spec
.PROJECT_CA_CERT
.
name
(string). Name of the secret resource to be created. By default, it is equal to the resource name.
"},{"location":"api-reference/projectvpc.html","title":"ProjectVPC","text":""},{"location":"api-reference/projectvpc.html#usage-example","title":"Usage example","text":"annotations
(object, AdditionalProperties: string). Annotations added to the secret.labels
(object, AdditionalProperties: string). Labels added to the secret.prefix
(string). Prefix for the secret's keys. Added \"as is\" without any transformations. By default, it is equal to the kind name in uppercase + underscore, e.g. KAFKA_
, REDIS_
, etc.
"},{"location":"api-reference/projectvpc.html#ProjectVPC","title":"ProjectVPC","text":"apiVersion: aiven.io/v1alpha1\nkind: ProjectVPC\nmetadata:\n name: my-project-vpc\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n project: aiven-project-name\n cloudName: google-europe-west1\n networkCidr: 10.0.0.0/24\n
"},{"location":"api-reference/projectvpc.html#spec","title":"spec","text":"apiVersion
(string). Value aiven.io/v1alpha1
.kind
(string). Value ProjectVPC
.metadata
(object). Data that identifies the object, including a name
string and optional namespace
.spec
(object). ProjectVPCSpec defines the desired state of ProjectVPC. See below for nested schema.ProjectVPC
.
cloudName
(string, Immutable, MaxLength: 256). Cloud the VPC is in.networkCidr
(string, Immutable, MaxLength: 36). Network address range used by the VPC like 192.168.0.0/24.project
(string, Immutable, MaxLength: 63, Format: ^[a-zA-Z0-9_-]*$
). The project the VPC belongs to.
"},{"location":"api-reference/projectvpc.html#spec.authSecretRef","title":"authSecretRef","text":"authSecretRef
(object). Authentication reference to Aiven token in a secret. See below for nested schema.spec
.
"},{"location":"api-reference/redis.html","title":"Redis","text":""},{"location":"api-reference/redis.html#usage-example","title":"Usage example","text":"key
(string, MinLength: 1). name
(string, MinLength: 1).
"},{"location":"api-reference/redis.html#Redis","title":"Redis","text":"apiVersion: aiven.io/v1alpha1\nkind: Redis\nmetadata:\n name: k8s-redis\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n connInfoSecretTarget:\n name: redis-token\n prefix: MY_SECRET_PREFIX_\n annotations:\n foo: bar\n labels:\n baz: egg\n\n project: my-aiven-project\n cloudName: google-europe-west1\n plan: startup-4\n\n maintenanceWindowDow: friday\n maintenanceWindowTime: 23:00:00\n\n userConfig:\n redis_maxmemory_policy: \"allkeys-random\"\n
"},{"location":"api-reference/redis.html#spec","title":"spec","text":"apiVersion
(string). Value aiven.io/v1alpha1
.kind
(string). Value Redis
.metadata
(object). Data that identifies the object, including a name
string and optional namespace
.spec
(object). RedisSpec defines the desired state of Redis. See below for nested schema.Redis
.
plan
(string, MaxLength: 128). Subscription plan.project
(string, Immutable, MaxLength: 63, Format: ^[a-zA-Z0-9_-]*$
). Target project.
"},{"location":"api-reference/redis.html#spec.authSecretRef","title":"authSecretRef","text":"authSecretRef
(object). Authentication reference to Aiven token in a secret. See below for nested schema.cloudName
(string, MaxLength: 256). Cloud the service runs in.connInfoSecretTarget
(object). Information regarding secret creation. Exposed keys: REDIS_HOST
, REDIS_PORT
, REDIS_USER
, REDIS_PASSWORD
. See below for nested schema.disk_space
(string, Format: ^[1-9][0-9]*(GiB|G)*
). The disk space of the service. Possible values depend on the service type, the cloud provider, and the project. Reducing it will result in the service re-balancing.maintenanceWindowDow
(string, Enum: monday
, tuesday
, wednesday
, thursday
, friday
, saturday
, sunday
). Day of week when maintenance operations should be performed. One of monday, tuesday, wednesday, etc.maintenanceWindowTime
(string, MaxLength: 8). Time of day when maintenance operations should be performed. UTC time in HH:mm:ss format.projectVPCRef
(object). ProjectVPCRef reference to ProjectVPC resource to use its ID as ProjectVPCID automatically. See below for nested schema.projectVpcId
(string, MaxLength: 36). Identifier of the VPC the service should be in, if any.serviceIntegrations
(array of objects, Immutable, MaxItems: 1). Service integrations to specify when creating a service. Not applied after initial service creation. See below for nested schema.tags
(object, AdditionalProperties: string). Tags are key-value pairs that allow you to categorize services.terminationProtection
(boolean). Prevent service from being deleted. It is recommended to have this enabled for all services.userConfig
(object). Redis specific user configuration options. See below for nested schema.spec
.
"},{"location":"api-reference/redis.html#spec.connInfoSecretTarget","title":"connInfoSecretTarget","text":"key
(string, MinLength: 1). name
(string, MinLength: 1). spec
.REDIS_HOST
, REDIS_PORT
, REDIS_USER
, REDIS_PASSWORD
.
name
(string). Name of the secret resource to be created. By default, it is equal to the resource name.
"},{"location":"api-reference/redis.html#spec.projectVPCRef","title":"projectVPCRef","text":"annotations
(object, AdditionalProperties: string). Annotations added to the secret.labels
(object, AdditionalProperties: string). Labels added to the secret.prefix
(string). Prefix for the secret's keys. Added \"as is\" without any transformations. By default, it is equal to the kind name in uppercase + underscore, e.g. KAFKA_
, REDIS_
, etc.spec
.
name
(string, MinLength: 1).
"},{"location":"api-reference/redis.html#spec.serviceIntegrations","title":"serviceIntegrations","text":"namespace
(string, MinLength: 1). spec
.
"},{"location":"api-reference/redis.html#spec.userConfig","title":"userConfig","text":"integrationType
(string, Enum: read_replica
). sourceServiceName
(string, MinLength: 1, MaxLength: 64). spec
.
"},{"location":"api-reference/redis.html#spec.userConfig.ip_filter","title":"ip_filter","text":"additional_backup_regions
(array of strings, MaxItems: 1). Additional Cloud Regions for Backup Replication.ip_filter
(array of objects, MaxItems: 1024). Allow incoming connections from CIDR address block, e.g. 10.20.0.0/16
. See below for nested schema.migration
(object). Migrate data from existing server. See below for nested schema.private_access
(object). Allow access to selected service ports from private networks. See below for nested schema.privatelink_access
(object). Allow access to selected service components through Privatelink. See below for nested schema.project_to_fork_from
(string, Immutable, MaxLength: 63). Name of another project to fork a service from. This has effect only when a new service is being created.public_access
(object). Allow access to selected service ports from the public Internet. See below for nested schema.recovery_basebackup_name
(string, Pattern: ^[a-zA-Z0-9-_:.]+$
, MaxLength: 128). Name of the basebackup to restore in forked service.redis_acl_channels_default
(string, Enum: allchannels
, resetchannels
). Determines default pub/sub channels' ACL for new users if ACL is not supplied. When this option is not defined, all_channels is assumed to keep backward compatibility. This option doesn't affect Redis configuration acl-pubsub-default.redis_io_threads
(integer, Minimum: 1, Maximum: 32). Set Redis IO thread count. Changing this will cause a restart of the Redis service.redis_lfu_decay_time
(integer, Minimum: 1, Maximum: 120). LFU maxmemory-policy counter decay time in minutes.redis_lfu_log_factor
(integer, Minimum: 0, Maximum: 100). Counter logarithm factor for volatile-lfu and allkeys-lfu maxmemory-policies.redis_maxmemory_policy
(string, Enum: noeviction
, allkeys-lru
, volatile-lru
, allkeys-random
, volatile-random
, volatile-ttl
, volatile-lfu
, allkeys-lfu
). Redis maxmemory-policy.redis_notify_keyspace_events
(string, Pattern: ^[KEg\\$lshzxeA]*$
, MaxLength: 32). Set notify-keyspace-events option.redis_number_of_databases
(integer, Minimum: 1, Maximum: 128). Set number of Redis databases. Changing this will cause a restart of the Redis service.redis_persistence
(string, Enum: off
, rdb
). When persistence is rdb
, Redis does RDB dumps every 10 minutes if any key is changed. RDB dumps are also taken according to the backup schedule for backup purposes. When persistence is off
, no RDB dumps or backups are taken, so data can be lost at any moment if the service is restarted for any reason or powered off. The service also can't be forked.redis_pubsub_client_output_buffer_limit
(integer, Minimum: 32, Maximum: 512). Set output buffer limit for pub/sub clients in MB. The value is the hard limit, the soft limit is 1/4 of the hard limit. When setting the limit, be mindful of the available memory in the selected service plan.redis_ssl
(boolean). Require SSL to access Redis.redis_timeout
(integer, Minimum: 0, Maximum: 31536000). Redis idle connection timeout in seconds.service_to_fork_from
(string, Immutable, MaxLength: 64). Name of another service to fork from. This has effect only when a new service is being created.static_ips
(boolean). Use static public IP addresses.spec.userConfig
.10.20.0.0/16
.
network
(string, MaxLength: 43). CIDR address block.
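A sketch combining several of the Redis options above; the values are illustrative:
userConfig:\n  redis_maxmemory_policy: allkeys-lru\n  redis_persistence: rdb  # RDB dumps and backups enabled\n  redis_timeout: 300  # idle connection timeout, seconds\n  redis_ssl: true\n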
"},{"location":"api-reference/redis.html#spec.userConfig.migration","title":"migration","text":"description
(string, MaxLength: 1024). Description for IP filter list entry.spec.userConfig
.
host
(string, MaxLength: 255). Hostname or IP address of the server where to migrate data from.port
(integer, Minimum: 1, Maximum: 65535). Port number of the server where to migrate data from.
"},{"location":"api-reference/redis.html#spec.userConfig.private_access","title":"private_access","text":"dbname
(string, MaxLength: 63). Database name for bootstrapping the initial connection.ignore_dbs
(string, MaxLength: 2048). Comma-separated list of databases that should be ignored during migration (supported by MySQL and PostgreSQL only at the moment).method
(string, Enum: dump
, replication
). The migration method to be used (currently supported only by Redis, Dragonfly, MySQL and PostgreSQL service types).password
(string, MaxLength: 256). Password for authentication with the server where to migrate data from.ssl
(boolean). The server where to migrate data from is secured with SSL.username
(string, MaxLength: 256). User name for authentication with the server where to migrate data from.spec.userConfig
.
"},{"location":"api-reference/redis.html#spec.userConfig.privatelink_access","title":"privatelink_access","text":"prometheus
(boolean). Allow clients to connect to prometheus with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.redis
(boolean). Allow clients to connect to redis with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.spec.userConfig
.
"},{"location":"api-reference/redis.html#spec.userConfig.public_access","title":"public_access","text":"prometheus
(boolean). Enable prometheus.redis
(boolean). Enable redis.spec.userConfig
.
"},{"location":"api-reference/serviceintegration.html","title":"ServiceIntegration","text":""},{"location":"api-reference/serviceintegration.html#usage-example","title":"Usage example","text":"prometheus
(boolean). Allow clients to connect to prometheus from the public internet for service nodes that are in a project VPC or another type of private network.redis
(boolean). Allow clients to connect to redis from the public internet for service nodes that are in a project VPC or another type of private network.
"},{"location":"api-reference/serviceintegration.html#ServiceIntegration","title":"ServiceIntegration","text":"apiVersion: aiven.io/v1alpha1\nkind: ServiceIntegration\nmetadata:\n name: my-service-integration\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n project: aiven-project-name\n\n integrationType: kafka_logs\n sourceServiceName: my-source-service-name\n destinationServiceName: my-destination-service-name\n\n kafkaLogs:\n kafka_topic: my-kafka-topic\n
"},{"location":"api-reference/serviceintegration.html#spec","title":"spec","text":"apiVersion
(string). Value aiven.io/v1alpha1
.kind
(string). Value ServiceIntegration
.metadata
(object). Data that identifies the object, including a name
string and optional namespace
.spec
(object). ServiceIntegrationSpec defines the desired state of ServiceIntegration. See below for nested schema.ServiceIntegration
.
integrationType
(string, Enum: alertmanager
, autoscaler
, caching
, cassandra_cross_service_cluster
, clickhouse_kafka
, clickhouse_postgresql
, dashboard
, datadog
, datasource
, external_aws_cloudwatch_logs
, external_aws_cloudwatch_metrics
, external_elasticsearch_logs
, external_google_cloud_logging
, external_opensearch_logs
, flink
, flink_external_kafka
, internal_connectivity
, jolokia
, kafka_connect
, kafka_logs
, kafka_mirrormaker
, logs
, m3aggregator
, m3coordinator
, metrics
, opensearch_cross_cluster_replication
, opensearch_cross_cluster_search
, prometheus
, read_replica
, rsyslog
, schema_registry_proxy
, stresstester
, thanosquery
, thanosstore
, vmalert
, Immutable). Type of the service integration accepted by the Aiven API. Some values may not be supported by the operator.project
(string, Immutable, MaxLength: 63, Format: ^[a-zA-Z0-9_-]*$
). Project the integration belongs to.
"},{"location":"api-reference/serviceintegration.html#spec.authSecretRef","title":"authSecretRef","text":"authSecretRef
(object). Authentication reference to Aiven token in a secret. See below for nested schema.clickhouseKafka
(object). Clickhouse Kafka configuration values. See below for nested schema.clickhousePostgresql
(object). Clickhouse PostgreSQL configuration values. See below for nested schema.datadog
(object). Datadog specific user configuration options. See below for nested schema.destinationEndpointId
(string, Immutable, MaxLength: 36). Destination endpoint for the integration (if any).destinationProjectName
(string, Immutable, MaxLength: 63). Destination project for the integration (if any).destinationServiceName
(string, Immutable, MaxLength: 64). Destination service for the integration (if any).externalAWSCloudwatchMetrics
(object). External AWS CloudWatch Metrics integration Logs configuration values. See below for nested schema.kafkaConnect
(object). Kafka Connect service configuration values. See below for nested schema.kafkaLogs
(object). Kafka logs configuration values. See below for nested schema.kafkaMirrormaker
(object). Kafka MirrorMaker configuration values. See below for nested schema.logs
(object). Logs configuration values. See below for nested schema.metrics
(object). Metrics configuration values. See below for nested schema.sourceEndpointID
(string, Immutable, MaxLength: 36). Source endpoint for the integration (if any).sourceProjectName
(string, Immutable, MaxLength: 63). Source project for the integration (if any).sourceServiceName
(string, Immutable, MaxLength: 64). Source service for the integration (if any).spec
.
"},{"location":"api-reference/serviceintegration.html#spec.clickhouseKafka","title":"clickhouseKafka","text":"key
(string, MinLength: 1). name
(string, MinLength: 1). spec
.
"},{"location":"api-reference/serviceintegration.html#spec.clickhouseKafka.tables","title":"tables","text":"tables
(array of objects, MaxItems: 100). Tables to create. See below for nested schema.spec.clickhouseKafka
.
columns
(array of objects, MaxItems: 100). Table columns. See below for nested schema.data_format
(string, Enum: Avro
, CSV
, JSONAsString
, JSONCompactEachRow
, JSONCompactStringsEachRow
, JSONEachRow
, JSONStringsEachRow
, MsgPack
, TSKV
, TSV
, TabSeparated
, RawBLOB
, AvroConfluent
). Message data format.group_name
(string, MinLength: 1, MaxLength: 249). Kafka consumers group.name
(string, MinLength: 1, MaxLength: 40). Name of the table.topics
(array of objects, MaxItems: 100). Kafka topics. See below for nested schema.
"},{"location":"api-reference/serviceintegration.html#spec.clickhouseKafka.tables.columns","title":"columns","text":"auto_offset_reset
(string, Enum: smallest
, earliest
, beginning
, largest
, latest
, end
). Action to take when there is no initial offset in the offset store or the desired offset is out of range.date_time_input_format
(string, Enum: basic
, best_effort
, best_effort_us
). Method to read DateTime from text input formats.handle_error_mode
(string, Enum: default
, stream
). How to handle errors for Kafka engine.max_block_size
(integer, Minimum: 0, Maximum: 1000000000). Number of rows collected by poll(s) for flushing data from Kafka.max_rows_per_message
(integer, Minimum: 1, Maximum: 1000000000). The maximum number of rows produced in one Kafka message for row-based formats.num_consumers
(integer, Minimum: 1, Maximum: 10). The number of consumers per table per replica.poll_max_batch_size
(integer, Minimum: 0, Maximum: 1000000000). Maximum amount of messages to be polled in a single Kafka poll.skip_broken_messages
(integer, Minimum: 0, Maximum: 1000000000). Skip at least this number of broken messages from Kafka topic per block.spec.clickhouseKafka.tables
.
"},{"location":"api-reference/serviceintegration.html#spec.clickhouseKafka.tables.topics","title":"topics","text":"name
(string, MinLength: 1, MaxLength: 40). Column name.type
(string, MinLength: 1, MaxLength: 1000). Column type.spec.clickhouseKafka.tables
.
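A sketch of a clickhouseKafka table definition using the fields above; the table, group, column, and topic names are assumptions:
clickhouseKafka:\n  tables:\n    - name: kafka_events  # assumed table name\n      group_name: clickhouse-ingest  # assumed consumer group\n      data_format: JSONEachRow\n      columns:\n        - name: id\n          type: UInt64\n        - name: payload\n          type: String\n      topics:\n        - name: events  # assumed topic name\n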
"},{"location":"api-reference/serviceintegration.html#spec.clickhousePostgresql","title":"clickhousePostgresql","text":"name
(string, MinLength: 1, MaxLength: 249). Name of the topic.spec
.
"},{"location":"api-reference/serviceintegration.html#spec.clickhousePostgresql.databases","title":"databases","text":"databases
(array of objects, MaxItems: 10). Databases to expose. See below for nested schema.spec.clickhousePostgresql
.
"},{"location":"api-reference/serviceintegration.html#spec.datadog","title":"datadog","text":"database
(string, MinLength: 1, MaxLength: 63). PostgreSQL database to expose.schema
(string, MinLength: 1, MaxLength: 63). PostgreSQL schema to expose.spec
.
"},{"location":"api-reference/serviceintegration.html#spec.datadog.datadog_tags","title":"datadog_tags","text":"datadog_dbm_enabled
(boolean). Enable Datadog Database Monitoring.datadog_tags
(array of objects, MaxItems: 32). Custom tags provided by user. See below for nested schema.exclude_consumer_groups
(array of strings, MaxItems: 1024). List of consumer groups to exclude.exclude_topics
(array of strings, MaxItems: 1024). List of topics to exclude.include_consumer_groups
(array of strings, MaxItems: 1024). List of consumer groups to include.include_topics
(array of strings, MaxItems: 1024). List of topics to include.kafka_custom_metrics
(array of strings, MaxItems: 1024). List of custom metrics.max_jmx_metrics
(integer, Minimum: 10, Maximum: 100000). Maximum number of JMX metrics to send.opensearch
(object). Datadog Opensearch Options. See below for nested schema.redis
(object). Datadog Redis Options. See below for nested schema.spec.datadog
.
tag
(string, MinLength: 1, MaxLength: 200). Tag format and usage are described here: https://docs.datadoghq.com/getting_started/tagging. Tags with the prefix aiven-
are reserved for Aiven.
"},{"location":"api-reference/serviceintegration.html#spec.datadog.opensearch","title":"opensearch","text":"comment
(string, MaxLength: 1024). Optional tag explanation.spec.datadog
.
"},{"location":"api-reference/serviceintegration.html#spec.datadog.redis","title":"redis","text":"index_stats_enabled
(boolean). Enable Datadog Opensearch Index Monitoring.pending_task_stats_enabled
(boolean). Enable Datadog Opensearch Pending Task Monitoring.pshard_stats_enabled
(boolean). Enable Datadog Opensearch Primary Shard Monitoring.spec.datadog
.
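A sketch of the datadog block using the fields above; the tag and topic values are assumptions:
datadog:\n  datadog_dbm_enabled: true\n  datadog_tags:\n    - tag: env:prod  # assumed custom tag\n      comment: environment tag  # optional explanation\n  exclude_topics:\n    - internal-topic  # assumed topic to exclude\n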
"},{"location":"api-reference/serviceintegration.html#spec.externalAWSCloudwatchMetrics","title":"externalAWSCloudwatchMetrics","text":"command_stats_enabled
(boolean). Enable command_stats option in the agent's configuration.spec
.
"},{"location":"api-reference/serviceintegration.html#spec.externalAWSCloudwatchMetrics.dropped_metrics","title":"dropped_metrics","text":"dropped_metrics
(array of objects, MaxItems: 1024). Metrics to not send to AWS CloudWatch (takes precedence over extra_metrics). See below for nested schema.extra_metrics
(array of objects, MaxItems: 1024). Metrics to allow through to AWS CloudWatch (in addition to default metrics). See below for nested schema.spec.externalAWSCloudwatchMetrics
.
"},{"location":"api-reference/serviceintegration.html#spec.externalAWSCloudwatchMetrics.extra_metrics","title":"extra_metrics","text":"field
(string, MaxLength: 1000). Identifier of a value in the metric.metric
(string, MaxLength: 1000). Identifier of the metric.spec.externalAWSCloudwatchMetrics
.
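A sketch of dropped_metrics and extra_metrics entries; the metric and field identifiers are assumptions for illustration only:
externalAWSCloudwatchMetrics:\n  dropped_metrics:\n    - metric: java.lang.GarbageCollector  # assumed metric identifier\n      field: CollectionCount  # assumed value identifier\n  extra_metrics:\n    - metric: kafka.server.BrokerTopicMetrics  # assumed metric identifier\n      field: MessagesInPerSec  # assumed value identifier\n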
"},{"location":"api-reference/serviceintegration.html#spec.kafkaConnect","title":"kafkaConnect","text":"field
(string, MaxLength: 1000). Identifier of a value in the metric.metric
(string, MaxLength: 1000). Identifier of the metric.spec
.
"},{"location":"api-reference/serviceintegration.html#spec.kafkaConnect.kafka_connect","title":"kafka_connect","text":"kafka_connect
(object). Kafka Connect service configuration values. See below for nested schema.spec.kafkaConnect
.
"},{"location":"api-reference/serviceintegration.html#spec.kafkaLogs","title":"kafkaLogs","text":"config_storage_topic
(string, MaxLength: 249). The name of the topic where connector and task configuration data are stored. This must be the same for all workers with the same group_id.group_id
(string, MaxLength: 249). A unique string that identifies the Connect cluster group this worker belongs to.offset_storage_topic
(string, MaxLength: 249). The name of the topic where connector and task configuration offsets are stored. This must be the same for all workers with the same group_id.status_storage_topic
(string, MaxLength: 249). The name of the topic where connector and task configuration status updates are stored. This must be the same for all workers with the same group_id.spec
.
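A sketch of the Kafka Connect worker settings above; the group and topic names are assumptions:
kafkaConnect:\n  kafka_connect:\n    group_id: connect  # assumed Connect cluster group\n    config_storage_topic: __connect_configs\n    offset_storage_topic: __connect_offsets\n    status_storage_topic: __connect_status\n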
kafka_topic
(string, MinLength: 1, MaxLength: 249). Topic name.
"},{"location":"api-reference/serviceintegration.html#spec.kafkaMirrormaker","title":"kafkaMirrormaker","text":"selected_log_fields
(array of strings, MaxItems: 5). The list of logging fields that will be sent to the integration logging service. The MESSAGE and timestamp fields are always sent.spec
.
"},{"location":"api-reference/serviceintegration.html#spec.kafkaMirrormaker.kafka_mirrormaker","title":"kafka_mirrormaker","text":"cluster_alias
(string, Pattern: ^[a-zA-Z0-9_.-]+$
, MaxLength: 128). The alias under which the Kafka cluster is known to MirrorMaker. Can contain the following symbols: ASCII alphanumerics, .
, _
, and -
.kafka_mirrormaker
(object). Kafka MirrorMaker configuration values. See below for nested schema.spec.kafkaMirrormaker
.
"},{"location":"api-reference/serviceintegration.html#spec.logs","title":"logs","text":"consumer_fetch_min_bytes
(integer, Minimum: 1, Maximum: 5242880). The minimum amount of data the server should return for a fetch request.producer_batch_size
(integer, Minimum: 0, Maximum: 5242880). The batch size in bytes the producer will attempt to collect before publishing to the broker.producer_buffer_memory
(integer, Minimum: 5242880, Maximum: 134217728). The number of bytes the producer can use for buffering data before publishing to the broker.producer_compression_type
(string, Enum: gzip
, snappy
, lz4
, zstd
, none
). Specify the default compression type for producers. This configuration accepts the standard compression codecs (gzip
, snappy
, lz4
, zstd
). It additionally accepts none
which is the default and equivalent to no compression.producer_linger_ms
(integer, Minimum: 0, Maximum: 5000). The linger time (ms) to wait for new data to arrive for publishing.producer_max_request_size
(integer, Minimum: 0, Maximum: 268435456). The maximum request size in bytes.spec
.
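A sketch of the MirrorMaker options above; the cluster alias is an assumption and the numeric values are illustrative:
kafkaMirrormaker:\n  cluster_alias: source-cluster  # assumed alias\n  kafka_mirrormaker:\n    consumer_fetch_min_bytes: 1024\n    producer_compression_type: lz4\n    producer_linger_ms: 100\n    producer_max_request_size: 1048576\n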
"},{"location":"api-reference/serviceintegration.html#spec.metrics","title":"metrics","text":"elasticsearch_index_days_max
(integer, Minimum: 1, Maximum: 10000). Elasticsearch index retention limit.elasticsearch_index_prefix
(string, MinLength: 1, MaxLength: 1024). Elasticsearch index prefix.selected_log_fields
(array of strings, MaxItems: 5). The list of logging fields that will be sent to the integration logging service. The MESSAGE and timestamp fields are always sent.spec
.
"},{"location":"api-reference/serviceintegration.html#spec.metrics.source_mysql","title":"source_mysql","text":"database
(string, Pattern: ^[_A-Za-z0-9][-_A-Za-z0-9]{0,39}$
, MaxLength: 40). Name of the database where to store metric datapoints. Only affects PostgreSQL destinations. Defaults to metrics
. Note that this must be the same for all metrics integrations that write data to the same PostgreSQL service.retention_days
(integer, Minimum: 0, Maximum: 10000). Number of days to keep old metrics. Only affects PostgreSQL destinations. Set to 0 for no automatic cleanup. Defaults to 30 days.ro_username
(string, Pattern: ^[_A-Za-z0-9][-._A-Za-z0-9]{0,39}$
, MaxLength: 40). Name of a user that can be used to read metrics. This will be used for Grafana integration (if enabled) to prevent Grafana users from making undesired changes. Only affects PostgreSQL destinations. Defaults to metrics_reader
. Note that this must be the same for all metrics integrations that write data to the same PostgreSQL service.source_mysql
(object). Configuration options for metrics where source service is MySQL. See below for nested schema.username
(string, Pattern: ^[_A-Za-z0-9][-._A-Za-z0-9]{0,39}$
, MaxLength: 40). Name of the user used to write metrics. Only affects PostgreSQL destinations. Defaults to metrics_writer
. Note that this must be the same for all metrics integrations that write data to the same PostgreSQL service.spec.metrics
.
"},{"location":"api-reference/serviceintegration.html#spec.metrics.source_mysql.telegraf","title":"telegraf","text":"telegraf
(object). Configuration options for Telegraf MySQL input plugin. See below for nested schema.spec.metrics.source_mysql
.
"},{"location":"api-reference/serviceuser.html","title":"ServiceUser","text":""},{"location":"api-reference/serviceuser.html#usage-example","title":"Usage example","text":"gather_event_waits
(boolean). Gather metrics from PERFORMANCE_SCHEMA.EVENT_WAITS.gather_file_events_stats
(boolean). Gather metrics from PERFORMANCE_SCHEMA.FILE_SUMMARY_BY_EVENT_NAME.gather_index_io_waits
(boolean). Gather metrics from PERFORMANCE_SCHEMA.TABLE_IO_WAITS_SUMMARY_BY_INDEX_USAGE.gather_info_schema_auto_inc
(boolean). Gather auto_increment columns and max values from information schema.gather_innodb_metrics
(boolean). Gather metrics from INFORMATION_SCHEMA.INNODB_METRICS.gather_perf_events_statements
(boolean). Gather metrics from PERFORMANCE_SCHEMA.EVENTS_STATEMENTS_SUMMARY_BY_DIGEST.gather_process_list
(boolean). Gather thread state counts from INFORMATION_SCHEMA.PROCESSLIST.gather_slave_status
(boolean). Gather metrics from SHOW SLAVE STATUS command output.gather_table_io_waits
(boolean). Gather metrics from PERFORMANCE_SCHEMA.TABLE_IO_WAITS_SUMMARY_BY_TABLE.gather_table_lock_waits
(boolean). Gather metrics from PERFORMANCE_SCHEMA.TABLE_LOCK_WAITS.gather_table_schema
(boolean). Gather metrics from INFORMATION_SCHEMA.TABLES.perf_events_statements_digest_text_limit
(integer, Minimum: 1, Maximum: 2048). Truncates digest text from perf_events_statements to this many characters.perf_events_statements_limit
(integer, Minimum: 1, Maximum: 4000). Limits metrics from perf_events_statements.perf_events_statements_time_limit
(integer, Minimum: 1, Maximum: 2592000). Only include perf_events_statements whose last-seen timestamp is less than this many seconds old.
"},{"location":"api-reference/serviceuser.html#ServiceUser","title":"ServiceUser","text":"apiVersion: aiven.io/v1alpha1\nkind: ServiceUser\nmetadata:\n name: my-service-user\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n connInfoSecretTarget:\n name: service-user-secret\n prefix: MY_SECRET_PREFIX_\n annotations:\n foo: bar\n labels:\n baz: egg\n\n project: aiven-project-name\n serviceName: my-service-name\n
"},{"location":"api-reference/serviceuser.html#spec","title":"spec","text":"apiVersion
(string). Value aiven.io/v1alpha1
.kind
(string). Value ServiceUser
.metadata
(object). Data that identifies the object, including a name
string and optional namespace
.spec
(object). ServiceUserSpec defines the desired state of ServiceUser. See below for nested schema.ServiceUser
.
project
(string, MaxLength: 63, Format: ^[a-zA-Z0-9_-]*$
). Project to link the user to.serviceName
(string, MaxLength: 63). Service to link the user to.
"},{"location":"api-reference/serviceuser.html#spec.authSecretRef","title":"authSecretRef","text":"authSecretRef
(object). Authentication reference to Aiven token in a secret. See below for nested schema.authentication
(string, Enum: caching_sha2_password
, mysql_native_password
). Authentication details.connInfoSecretTarget
(object). Information regarding secret creation. Exposed keys: SERVICEUSER_HOST
, SERVICEUSER_PORT
, SERVICEUSER_USERNAME
, SERVICEUSER_PASSWORD
, SERVICEUSER_CA_CERT
, SERVICEUSER_ACCESS_CERT
, SERVICEUSER_ACCESS_KEY
. See below for nested schema.spec
.
"},{"location":"api-reference/serviceuser.html#spec.connInfoSecretTarget","title":"connInfoSecretTarget","text":"key
(string, MinLength: 1). name
(string, MinLength: 1). spec
.SERVICEUSER_HOST
, SERVICEUSER_PORT
, SERVICEUSER_USERNAME
, SERVICEUSER_PASSWORD
, SERVICEUSER_CA_CERT
, SERVICEUSER_ACCESS_CERT
, SERVICEUSER_ACCESS_KEY
.
name
(string). Name of the secret resource to be created. By default, is equal to the resource name.
"},{"location":"contributing/index.html","title":"Contributing Guidelines","text":"annotations
(object, AdditionalProperties: string). Annotations added to the secret.labels
(object, AdditionalProperties: string). Labels added to the secret.prefix
(string). Prefix for the secret's keys. Added \"as is\" without any transformations. By default, is equal to the kind name in uppercase + underscore, e.g. KAFKA_
, REDIS_
, etc.
"},{"location":"contributing/index.html#code-of-conduct","title":"Code of Conduct","text":""},{"location":"contributing/index.html#our-pledge","title":"Our Pledge","text":"
"},{"location":"contributing/index.html#commit-messages","title":"Commit Messages","text":"
"},{"location":"contributing/developer-guide.html#resource-generation","title":"Resource generation","text":"git clone git@github.com:aiven/aiven-operator.git\ncd aiven-operator\n
The project uses the make build system.
"},{"location":"contributing/developer-guide.html#testing","title":"Testing","text":"make build\n
-w0
flag, some tests may not work properly.
kind create cluster --image kindest/node:v1.24.0 --wait 5m\n
AIVEN_TOKEN
- your authentication token AIVEN_PROJECT_NAME
- your Aiven project name to run services in
make e2e-setup-kind\n
WEBHOOKS_ENABLED=false make e2e-setup-kind\n
AIVEN_PROJECT_NAME
):
make test-e2e-preinstalled\n
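Putting it together, a typical end-to-end test session might look like this sketch; the token and project values are placeholders:
export AIVEN_TOKEN=<your-token>\nexport AIVEN_PROJECT_NAME=<your-project>\nmake e2e-setup-kind\nmake test-e2e-preinstalled\n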
"},{"location":"contributing/developer-guide.html#documentation","title":"Documentation","text":"kind delete cluster\n
make serve-docs\n
Then open the http://localhost:8000/aiven-operator/ page in your web browser.
"},{"location":"contributing/resource-generation.html","title":"Resource generation","text":"make docs\n
make generate
command is called by a GitHub action, and the PR is then ready for review.
"},{"location":"contributing/resource-generation.html#make-generate","title":"make generate","text":"flowchart TB\n API(Aiven API) <-.->|polls schema updates| Schema([go-api-schemas])\n Bot(dependabot) <-.->|polls updates| Schema \n Bot-->|pull request|UpdateOP[/\"\u2728 $ make generate \u2728\"/]\n UpdateOP-->|review| OP([operator repository])
./<api-reference-docs>/example/
, and if it finds one, validates it against the CRD. Each CRD includes an OpenAPI v3 schema, which Kubernetes itself also uses to validate user input.
"},{"location":"contributing/resource-generation.html#charts-version-bump","title":"Charts version bump","text":"flowchart TB\n Make[/$ make generate/]-->Generator(userconfig generator<br> creates/updates structs using updated spec)\n Generator-->|go: KafkaUserConfig struct| K8S(controller-gen<br> adds k8s methods to structs)\n K8S-->|go files| CRD(controller-gen<br> creates CRDs out of structs)\n CRD-->|CRD: aiven.io_kafkas.yaml| Docs(docs generator)\n subgraph API reference generation\n Docs-->|aiven.io_kafkas.yaml|Reference(creates reference<br> out of CRD)\n Docs-->|examples/kafka.yaml,<br> aiven.io_kafkas.yaml|Examples(validates example<br> using CRD)\n Examples--> Markdown(creates docs out of CRDs, adds examples)\n Reference-->Markdown(kafka.md)\n end\n CRD-->|yaml files|Charts(charts generator<br> updates helm charts<br> and the changelog)\n Charts-->ToRelease(\"Ready to release \ud83c\udf89\")\n Markdown-->ToRelease
"},{"location":"installation/helm.html","title":"Installing with Helm (recommended)","text":""},{"location":"installation/helm.html#installing","title":"Installing","text":"make version=v1.0.0 charts\n
"},{"location":"installation/helm.html#installing-custom-resource-definitions","title":"Installing Custom Resource Definitions","text":"helm repo add aiven https://aiven.github.io/aiven-charts && helm repo update\n
helm install aiven-operator-crds aiven/aiven-operator-crds\n
kubectl api-resources --api-group=aiven.io\n
The output is similar to the following:
NAME SHORTNAMES APIVERSION NAMESPACED KIND\nconnectionpools aiven.io/v1alpha1 true ConnectionPool\ndatabases aiven.io/v1alpha1 true Database\n... < several omitted lines >\n
"},{"location":"installation/helm.html#installing-the-operator","title":"Installing the Operator","text":"helm install aiven-operator aiven/aiven-operator\n
Note
Installation will fail if webhooks are enabled and the cert-manager CRDs are not installed.
Verify the installation:
helm status aiven-operator\n
The output is similar to the following:
NAME: aiven-operator\nLAST DEPLOYED: Fri Sep 10 15:23:26 2021\nNAMESPACE: default\nSTATUS: deployed\nREVISION: 1\nTEST SUITE: None\n
It is also possible to install the operator without webhooks enabled:
helm install aiven-operator aiven/aiven-operator --set webhooks.enabled=false\n
"},{"location":"installation/helm.html#configuration-options","title":"Configuration Options","text":"Please refer to the values.yaml of the chart.
"},{"location":"installation/helm.html#installing-without-full-cluster-administrator-access","title":"Installing without full cluster administrator access","text":"There can be some scenarios where the individual installing the Helm chart does not have the ability to provision cluster-wide resources (e.g. ClusterRoles/ClusterRoleBindings). In this scenario, you can have a cluster administrator manually install the ClusterRole and ClusterRoleBinding the operator requires prior to installing the Helm chart specifying false
for the clusterRole.create
attribute.
Important
Please see this page for more information.
Find out the name of your deployment:
helm list\n
The output has the name of each deployment similar to the following:
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION\naiven-operator default 1 2021-09-09 10:56:14.623700249 +0200 CEST deployed aiven-operator-v0.1.0 v0.1.0 \naiven-operator-crds default 1 2021-09-09 10:56:05.736411868 +0200 CEST deployed aiven-operator-crds-v0.1.0 v0.1.0\n
Remove the CRDs:
helm uninstall aiven-operator-crds\n
The confirmation message is similar to the following:
release \"aiven-operator-crds\" uninstalled\n
Remove the operator:
helm uninstall aiven-operator\n
The confirmation message is similar to the following:
release \"aiven-operator\" uninstalled\n
"},{"location":"installation/kubectl.html","title":"Installing with kubectl","text":""},{"location":"installation/kubectl.html#installing","title":"Installing","text":"Before you start, make sure you have the prerequisites.
All Aiven Operator for Kubernetes components can be installed from one YAML file that is uploaded for every release:
kubectl apply -f https://github.com/aiven/aiven-operator/releases/latest/download/deployment.yaml\n
By default the Deployment is installed into the aiven-operator-system
namespace.
Assuming you installed version vX.Y.Z
of the operator it can be uninstalled via
kubectl delete -f https://github.com/aiven/aiven-operator/releases/download/vX.Y.Z/deployment.yaml\n
"},{"location":"installation/prerequisites.html","title":"Prerequisites","text":"The Aiven Operator for Kubernetes supports all major Kubernetes distributions, both locally and in the cloud.
Make sure you have the following:
The Aiven Operator for Kubernetes uses cert-manager
to configure the service reference of our webhooks.
Please follow the installation instructions on their website.
Note
This is not required in the Helm installation if you select to disable webhooks, but that is not recommended outside of playground use. The Aiven Operator for Kubernetes uses webhooks for setting defaults and enforcing invariants that are expected by the aiven API and will lead to errors if ignored. In the future webhooks will also be used for conversion and supporting multiple CRD versions.
"},{"location":"installation/uninstalling.html","title":"Uninstalling","text":"Danger
Uninstalling the Aiven Operator for Kubernetes can remove the resources created in Aiven, possibly resulting in data loss.
Depending on your installation, please follow one of:
Aiven resources need to have an accompanying secret that contains the token that is used to authorize the manipulation of that resource. If that token expired then you will not be able to delete the custom resource and deletion will also hang until the situation is resolved. The recommended approach to deal with that situation is to patch a valid token into the secret again so that proper cleanup of aiven resources can take place.
"},{"location":"installation/uninstalling.html#hanging-deletions","title":"Hanging deletions","text":"To protect the secrets that the operator is using from deletion, it adds the finalizer finalizers.aiven.io/needed-to-delete-services
to the secret. This solves a race condition that happens when deleting a namespace, where the secret can be deleted before the resource that uses it. When the controller is deleted, it may not clean up the finalizers from all secrets. If a secret with this finalizer is blocking the deletion of a namespace, for now run
kubectl patch secret <offending-secret> -p '{\"metadata\":{\"finalizers\":null}}' --type=merge\n
to remove the finalizer.
"},{"location":"resources/cassandra.html","title":"Cassandra","text":"Aiven for Apache Cassandra\u00ae is a distributed database designed to handle large volumes of writes.
Note
Before going through this guide, make sure you have a Kubernetes cluster with the operator installed and a Kubernetes Secret with an Aiven authentication token.
"},{"location":"resources/cassandra.html#creating-a-cassandra-instance","title":"Creating a Cassandra instance","text":"1. Create a file named cassandra-sample.yaml
, and add the following content:
apiVersion: aiven.io/v1alpha1\nkind: Cassandra\nmetadata:\n name: cassandra-sample\nspec:\n # gets the authentication token from the `aiven-token` Secret\n authSecretRef:\n name: aiven-token\n key: token\n\n # outputs the Cassandra connection on the `cassandra-secret` Secret\n connInfoSecretTarget:\n name: cassandra-secret\n\n # add your Project name here\n project: <your-project-name>\n\n # cloud provider and plan of your choice\n # you can check all of the possibilities here https://aiven.io/pricing\n cloudName: google-europe-west1\n plan: startup-4\n\n # general Aiven configuration\n maintenanceWindowDow: friday\n maintenanceWindowTime: 23:00:00\n
2. Create the service by applying the configuration:
kubectl apply -f cassandra-sample.yaml \n
The output is:
cassandra.aiven.io/cassandra-sample created\n
3. Review the resource you created with this command:
kubectl describe cassandra.aiven.io cassandra-sample\n
The output is similar to the following:
...\nStatus:\n Conditions:\n Last Transition Time: 2023-01-31T10:17:25Z\n Message: Instance was created or update on Aiven side\n Reason: Created\n Status: True\n Type: Initialized\n Last Transition Time: 2023-01-31T10:24:00Z\n Message: Instance is running on Aiven side\n Reason: CheckRunning\n Status: True\n Type: Running\n State: RUNNING\n...\n
The resource can be in the REBUILDING
state for a few minutes. Once the state changes to RUNNING
, you can access the resource.
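Instead of polling with describe, you can block until the Running condition shown in the status output above becomes true; the timeout value here is only an example:
kubectl wait cassandra.aiven.io/cassandra-sample --for=condition=Running --timeout=20m\n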
For your convenience, the operator automatically stores the Cassandra connection information in a Secret created with the name specified on the connInfoSecretTarget
field.
To view the details of the Secret, use the following command:
kubectl describe secret cassandra-secret \n
The output is similar to the following:
Name: cassandra-secret\nNamespace: default\nLabels: <none>\nAnnotations: <none>\n\nType: Opaque\n\nData\n====\nCASSANDRA_HOSTS: 59 bytes\nCASSANDRA_PASSWORD: 24 bytes\nCASSANDRA_PORT: 5 bytes\nCASSANDRA_URI: 66 bytes\nCASSANDRA_USER: 8 bytes\nCASSANDRA_HOST: 60 bytes\n
You can use jq to quickly decode the Secret:
kubectl get secret cassandra-secret -o json | jq '.data | map_values(@base64d)'\n
The output is similar to the following:
{\n \"CASSANDRA_HOST\": \"<secret>\",\n \"CASSANDRA_HOSTS\": \"<secret>\",\n \"CASSANDRA_PASSWORD\": \"<secret>\",\n \"CASSANDRA_PORT\": \"14609\",\n \"CASSANDRA_URI\": \"<secret>\",\n \"CASSANDRA_USER\": \"avnadmin\"\n}\n
"},{"location":"resources/cassandra.html#creating-a-cassandra-user","title":"Creating a Cassandra user","text":"You can create service users for your instance of Aiven for Apache Cassandra. Service users are unique to this instance and are not shared with any other services.
1. Create a file named cassandra-service-user.yaml:
apiVersion: aiven.io/v1alpha1\nkind: ServiceUser\nmetadata:\n name: cassandra-service-user\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n connInfoSecretTarget:\n name: cassandra-service-user-secret\n\n project: <your-project-name>\n serviceName: cassandra-sample\n
2. Create the user by applying the configuration:
kubectl apply -f cassandra-service-user.yaml\n
The ServiceUser
resource generates a Secret with connection information.
3. View the details of the Secret using the following command:
kubectl get secret cassandra-service-user-secret -o json | jq '.data | map_values(@base64d)'\n
The output is similar to the following:
{\n \"ACCESS_CERT\": \"<secret>\",\n \"ACCESS_KEY\": \"<secret>\",\n \"CA_CERT\": \"<secret>\",\n \"HOST\": \"<secret>\",\n \"PASSWORD\": \"<secret>\",\n \"PORT\": \"14609\",\n \"USERNAME\": \"cassandra-service-user\"\n}\n
You can connect to the Cassandra instance using these credentials and the host information from the cassandra-secret
Secret.
Aiven for MySQL is a fully managed relational database service, deployable in the cloud of your choice.
Before going through this guide, make sure you have a Kubernetes cluster with the operator installed and a Kubernetes Secret with an Aiven authentication token.
"},{"location":"resources/mysql.html#creating-a-mysql-instance","title":"Creating a MySQL instance","text":"1. Create a file named mysql-sample.yaml
, and add the following content:
apiVersion: aiven.io/v1alpha1\nkind: MySQL\nmetadata:\n name: mysql-sample\nspec:\n # gets the authentication token from the `aiven-token` Secret\n authSecretRef:\n name: aiven-token\n key: token\n\n # outputs the MySQL connection on the `mysql-secret` Secret\n connInfoSecretTarget:\n name: mysql-secret\n\n # add your Project name here\n project: <your-project-name>\n\n # cloud provider and plan of your choice\n # you can check all of the possibilities here https://aiven.io/pricing\n cloudName: google-europe-west1\n plan: startup-4\n\n # general Aiven configuration\n maintenanceWindowDow: friday\n maintenanceWindowTime: 23:00:00\n
2. Create the service by applying the configuration:
kubectl apply -f mysql-sample.yaml \n
3. Review the resource you created with this command:
kubectl describe mysql.aiven.io mysql-sample\n
The output is similar to the following:
...\nStatus:\n Conditions:\n Last Transition Time: 2023-02-22T15:43:44Z\n Message: Instance was created or update on Aiven side\n Reason: Created\n Status: True\n Type: Initialized\n Last Transition Time: 2023-02-22T15:43:44Z\n Message: Instance was created or update on Aiven side, status remains unknown\n Reason: Created\n Status: Unknown\n Type: Running\n State: REBUILDING\n...\n
The resource will be in the REBUILDING
state for a few minutes. Once the state changes to RUNNING
, you can access the resource.
For your convenience, the operator automatically stores the MySQL connection information in a Secret created with the name specified on the connInfoSecretTarget
field.
To view the details of the Secret, use the following command:
kubectl describe secret mysql-secret \n
The output is similar to the following:
Name: mysql-secret\nNamespace: default\nLabels: <none>\nAnnotations: <none>\n\nType: Opaque\n\nData\n====\nMYSQL_PORT: 5 bytes\nMYSQL_SSL_MODE: 8 bytes\nMYSQL_URI: 115 bytes\nMYSQL_USER: 8 bytes\nMYSQL_DATABASE: 9 bytes\nMYSQL_HOST: 39 bytes\nMYSQL_PASSWORD: 24 bytes\n
You can use jq to quickly decode the Secret:
kubectl get secret mysql-secret -o json | jq '.data | map_values(@base64d)'\n
The output is similar to the following:
{\n \"MYSQL_DATABASE\": \"defaultdb\",\n \"MYSQL_HOST\": \"<secret>\",\n \"MYSQL_PASSWORD\": \"<secret>\",\n \"MYSQL_PORT\": \"12691\",\n \"MYSQL_SSL_MODE\": \"REQUIRED\",\n \"MYSQL_URI\": \"<secret>\",\n \"MYSQL_USER\": \"avnadmin\"\n}\n
"},{"location":"resources/mysql.html#creating-a-mysql-user","title":"Creating a MySQL user","text":"You can create service users for your instance of Aiven for MySQL. Service users are unique to this instance and are not shared with any other services.
1. Create a file named mysql-service-user.yaml:
apiVersion: aiven.io/v1alpha1\nkind: ServiceUser\nmetadata:\n name: mysql-service-user\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n connInfoSecretTarget:\n name: mysql-service-user-secret\n\n project: <your-project-name>\n serviceName: mysql-sample\n
2. Create the user by applying the configuration:
kubectl apply -f mysql-service-user.yaml\n
The ServiceUser
resource generates a Secret with connection information.
3. View the details of the Secret using jq:
kubectl get secret mysql-service-user-secret -o json | jq '.data | map_values(@base64d)'\n
The output is similar to the following:
{\n \"ACCESS_CERT\": \"<secret>\",\n \"ACCESS_KEY\": \"<secret>\",\n \"CA_CERT\": \"<secret>\",\n \"HOST\": \"<secret>\",\n \"PASSWORD\": \"<secret>\",\n \"PORT\": \"14609\",\n \"USERNAME\": \"mysql-service-user\"\n}\n
You can connect to the MySQL instance using these credentials and the host information from the mysql-secret
Secret.
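As a quick connectivity check, you can run a short-lived Pod with the mysql client and the credentials from the mysql-secret Secret. This is a minimal sketch; the image tag and query are illustrative:
apiVersion: v1\nkind: Pod\nmetadata:\n name: mysql-test-connection\nspec:\n restartPolicy: Never\n containers:\n - image: mysql:8\n name: mysql\n # runs a single query against the managed service and exits\n command: [ 'sh', '-c', 'mysql --host=\"$MYSQL_HOST\" --port=\"$MYSQL_PORT\" --user=\"$MYSQL_USER\" --password=\"$MYSQL_PASSWORD\" \"$MYSQL_DATABASE\" -e \"SELECT VERSION();\"' ]\n\n # the mysql-secret Secret becomes environment variables\n envFrom:\n - secretRef:\n name: mysql-secret\n
Inspect the result with kubectl logs mysql-test-connection.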
OpenSearch\u00ae is an open source search and analytics suite including search engine, NoSQL document database, and visualization interface. OpenSearch offers a distributed, full-text search engine based on Apache Lucene\u00ae with a RESTful API interface and support for JSON documents.
Note
Before going through this guide, make sure you have a Kubernetes cluster with the operator installed and a Kubernetes Secret with an Aiven authentication token.
"},{"location":"resources/opensearch.html#creating-an-opensearch-instance","title":"Creating an OpenSearch instance","text":"1. Create a file named os-sample.yaml
, and add the following content:
apiVersion: aiven.io/v1alpha1\nkind: OpenSearch\nmetadata:\n name: os-sample\nspec:\n # gets the authentication token from the `aiven-token` Secret\n authSecretRef:\n name: aiven-token\n key: token\n\n # outputs the OpenSearch connection on the `os-secret` Secret\n connInfoSecretTarget:\n name: os-secret\n\n # add your Project name here\n project: <your-project-name>\n\n # cloud provider and plan of your choice\n # you can check all of the possibilities here https://aiven.io/pricing\n cloudName: google-europe-west1\n plan: startup-4\n\n # general Aiven configuration\n maintenanceWindowDow: friday\n maintenanceWindowTime: 23:00:00\n
2. Create the service by applying the configuration:
kubectl apply -f os-sample.yaml \n
3. Review the resource you created with this command:
kubectl describe opensearch.aiven.io os-sample\n
The output is similar to the following:
...\nStatus:\n Conditions:\n Last Transition Time: 2023-01-19T14:41:43Z\n Message: Instance was created or update on Aiven side\n Reason: Created\n Status: True\n Type: Initialized\n Last Transition Time: 2023-01-19T14:41:43Z\n Message: Instance was created or update on Aiven side, status remains unknown\n Reason: Created\n Status: Unknown\n Type: Running\n State: REBUILDING\n...\n
The resource will be in the REBUILDING
state for a few minutes. Once the state changes to RUNNING
, you can access the resource.
For your convenience, the operator automatically stores the OpenSearch connection information in a Secret created with the name specified on the connInfoSecretTarget
field.
To view the details of the Secret, use the following command:
kubectl describe secret os-secret \n
The output is similar to the following:
Name: os-secret\nNamespace: default\nLabels: <none>\nAnnotations: <none>\n\nType: Opaque\n\nData\n====\nHOST: 61 bytes\nPASSWORD: 24 bytes\nPORT: 5 bytes\nUSER: 8 bytes\n
You can use jq to quickly decode the Secret:
kubectl get secret os-secret -o json | jq '.data | map_values(@base64d)'\n
The output is similar to the following:
{\n \"HOST\": \"os-sample-your-project.aivencloud.com\",\n \"PASSWORD\": \"<secret>\",\n \"PORT\": \"13041\",\n \"USER\": \"avnadmin\"\n}\n
"},{"location":"resources/opensearch.html#creating-an-opensearch-user","title":"Creating an OpenSearch user","text":"You can create service users for your instance of Aiven for OpenSearch. Service users are unique to this instance and are not shared with any other services.
1. Create a file named os-service-user.yaml:
apiVersion: aiven.io/v1alpha1\nkind: ServiceUser\nmetadata:\n name: os-service-user\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n connInfoSecretTarget:\n name: os-service-user-secret\n\n project: <your-project-name>\n serviceName: os-sample\n
2. Create the user by applying the configuration:
kubectl apply -f os-service-user.yaml\n
The ServiceUser
resource generates a Secret with connection information.
3. View the details of the Secret using the following command:
kubectl get secret os-service-user-secret -o json | jq '.data | map_values(@base64d)'\n
The output is similar to the following:
{\n \"ACCESS_CERT\": \"<secret>\",\n \"ACCESS_KEY\": \"<secret>\",\n \"CA_CERT\": \"<secret>\",\n \"HOST\": \"os-sample-your-project.aivencloud.com\",\n \"PASSWORD\": \"<secret>\",\n \"PORT\": \"14609\",\n \"USERNAME\": \"os-service-user\"\n}\n
You can connect to the OpenSearch instance using these credentials and the host information from the os-secret
Secret.
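As a quick connectivity check, you can call the OpenSearch REST API from a short-lived Pod using the credentials from the os-secret Secret. This is a minimal sketch; the curl image tag is illustrative, and a password containing URL-special characters would need escaping:
apiVersion: v1\nkind: Pod\nmetadata:\n name: os-test-connection\nspec:\n restartPolicy: Never\n containers:\n - image: curlimages/curl:8.5.0\n name: curl\n # calls the cluster root endpoint with basic auth over HTTPS\n command: [ 'sh', '-c', 'curl -s \"https://$USER:$PASSWORD@$HOST:$PORT\"' ]\n\n # the os-secret Secret becomes environment variables\n envFrom:\n - secretRef:\n name: os-secret\n
kubectl logs os-test-connection should print the cluster information JSON.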
PostgreSQL is an open source relational database. It's ideal for organizations that need a well-organized tabular datastore. On top of the strict table and column formats, PostgreSQL also offers solutions for nested datasets with the native jsonb
format and an advanced set of extensions, including PostGIS, a spatial database extender for location queries. Aiven for PostgreSQL is the perfect fit for your relational data.
With Aiven Kubernetes Operator, you can manage Aiven for PostgreSQL through the well-defined Kubernetes API.
Note
Before going through this guide, make sure you have a Kubernetes cluster with the operator installed, and a Kubernetes Secret with an Aiven authentication token.
"},{"location":"resources/postgresql.html#creating-a-postgresql-instance","title":"Creating a PostgreSQL instance","text":"1. Create a file named pg-sample.yaml
with the following content:
apiVersion: aiven.io/v1alpha1\nkind: PostgreSQL\nmetadata:\n name: pg-sample\nspec:\n\n # gets the authentication token from the `aiven-token` Secret\n authSecretRef:\n name: aiven-token\n key: token\n\n # outputs the PostgreSQL connection on the `pg-connection` Secret\n connInfoSecretTarget:\n name: pg-connection\n\n # add your Project name here\n project: <your-project-name>\n\n # cloud provider and plan of your choice\n # you can check all of the possibilities here https://aiven.io/pricing\n cloudName: google-europe-west1\n plan: startup-4\n\n # general Aiven configuration\n maintenanceWindowDow: friday\n maintenanceWindowTime: 23:00:00\n\n # specific PostgreSQL configuration\n userConfig:\n pg_version: '11'\n
2. Create the service by applying the configuration:
kubectl apply -f pg-sample.yaml\n
3. Review the resource you created with the following command:
kubectl get postgresqls.aiven.io pg-sample\n
The output is similar to the following:
NAME PROJECT REGION PLAN STATE\npg-sample your-project google-europe-west1 startup-4 RUNNING\n
The resource can stay in the BUILDING
state for a couple of minutes. Once the state changes to RUNNING
, you are ready to access it.
For your convenience, the operator automatically stores the PostgreSQL connection information in a Secret created with the name specified on the connInfoSecretTarget
field.
kubectl describe secret pg-connection\n
The output is similar to the following:
Name: pg-connection\nNamespace: default\nAnnotations: <none>\n\nType: Opaque\n\nData\n====\nDATABASE_URI: 107 bytes\nPGDATABASE: 9 bytes\nPGHOST: 38 bytes\nPGPASSWORD: 16 bytes\nPGPORT: 5 bytes\nPGSSLMODE: 7 bytes\nPGUSER: 8 bytes\n
You can use jq to quickly decode the Secret:
kubectl get secret pg-connection -o json | jq '.data | map_values(@base64d)'\n
The output is similar to the following:
{\n \"DATABASE_URI\": \"postgres://avnadmin:<secret-password>@pg-sample-your-project.aivencloud.com:13039/defaultdb?sslmode=require\",\n \"PGDATABASE\": \"defaultdb\",\n \"PGHOST\": \"pg-sample-your-project.aivencloud.com\",\n \"PGPASSWORD\": \"<secret-password>\",\n \"PGPORT\": \"13039\",\n \"PGSSLMODE\": \"require\",\n \"PGUSER\": \"avnadmin\"\n}\n
"},{"location":"resources/postgresql.html#testing-the-connection","title":"Testing the connection","text":"You can verify your PostgreSQL connection from a Kubernetes workload by deploying a Pod that runs the psql
command.
1. Create a file named pod-psql.yaml
apiVersion: v1\nkind: Pod\nmetadata:\n name: psql-test-connection\nspec:\n restartPolicy: Never\n containers:\n - image: postgres:11-alpine\n name: postgres\n command: [ 'psql', '$(DATABASE_URI)', '-c', 'SELECT version();' ]\n\n # the pg-connection Secret becomes environment variables \n envFrom:\n - secretRef:\n name: pg-connection\n
It runs once and stops, due to the restartPolicy: Never
flag.
2. Inspect the log:
kubectl logs psql-test-connection\n
The output is similar to the following:
version \n---------------------------------------------------------------------------------------------\n PostgreSQL 11.12 on x86_64-pc-linux-gnu, compiled by gcc, a 68c5366192 p 6b9244f01a, 64-bit\n(1 row)\n
You have now connected to the PostgreSQL instance and executed the SELECT version();
query.
The Database
Kubernetes resource allows you to create a logical database within the PostgreSQL instance.
Create the pg-database-sample.yaml
file with the following content:
apiVersion: aiven.io/v1alpha1\nkind: Database\nmetadata:\n name: pg-database-sample\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n # the name of the previously created PostgreSQL instance\n serviceName: pg-sample\n\n project: <your-project-name>\n lcCollate: en_US.UTF-8\n lcCtype: en_US.UTF-8\n
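Apply the configuration with:
kubectl apply -f pg-database-sample.yaml\n
Assuming databases.aiven.io is the plural resource name the operator registers, you can verify the result with kubectl get databases.aiven.io pg-database-sample.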
You can now connect to the pg-database-sample
using the credentials stored in the pg-connection
Secret.
Aiven uses the concept of service users, which allows you to create users for different services. You can create one for the PostgreSQL instance.
1. Create a file named pg-service-user.yaml
.
apiVersion: aiven.io/v1alpha1\nkind: ServiceUser\nmetadata:\n name: pg-service-user\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n connInfoSecretTarget:\n name: pg-service-user-connection\n\n project: <your-project-name>\n serviceName: pg-sample\n
2. Apply the configuration with the following command.
kubectl apply -f pg-service-user.yaml\n
The ServiceUser
resource generates a Secret with connection information, in this case named pg-service-user-connection
:
kubectl get secret pg-service-user-connection -o json | jq '.data | map_values(@base64d)'\n
The output has the password and username:
{\n \"PASSWORD\": \"<secret-password>\",\n \"USERNAME\": \"pg-service-user\"\n}\n
You can now connect to the PostgreSQL instance using the credentials generated above, and the host information from the pg-connection
Secret.
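To verify the new user, you can combine the host details from the pg-connection Secret with the service user credentials in a short-lived Pod. This is a minimal sketch; the image tag and query are illustrative:
apiVersion: v1\nkind: Pod\nmetadata:\n name: psql-service-user-test\nspec:\n restartPolicy: Never\n containers:\n - image: postgres:11-alpine\n name: postgres\n # authenticates as the service user and prints the current user\n command: [ 'sh', '-c', 'PGPASSWORD=\"$PASSWORD\" psql -h \"$PGHOST\" -p \"$PGPORT\" -U \"$USERNAME\" \"$PGDATABASE\" -c \"SELECT current_user;\"' ]\n\n # PGHOST, PGPORT and PGDATABASE come from the pg-connection Secret;\n # USERNAME and PASSWORD come from the service user Secret\n envFrom:\n - secretRef:\n name: pg-connection\n - secretRef:\n name: pg-service-user-connection\n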
Connection pooling allows you to maintain very large numbers of connections to a database while minimizing the consumption of server resources. For more information, refer to the connection pooling article in Aiven Docs. Aiven for PostgreSQL uses PGBouncer for connection pooling.
You can create a connection pool with the ConnectionPool
resource using the previously created Database
and ServiceUser
:
Create a new file named pg-connection-pool.yaml
with the following content:
apiVersion: aiven.io/v1alpha1\nkind: ConnectionPool\nmetadata:\n name: pg-connection-pool\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n connInfoSecretTarget:\n name: pg-connection-pool-connection\n\n project: <your-project-name>\n serviceName: pg-sample\n databaseName: pg-database-sample\n username: pg-service-user\n poolSize: 10\n poolMode: transaction\n
The ConnectionPool
generates a Secret with the connection info using the name from the connInfoSecretTarget.Name
field:
kubectl get secret pg-connection-pool-connection -o json | jq '.data | map_values(@base64d)' \n
The output is similar to the following:
{\n \"DATABASE_URI\": \"postgres://pg-service-user:<secret-password>@pg-sample-your-project.aivencloud.com:13040/pg-connection-pool?sslmode=require\",\n \"PGDATABASE\": \"pg-database-sample\",\n \"PGHOST\": \"pg-sample-your-project.aivencloud.com\",\n \"PGPASSWORD\": \"<secret-password>\",\n \"PGPORT\": \"13040\",\n \"PGSSLMODE\": \"require\",\n \"PGUSER\": \"pg-service-user\"\n}\n
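To test the pool end to end, you can reuse the earlier psql Pod pattern with the pooled DATABASE_URI; a minimal sketch:
apiVersion: v1\nkind: Pod\nmetadata:\n name: psql-pool-test\nspec:\n restartPolicy: Never\n containers:\n - image: postgres:11-alpine\n name: postgres\n # connects through PGBouncer using the pooled DATABASE_URI\n command: [ 'psql', '$(DATABASE_URI)', '-c', 'SELECT 1;' ]\n\n # the pg-connection-pool-connection Secret becomes environment variables\n envFrom:\n - secretRef:\n name: pg-connection-pool-connection\n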
"},{"location":"resources/postgresql.html#creating-a-postgresql-read-only-replica","title":"Creating a PostgreSQL read-only replica","text":"Read-only replicas can be used to reduce the load on the primary service by making read-only queries against the replica service.
To create a read-only replica for a PostgreSQL service, you create a second PostgreSQL service and use serviceIntegrations to replicate data from your primary service.
The example that follows creates a primary service and a read-only replica.
1. Create a new file named pg-read-replica.yaml
with the following:
apiVersion: aiven.io/v1alpha1\nkind: PostgreSQL\nmetadata:\n name: primary-pg-service\nspec:\n # gets the authentication token from the `aiven-token` Secret\n authSecretRef:\n name: aiven-token\n key: token\n\n # add your project's name here\n project: <your-project-name>\n\n # add the cloud provider and plan of your choice\n # you can see all of the options at https://aiven.io/pricing\n cloudName: google-europe-west1\n plan: startup-4\n\n # general Aiven configuration\n maintenanceWindowDow: friday\n maintenanceWindowTime: 23:00:00\n userConfig:\n pg_version: '15'\n\n---\n\napiVersion: aiven.io/v1alpha1\nkind: PostgreSQL\nmetadata:\n name: read-replica-pg\nspec:\n # gets the authentication token from the `aiven-token` Secret\n authSecretRef:\n name: aiven-token\n key: token\n\n # add your project's name here\n project: <your-project-name>\n\n # add the cloud provider and plan of your choice\n # you can see all of the options at https://aiven.io/pricing\n cloudName: google-europe-west1\n plan: startup-4\n\n # general Aiven configuration\n maintenanceWindowDow: saturday\n maintenanceWindowTime: 23:00:00\n userConfig:\n pg_version: '15'\n\n # use the read_replica integration and point it to your primary service\n serviceIntegrations:\n - integrationType: read_replica\n sourceServiceName: primary-pg-service\n
Note
You can create the replica service in a different region or on a different cloud provider.
2. Apply the configuration with the following command:
kubectl apply -f pg-read-replica.yaml\n
The output is similar to the following:
postgresql.aiven.io/primary-pg-service created\npostgresql.aiven.io/read-replica-pg created\n
3. Check the status of the primary service with the following command:
kubectl get postgresqls.aiven.io primary-pg-service\n
The output is similar to the following:
NAME PROJECT REGION PLAN STATE\nprimary-pg-service <your-project-name> google-europe-west1 startup-4 RUNNING\n
The resource can be in the BUILDING
state for a few minutes. After the state of the primary service changes to RUNNING
, the read-only replica is created. You can check the status of the replica using the same command with the name of the replica:
kubectl get postgresqls.aiven.io read-replica-pg\n
"},{"location":"resources/project-vpc.html","title":"Aiven Project VPC","text":"Virtual Private Cloud (VPC) peering is a method of connecting separate AWS, Google Cloud or Microsoft Azure private networks to each other. It makes it possible for the virtual machines in the different VPCs to talk to each other directly without going through the public internet.
Within the Aiven Kubernetes Operator, you can create a ProjectVPC
on Aiven's side to connect to your cloud provider.
Note
Before going through this guide, make sure you have a Kubernetes cluster with the operator installed, and a Kubernetes Secret with an Aiven authentication token.
"},{"location":"resources/project-vpc.html#creating-an-aiven-vpc","title":"Creating an Aiven VPC","text":"1. Create a file named vpc-sample.yaml
with the following content:
apiVersion: aiven.io/v1alpha1\nkind: ProjectVPC\nmetadata:\n name: vpc-sample\nspec:\n # gets the authentication token from the `aiven-token` Secret\n authSecretRef:\n name: aiven-token\n key: token\n\n project: <your-project-name>\n\n # creates a VPC to link an AWS account on the South Africa region\n cloudName: aws-af-south-1\n\n # the network range used by the VPC\n networkCidr: 192.168.0.0/24\n
2. Create the VPC by applying the configuration:
kubectl apply -f vpc-sample.yaml\n
3. Review the resource you created with the following command:
kubectl get projectvpcs.aiven.io vpc-sample\n
The output is similar to the following:
NAME PROJECT CLOUD NETWORK CIDR\nvpc-sample <your-project> aws-af-south-1 192.168.0.0/24\n
"},{"location":"resources/project-vpc.html#using-the-aiven-vpc","title":"Using the Aiven VPC","text":"Follow the official VPC documentation to complete the VPC peering on your cloud of choice.
"},{"location":"resources/project.html","title":"Aiven Project","text":"Note
Before going through this guide, make sure you have a Kubernetes cluster with the operator installed and a Kubernetes Secret with an Aiven authentication token.
The Project
CRD allows you to create Aiven Projects, where your resources can be located.
To create a fully working Aiven Project with the Aiven Operator you need a source Aiven Project already created with a working billing configuration, like a credit card.
Create a file named project-sample.yaml
with the following content:
apiVersion: aiven.io/v1alpha1\nkind: Project\nmetadata:\n name: project-sample\nspec:\n # the source Project to copy the billing information from\n copyFromProject: <your-source-project>\n\n authSecretRef:\n name: aiven-token\n key: token\n\n connInfoSecretTarget:\n name: project-sample\n
Apply the resource with:
kubectl apply -f project-sample.yaml\n
Verify the newly created Project:
kubectl get projects.aiven.io project-sample\n
The output is similar to the following:
NAME AGE\nproject-sample 22s\n
"},{"location":"resources/redis.html","title":"Redis","text":"Aiven for Redis\u00ae* is a fully managed in-memory NoSQL database that you can deploy in the cloud of your choice to store and access data quickly and efficiently.
Note
Before going through this guide, make sure you have a Kubernetes cluster with the operator installed and a Kubernetes Secret with an Aiven authentication token.
"},{"location":"resources/redis.html#creating-a-redis-instance","title":"Creating a Redis instance","text":"1. Create a file named redis-sample.yaml
, and add the following content:
apiVersion: aiven.io/v1alpha1\nkind: Redis\nmetadata:\n name: redis-sample\nspec:\n # gets the authentication token from the `aiven-token` Secret\n authSecretRef:\n name: aiven-token\n key: token\n\n # outputs the Redis connection on the `redis-secret` Secret\n connInfoSecretTarget:\n name: redis-secret\n\n # add your Project name here\n project: <your-project-name>\n\n # cloud provider and plan of your choice\n # you can check all of the possibilities here https://aiven.io/pricing\n cloudName: google-europe-west1\n plan: startup-4\n\n # general Aiven configuration\n maintenanceWindowDow: friday\n maintenanceWindowTime: 23:00:00\n\n # specific Redis configuration\n userConfig:\n redis_maxmemory_policy: \"allkeys-random\"\n
2. Create the service by applying the configuration:
kubectl apply -f redis-sample.yaml \n
3. Review the resource you created with this command:
kubectl describe redis.aiven.io redis-sample\n
The output is similar to the following:
...\nStatus:\n Conditions:\n Last Transition Time: 2023-01-19T14:48:59Z\n Message: Instance was created or update on Aiven side\n Reason: Created\n Status: True\n Type: Initialized\n Last Transition Time: 2023-01-19T14:48:59Z\n Message: Instance was created or update on Aiven side, status remains unknown\n Reason: Created\n Status: Unknown\n Type: Running\n State: REBUILDING\n...\n
The resource will be in the REBUILDING
state for a few minutes. Once the state changes to RUNNING
, you can access the resource.
For your convenience, the operator automatically stores the Redis connection information in a Secret created with the name specified on the connInfoSecretTarget
field.
To view the details of the Secret, use the following command:
kubectl describe secret redis-secret \n
The output is similar to the following:
Name: redis-secret\nNamespace: default\nLabels: <none>\nAnnotations: <none>\n\nType: Opaque\n\nData\n====\nSSL: 8 bytes\nUSER: 7 bytes\nHOST: 60 bytes\nPASSWORD: 24 bytes\nPORT: 5 bytes\n
You can use jq to quickly decode the Secret:
kubectl get secret redis-secret -o json | jq '.data | map_values(@base64d)'\n
The output is similar to the following:
{\n \"HOST\": \"redis-sample-your-project.aivencloud.com\",\n \"PASSWORD\": \"<secret-password>\",\n \"PORT\": \"14610\",\n \"SSL\": \"required\",\n \"USER\": \"default\"\n}\n
"},{"location":"resources/service-integrations.html","title":"Service Integrations","text":"Service Integrations provide additional functionality and features by connecting different Aiven services together.
See our Getting Started with Service Integrations guide for more information.
Note
Before going through this guide, make sure you have a Kubernetes cluster with the operator installed, and a Kubernetes Secret with an Aiven authentication token.
"},{"location":"resources/service-integrations.html#send-kafka-logs-to-a-kafka-topic","title":"Send Kafka logs to a Kafka Topic","text":"This integration allows you to send Kafka service logs to a specific Kafka Topic.
First, let's create a Kafka service and a topic.
1. Create a new file named kafka-sample-topic.yaml
with the following content:
apiVersion: aiven.io/v1alpha1\nkind: Kafka\nmetadata:\n name: kafka-sample\nspec:\n # gets the authentication token from the `aiven-token` Secret\n authSecretRef:\n name: aiven-token\n key: token\n\n # outputs the Kafka connection on the `kafka-connection` Secret\n connInfoSecretTarget:\n name: kafka-auth\n\n # add your Project name here\n project: <your-project-name>\n\n # cloud provider and plan of your choice\n # you can check all of the possibilities here https://aiven.io/pricing\n cloudName: google-europe-west1\n plan: startup-2\n\n # general Aiven configuration\n maintenanceWindowDow: friday\n maintenanceWindowTime: 23:00:00\n\n # specific Kafka configuration\n userConfig:\n kafka_version: '2.7'\n\n---\n\napiVersion: aiven.io/v1alpha1\nkind: KafkaTopic\nmetadata:\n name: logs\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n project: <your-project-name>\n serviceName: kafka-sample\n\n # here we can specify how many partitions the topic should have\n partitions: 3\n # and the topic replication factor\n replication: 2\n\n # we also support various topic-specific configurations\n config:\n flush_ms: 100\n
2. Create the resource on Kubernetes:
kubectl apply -f kafka-sample-topic.yaml \n
3. Now, create a ServiceIntegration
resource to send the Kafka logs to the created topic. In the same file, add the following YAML:
apiVersion: aiven.io/v1alpha1\nkind: ServiceIntegration\nmetadata:\n name: service-integration-kafka-logs\nspec:\n\n # gets the authentication token from the `aiven-token` Secret\n authSecretRef:\n name: aiven-token\n key: token\n\n project: <your-project-name>\n\n # indicates the type of the integration\n integrationType: kafka_logs\n\n # we will send the logs to the same kafka-sample instance\n # the source and destination are the same\n sourceServiceName: kafka-sample\n destinationServiceName: kafka-sample\n\n # the topic name we will send to\n kafkaLogs:\n kafka_topic: logs\n
4. Reapply the resource on Kubernetes:
kubectl apply -f kafka-sample-topic.yaml \n
5. Let's check the created service integration:
kubectl get serviceintegrations.aiven.io service-integration-kafka-logs\n
The output is similar to the following:
NAME PROJECT TYPE SOURCE SERVICE NAME DESTINATION SERVICE NAME SOURCE ENDPOINT ID DESTINATION ENDPOINT ID\nservice-integration-kafka-logs your-project kafka_logs kafka-sample kafka-sample \n
Your Kafka service logs are now being streamed to the logs
Kafka topic.
Aiven for Apache Kafka is an excellent option if you need to run Apache Kafka at scale. With Aiven Kubernetes Operator you can get up and running with a suitably sized Apache Kafka service in a few minutes.
Note
Before going through this guide, make sure you have a Kubernetes cluster with the operator installed and a Kubernetes Secret with an Aiven authentication token.
"},{"location":"resources/kafka/index.html#creating-a-kafka-instance","title":"Creating a Kafka instance","text":"1. Create a file named kafka-sample.yaml
, and add the following content:
apiVersion: aiven.io/v1alpha1\nkind: Kafka\nmetadata:\n name: kafka-sample\nspec:\n # gets the authentication token from the `aiven-token` Secret\n authSecretRef:\n name: aiven-token\n key: token\n\n # outputs the Kafka connection on the `kafka-connection` Secret\n connInfoSecretTarget:\n name: kafka-auth\n\n # add your Project name here\n project: <your-project-name>\n\n # cloud provider and plan of your choice\n # you can check all of the possibilities here https://aiven.io/pricing\n cloudName: google-europe-west1\n plan: startup-2\n\n # general Aiven configuration\n maintenanceWindowDow: friday\n maintenanceWindowTime: 23:00:00\n\n # specific Kafka configuration\n userConfig:\n kafka_version: '2.7'\n
2. Create the following resource on Kubernetes:
kubectl apply -f kafka-sample.yaml \n
3. Inspect the service created using the command below.
kubectl get kafka.aiven.io kafka-sample\n
The output has the project name and state, similar to the following:
NAME PROJECT REGION PLAN STATE\nkafka-sample <your-project> google-europe-west1 startup-2 RUNNING\n
After a couple of minutes, the STATE
field is changed to RUNNING
, and is ready to be used.
For your convenience, the operator automatically stores the Kafka connection information in a Secret created with the name specified on the connInfoSecretTarget
field.
kubectl describe secret kafka-auth \n
The output is similar to the following:
Name: kafka-auth\nNamespace: default\nAnnotations: <none>\n\nType: Opaque\n\nData\n====\nCA_CERT: 1537 bytes\nHOST: 41 bytes\nPASSWORD: 16 bytes\nPORT: 5 bytes\nUSERNAME: 8 bytes\nACCESS_CERT: 1533 bytes\nACCESS_KEY: 2484 bytes\n
You can use jq to quickly decode the Secret:
kubectl get secret kafka-auth -o json | jq '.data | map_values(@base64d)'\n
The output is similar to the following:
{\n \"CA_CERT\": \"<secret-ca-cert>\",\n \"ACCESS_CERT\": \"<secret-cert>\",\n \"ACCESS_KEY\": \"<secret-access-key>\",\n \"HOST\": \"kafka-sample-your-project.aivencloud.com\",\n \"PASSWORD\": \"<secret-password>\",\n \"PORT\": \"13041\",\n \"USERNAME\": \"avnadmin\"\n}\n
"},{"location":"resources/kafka/index.html#testing-the-connection","title":"Testing the connection","text":"You can verify your access to the Kafka cluster from a Pod using the authentication data from the kafka-auth
Secret. kcat is used for our examples below.
1. Create a file named kafka-test-connection.yaml
, and add the following content:
apiVersion: v1\nkind: Pod\nmetadata:\n name: kafka-test-connection\nspec:\n restartPolicy: Never\n containers:\n - image: edenhill/kcat:1.7.0\n name: kcat\n\n # the command below will connect to the Kafka cluster\n # and output its metadata\n command: [\n 'kcat', '-b', '$(HOST):$(PORT)',\n '-X', 'security.protocol=SSL',\n '-X', 'ssl.key.location=/kafka-auth/ACCESS_KEY',\n '-X', 'ssl.key.password=$(PASSWORD)',\n '-X', 'ssl.certificate.location=/kafka-auth/ACCESS_CERT',\n '-X', 'ssl.ca.location=/kafka-auth/CA_CERT',\n '-L'\n ]\n\n # loading the data from the Secret as environment variables\n # useful to access the Kafka information, like hostname and port\n envFrom:\n - secretRef:\n name: kafka-auth\n\n volumeMounts:\n - name: kafka-auth\n mountPath: \"/kafka-auth\"\n\n # loading the data from the Secret as files in a volume\n # useful to access the Kafka certificates \n volumes:\n - name: kafka-auth\n secret:\n secretName: kafka-auth\n
2. Apply the file.
kubectl apply -f kafka-test-connection.yaml\n
Once it has run successfully, the Pod's log contains the metadata information about the Kafka cluster:
kubectl logs kafka-test-connection \n
The output is similar to the following:
Metadata for all topics (from broker -1: ssl://kafka-sample-your-project.aivencloud.com:13041/bootstrap):\n 3 brokers:\n broker 2 at 35.205.234.70:13041\n broker 3 at 34.77.127.70:13041 (controller)\n broker 1 at 34.78.146.156:13041\n 0 topics:\n
"},{"location":"resources/kafka/index.html#creating-a-kafkatopic-and-kafkaacl","title":"Creating a KafkaTopic
and KafkaACL
","text":"To properly produce and consume content on Kafka, you need topics and ACLs. The operator supports both with the KafkaTopic
and KafkaACL
resources.
Below, here is how to create a Kafka topic named random-strings
where random string messages will be sent.
1. Create a file named kafka-topic-random-strings.yaml
with the content below:
apiVersion: aiven.io/v1alpha1\nkind: KafkaTopic\nmetadata:\n name: random-strings\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n project: <your-project-name>\n serviceName: kafka-sample\n\n # here we can specify how many partitions the topic should have\n partitions: 3\n # and the topic replication factor\n replication: 2\n\n # we also support various topic-specific configurations\n config:\n flush_ms: 100\n
2. Create the resource on Kubernetes:
kubectl apply -f kafka-topic-random-strings.yaml\n
3. Create a user and an ACL. To use the Kafka topic, create a new user with the ServiceUser
resource (in order to avoid using the avnadmin
superuser), and the KafkaACL
to allow the user access to the topic.
In a file named kafka-acl-user-crab.yaml
, add the following two resources:
apiVersion: aiven.io/v1alpha1\nkind: ServiceUser\nmetadata:\n # the name of our user \ud83e\udd80\n name: crab\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n # the Secret name we will store the users' connection information\n # looks exactly the same as the Secret generated when creating the Kafka cluster\n # we will use this Secret to produce and consume events later!\n connInfoSecretTarget:\n name: kafka-crab-connection\n\n # the Aiven project the user is related to\n project: <your-project-name>\n\n # the name of our Kafka Service\n serviceName: kafka-sample\n\n---\n\napiVersion: aiven.io/v1alpha1\nkind: KafkaACL\nmetadata:\n name: crab\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n project: <your-project-name>\n serviceName: kafka-sample\n\n # the username from the ServiceUser above\n username: crab\n\n # the ACL allows to produce and consume on the topic\n permission: readwrite\n\n # specify the topic we created before\n topic: random-strings\n
To create the crab
user and its permissions, execute the following command:
kubectl apply -f kafka-acl-user-crab.yaml\n
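You can confirm that both resources were created; the plural resource names below are assumed to be the ones the operator registers:
kubectl get serviceusers.aiven.io crab\nkubectl get kafkaacls.aiven.io crab\n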
"},{"location":"resources/kafka/index.html#producing-and-consuming-events","title":"Producing and consuming events","text":"Using the previously created KafkaTopic
, ServiceUser
, KafkaACL
, you can produce and consume events.
You can use kcat to produce a message into Kafka, with the -t random-strings
argument selecting the desired topic and the content of the /etc/issue
file as the message body.
1. Create a kafka-crab-produce.yaml
file with the content below:
apiVersion: v1\nkind: Pod\nmetadata:\n name: kafka-crab-produce\nspec:\n restartPolicy: Never\n containers:\n - image: edenhill/kcat:1.7.0\n name: kcat\n\n # the command below will produce a message with the /etc/issue file content\n command: [\n 'kcat', '-b', '$(HOST):$(PORT)',\n '-X', 'security.protocol=SSL',\n '-X', 'ssl.key.location=/crab-auth/ACCESS_KEY',\n '-X', 'ssl.key.password=$(PASSWORD)',\n '-X', 'ssl.certificate.location=/crab-auth/ACCESS_CERT',\n '-X', 'ssl.ca.location=/crab-auth/CA_CERT',\n '-P', '-t', 'random-strings', '/etc/issue',\n ]\n\n # loading the crab user data from the Secret as environment variables\n # useful to access the Kafka information, like hostname and port\n envFrom:\n - secretRef:\n name: kafka-crab-connection\n\n volumeMounts:\n - name: crab-auth\n mountPath: \"/crab-auth\"\n\n # loading the crab user information from the Secret as files in a volume\n # useful to access the Kafka certificates \n volumes:\n - name: crab-auth\n secret:\n secretName: kafka-crab-connection\n
2. Create the Pod with the following command:
kubectl apply -f kafka-crab-produce.yaml\n
Now your event is stored in Kafka.
To consume a message, you can use a graphical interface called Kowl. It allows you to explore information about our Kafka cluster, such as brokers, topics, or consumer groups.
1. Create a Kubernetes Pod and service to deploy and access Kowl. Create a file named kafka-crab-consume.yaml
with the content below:
apiVersion: v1\nkind: Pod\nmetadata:\n name: kafka-crab-consume\n labels:\n app: kafka-crab-consume\nspec:\n containers:\n - image: quay.io/cloudhut/kowl:v1.4.0\n name: kowl\n\n # kowl configuration values\n env:\n - name: KAFKA_TLS_ENABLED\n value: 'true'\n\n - name: KAFKA_BROKERS\n value: $(HOST):$(PORT)\n - name: KAFKA_TLS_PASSPHRASE\n value: $(PASSWORD)\n\n - name: KAFKA_TLS_CAFILEPATH\n value: /crab-auth/CA_CERT\n - name: KAFKA_TLS_CERTFILEPATH\n value: /crab-auth/ACCESS_CERT\n - name: KAFKA_TLS_KEYFILEPATH\n value: /crab-auth/ACCESS_KEY\n\n # inject all connection information as environment variables\n envFrom:\n - secretRef:\n name: kafka-crab-connection\n\n volumeMounts:\n - name: crab-auth\n mountPath: /crab-auth\n\n # loading the crab user information from the Secret as files in a volume\n # useful to access the Kafka certificates \n volumes:\n - name: crab-auth\n secret:\n secretName: kafka-crab-connection\n\n---\n\n# we will be using a simple service to access Kowl on port 8080\napiVersion: v1\nkind: Service\nmetadata:\n name: kafka-crab-consume\nspec:\n selector:\n app: kafka-crab-consume\n ports:\n - port: 8080\n targetPort: 8080\n
2. Create the resources with:
kubectl apply -f kafka-crab-consume.yaml\n
3. In another terminal create a port-forward tunnel to your Pod:
kubectl port-forward kafka-crab-consume 8080:8080\n
4. In the browser of your choice, access the http://localhost:8080 address. You now see a page with the random-strings
topic listed:
5. Click the topic name to see the message.
You have now consumed the message.
"},{"location":"resources/kafka/connect.html","title":"Kafka Connect","text":"Aiven for Apache Kafka Connect is a framework and a runtime for integrating Kafka with other systems. Kafka connectors can either be a source (for pulling data from other systems into Kafka) or sink (for pushing data into other systems from Kafka).
This section involves a few different Kubernetes CRDs: 1. A KafkaService
service with a KafkaTopic
2. A KafkaConnect
service 3. A ServiceIntegration
to integrate the Kafka
and KafkaConnect
services 4. A PostgreSQL
used as a sink to receive messages from Kafka
5. A KafkaConnector
to finally connect the Kafka
with the PostgreSQL
Create a file named kafka-sample-connect.yaml
with the following content:
apiVersion: aiven.io/v1alpha1\nkind: Kafka\nmetadata:\n name: kafka-sample-connect\nspec:\n # gets the authentication token from the `aiven-token` Secret\n authSecretRef:\n name: aiven-token\n key: token\n\n # outputs the Kafka connection on the `kafka-connection` Secret\n connInfoSecretTarget:\n name: kafka-auth\n\n # add your Project name here\n project: <your-project-name>\n\n # cloud provider and plan of your choice\n # you can check all of the possibilities here https://aiven.io/pricing\n cloudName: google-europe-west1\n plan: business-4\n\n # general Aiven configuration\n maintenanceWindowDow: friday\n maintenanceWindowTime: 23:00:00\n\n # specific Kafka configuration\n userConfig:\n kafka_version: '2.7'\n kafka_connect: true\n\n---\n\napiVersion: aiven.io/v1alpha1\nkind: KafkaTopic\nmetadata:\n name: kafka-topic-connect\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n project: <your-project-name>\n serviceName: kafka-sample-connect\n\n replication: 2\n partitions: 1\n
Next, create a file named kafka-connect.yaml
and add the following KafkaConnect
resource:
apiVersion: aiven.io/v1alpha1\nkind: KafkaConnect\nmetadata:\n name: kafka-connect\nspec:\n # gets the authentication token from the `aiven-token` Secret\n authSecretRef:\n name: aiven-token\n key: token\n\n # add your Project name here\n project: <your-project-name>\n\n # cloud provider and plan of your choice\n # you can check all of the possibilities here https://aiven.io/pricing\n cloudName: google-europe-west1\n plan: startup-4\n\n # general Aiven configuration\n maintenanceWindowDow: friday\n maintenanceWindowTime: 23:00:00\n
Now let's create a ServiceIntegration
. It will use the fields sourceServiceName
and destinationServiceName
to integrate the previously created kafka-sample-connect
and kafka-connect
. Open a new file named service-integration-connect.yaml
and add the content below:
apiVersion: aiven.io/v1alpha1\nkind: ServiceIntegration\nmetadata:\n name: service-integration-kafka-connect\nspec:\n\n # gets the authentication token from the `aiven-token` Secret\n authSecretRef:\n name: aiven-token\n key: token\n\n project: <your-project-name>\n\n # indicates the type of the integration\n integrationType: kafka_connect\n\n # we will send messages from the `kafka-sample-connect` to `kafka-connect`\n sourceServiceName: kafka-sample-connect\n destinationServiceName: kafka-connect\n
Let's add an Aiven for PostgreSQL service. It will be the service used as a sink, receiving messages from the kafka-sample-connect
cluster. Create a file named pg-sample-connect.yaml
with the content below:
apiVersion: aiven.io/v1alpha1\nkind: PostgreSQL\nmetadata:\n name: pg-connect\nspec:\n\n # gets the authentication token from the `aiven-token` Secret\n authSecretRef:\n name: aiven-token\n key: token\n\n # outputs the PostgreSQL connection on the `pg-connection` Secret\n connInfoSecretTarget:\n name: pg-connection\n\n # add your Project name here\n project: <your-project-name>\n\n # cloud provider and plan of your choice\n # you can check all of the possibilities here https://aiven.io/pricing\n cloudName: google-europe-west1\n plan: startup-4\n\n # general Aiven configuration\n maintenanceWindowDow: friday\n maintenanceWindowTime: 23:00:00\n
Finally, let's add the glue of everything: a KafkaConnector
. As described in the specification, it will receive messages from the kafka-sample-connect
and send them to the pg-connect
service. Check our official documentation for more connectors.
Create a file named kafka-connector-connect.yaml
with the content below:
apiVersion: aiven.io/v1alpha1\nkind: KafkaConnector\nmetadata:\n name: kafka-connector\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n project: <your-project-name>\n\n # the Kafka cluster name\n serviceName: kafka-sample-connect\n\n # the connector we will be using\n connectorClass: io.aiven.connect.jdbc.JdbcSinkConnector\n\n userConfig:\n auto.create: \"true\"\n\n # constructs the pg-connect connection information\n connection.url: 'jdbc:postgresql://{{ fromSecret \"pg-connection\" \"PGHOST\"}}:{{ fromSecret \"pg-connection\" \"PGPORT\" }}/{{ fromSecret \"pg-connection\" \"PGDATABASE\" }}'\n connection.user: '{{ fromSecret \"pg-connection\" \"PGUSER\" }}'\n connection.password: '{{ fromSecret \"pg-connection\" \"PGPASSWORD\" }}'\n\n # specify which topics it will watch\n topics: kafka-topic-connect\n\n key.converter: org.apache.kafka.connect.json.JsonConverter\n value.converter: org.apache.kafka.connect.json.JsonConverter\n value.converter.schemas.enable: \"true\"\n
With all the files created, apply the new Kubernetes resources:
kubectl apply \\\n -f kafka-sample-connect.yaml \\\n -f kafka-connect.yaml \\\n -f service-integration-connect.yaml \\\n -f pg-sample-connect.yaml \\\n -f kafka-connector-connect.yaml\n
It will take some time for all the services to be up and running. You can check their status with the following command:
kubectl get \\\n kafkas.aiven.io/kafka-sample-connect \\\n kafkaconnects.aiven.io/kafka-connect \\\n postgresqls.aiven.io/pg-connect \\\n kafkaconnectors.aiven.io/kafka-connector\n
The output is similar to the following:
NAME PROJECT REGION PLAN STATE\nkafka.aiven.io/kafka-sample-connect your-project google-europe-west1 business-4 RUNNING\n\nNAME STATE\nkafkaconnect.aiven.io/kafka-connect RUNNING\n\nNAME PROJECT REGION PLAN STATE\npostgresql.aiven.io/pg-connect your-project google-europe-west1 startup-4 RUNNING\n\nNAME SERVICE NAME PROJECT CONNECTOR CLASS STATE TASKS TOTAL TASKS RUNNING\nkafkaconnector.aiven.io/kafka-connector kafka-sample-connect your-project io.aiven.connect.jdbc.JdbcSinkConnector RUNNING 1 1\n
The deployment is finished when all services have the state RUNNING
."},{"location":"resources/kafka/connect.html#testing","title":"Testing","text":"To test the connection integration, let's produce a Kafka message using kcat from within the Kubernetes cluster. We will deploy a Pod responsible for crafting a message and sending to the Kafka cluster, using the kafka-auth
secret generate by the Kafka
CRD.
Create a new file named kcat-connect.yaml
and add the content below:
apiVersion: v1\nkind: Pod\nmetadata:\n name: kafka-message\nspec:\n restartPolicy: Never\n containers:\n - image: edenhill/kcat:1.7.0\n name: kcat\n\n command: ['/bin/sh']\n args: [\n '-c',\n 'echo {\\\"schema\\\":{\\\"type\\\":\\\"struct\\\",\\\"fields\\\":[{ \\\"field\\\": \\\"text\\\", \\\"type\\\": \\\"string\\\", \\\"optional\\\": false } ] }, \\\"payload\\\": { \\\"text\\\": \\\"Hello World\\\" } } > /tmp/msg;\n\n kcat\n -b $(HOST):$(PORT)\n -X security.protocol=SSL\n -X ssl.key.location=/kafka-auth/ACCESS_KEY\n -X ssl.key.password=$(PASSWORD)\n -X ssl.certificate.location=/kafka-auth/ACCESS_CERT\n -X ssl.ca.location=/kafka-auth/CA_CERT\n -P -t kafka-topic-connect /tmp/msg'\n ]\n\n envFrom:\n - secretRef:\n name: kafka-auth\n\n volumeMounts:\n - name: kafka-auth\n mountPath: \"/kafka-auth\"\n\n volumes:\n - name: kafka-auth\n secret:\n secretName: kafka-auth\n
Apply the file with:
kubectl apply -f kcat-connect.yaml\n
The Pod will execute the commands and finish. You can confirm its Completed
state with:
kubectl get pod kafka-message\n
The output is similar to the following:
NAME READY STATUS RESTARTS AGE\nkafka-message 0/1 Completed 0 5m35s\n
If everything went smoothly, we should have our produced message in the PostgreSQL service. Let's check that out.
Create a file named psql-connect.yaml
with the content below:
apiVersion: v1\nkind: Pod\nmetadata:\n name: psql-connect\nspec:\n restartPolicy: Never\n containers:\n - image: postgres:13\n name: postgres\n # \"kafka-topic-connect\" is the table automatically created by KafkaConnect\n command: ['psql', '$(DATABASE_URI)', '-c', 'SELECT * from \"kafka-topic-connect\";']\n\n envFrom:\n - secretRef:\n name: pg-connection\n
Apply the file with:
kubectl apply -f psql-connect.yaml\n
After a couple of seconds, inspect its log with this command:
kubectl logs psql-connect \n
The output is similar to the following:
text \n-------------\n Hello World\n(1 row)\n
"},{"location":"resources/kafka/connect.html#clean-up","title":"Clean up","text":"To clean up all the created resources, use the following command:
kubectl delete \\\n -f kafka-sample-connect.yaml \\\n -f kafka-connect.yaml \\\n -f service-integration-connect.yaml \\\n -f pg-sample-connect.yaml \\\n -f kafka-connector-connect.yaml \\\n -f kcat-connect.yaml \\\n -f psql-connect.yaml\n
"},{"location":"resources/kafka/schema.html","title":"Kafka Schema","text":""},{"location":"resources/kafka/schema.html#creating-a-kafkaschema","title":"Creating a KafkaSchema
","text":"Aiven develops and maintain Karapace, an open source implementation of Kafka REST and schema registry. Is available out of the box for our managed Kafka service.
The schema registry address and authentication is the same as the Kafka broker, the only different is the usage of the port 13044.
First, let's create an Aiven for Apache Kafka service.
1. Create a file named kafka-sample-schema.yaml
and add the content below:
apiVersion: aiven.io/v1alpha1\nkind: Kafka\nmetadata:\n name: kafka-sample-schema\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n connInfoSecretTarget:\n name: kafka-auth\n\n project: <your-project-name>\n cloudName: google-europe-west1\n plan: startup-2\n maintenanceWindowDow: friday\n maintenanceWindowTime: 23:00:00\n\n userConfig:\n kafka_version: '2.7'\n\n # this flag enables the Schema registry\n schema_registry: true\n
2. Apply the changes with the following command:
kubectl apply -f kafka-sample-schema.yaml\n
Now, let's create the schema itself.
1. Create a new file named kafka-schema.yaml
and add the YAML content below:
apiVersion: aiven.io/v1alpha1\nkind: KafkaSchema\nmetadata:\n name: kafka-schema\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n project: <your-project-name>\n serviceName: kafka-sample-schema\n\n # the name of the Schema\n subjectName: MySchema\n\n # the schema itself, in JSON format\n schema: |\n {\n \"type\": \"record\",\n \"name\": \"MySchema\",\n \"fields\": [\n {\n \"name\": \"field\",\n \"type\": \"string\"\n }\n ]\n }\n\n # sets the schema compatibility level \n compatibilityLevel: BACKWARD\n
2. Create the schema with the command:
kubectl apply -f kafka-schema.yaml\n
3. Review the resource you created with the following command:
kubectl get kafkaschemas.aiven.io kafka-schema\n
The output is similar to the following:
NAME SERVICE NAME PROJECT SUBJECT COMPATIBILITY LEVEL VERSION\nkafka-schema kafka-sample-schema <your-project> MySchema BACKWARD 1\n
Now you can follow the instructions on using a schema registry in Java to work with the schema you created.
"}]} \ No newline at end of file +var __index = {"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"index.html","title":"Welcome to Aiven Operator for Kubernetes","text":"Provision and manage Aiven services from your Kubernetes cluster.
"},{"location":"index.html#what-is-aiven","title":"What is Aiven?","text":"Aiven offers managed services for the best open source data technologies, on a cloud of your choice.
We offer multiple cloud options because we believe that everyone should have access to great data platforms wherever they host their applications. Our customers tell us they love it because they know that they aren\u2019t locked in to one particular cloud platform for all their data needs.
"},{"location":"index.html#contributing","title":"Contributing","text":"The contribution guide covers everything you need to know about how you can contribute to Aiven Operator for Kubernetes. The developer guide will help you onboard as a developer.
"},{"location":"authentication.html","title":"Authenticating","text":"To get authenticated and authorized, set up the communication between the Aiven Operator for Kubernetes and Aiven by using a token stored in a Kubernetes secret. You can then refer to the secret name on every custom resource in the authSecretRef
field.
If you don't have an Aiven account yet, sign up here for a free trial. \ud83e\udd80
1. Generate an authentication token
Refer to our documentation article to generate your authentication token.
2. Create the Kubernetes Secret
The following command creates a secret named aiven-token
with a token
field containing the authentication token:
kubectl create secret generic aiven-token --from-literal=token=\"<your-token-here>\"\n
When managing your Aiven resources, we will be using the created Secret in the authSecretRef
field. It will look like the example below:
apiVersion: aiven.io/v1alpha1\nkind: PostgreSQL\nmetadata:\n name: pg-sample\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n [ ... ]\n
Also, note that within Aiven, all resources are conceptually inside a Project. By default, a random project name is generated when you signup, but you can also create new projects.
The Project name is required in most of the resources. It will look like the example below:
apiVersion: aiven.io/v1alpha1\nkind: PostgreSQL\nmetadata:\n name: pg-sample\nspec:\n project: <your-project-name-here>\n [ ... ]\n
"},{"location":"changelog.html","title":"Changelog","text":""},{"location":"changelog.html#v0160-2023-12-07","title":"v0.16.0 - 2023-12-07","text":"Preconditions
, CreateOrUpdate
, Delete
. Thanks to @ataraxKafka
field userConfig.kafka.transaction_partition_verification_enable
, type boolean
: Enable verification that checks that the partition has been added to the transaction before writing transactional records to the partitionCassandra
field userConfig.service_log
, type boolean
: Store logs for the service so that they are available in the HTTP API and consoleClickhouse
field userConfig.service_log
, type boolean
: Store logs for the service so that they are available in the HTTP API and consoleGrafana
field userConfig.service_log
, type boolean
: Store logs for the service so that they are available in the HTTP API and consoleKafkaConnect
field userConfig.service_log
, type boolean
: Store logs for the service so that they are available in the HTTP API and consoleKafka
field userConfig.kafka_rest_config.name_strategy_validation
, type boolean
: If true, validate that given schema is registered under expected subject name by the used name strategy when producing messagesKafka
field userConfig.service_log
, type boolean
: Store logs for the service so that they are available in the HTTP API and consoleMySQL
field userConfig.service_log
, type boolean
: Store logs for the service so that they are available in the HTTP API and consoleOpenSearch
field userConfig.service_log
, type boolean
: Store logs for the service so that they are available in the HTTP API and consolePostgreSQL
field userConfig.pg_qualstats
, type object
: System-wide settings for the pg_qualstats extensionPostgreSQL
field userConfig.service_log
, type boolean
: Store logs for the service so that they are available in the HTTP API and consoleRedis
field userConfig.service_log
, type boolean
: Store logs for the service so that they are available in the HTTP API and consoleServiceIntegration
: do not send empty user config to the API string
type fields to the documentationClickhouse
field userConfig.private_access.clickhouse_mysql
, type boolean
: Allow clients to connect to clickhouse_mysql with a DNS name that always resolves to the service's private IP addressesClickhouse
field userConfig.privatelink_access.clickhouse_mysql
, type boolean
: Enable clickhouse_mysqlClickhouse
field userConfig.public_access.clickhouse_mysql
, type boolean
: Allow clients to connect to clickhouse_mysql from the public internet for service nodes that are in a project VPC or another type of private networkGrafana
field userConfig.unified_alerting_enabled
, type boolean
: Enable or disable Grafana unified alerting functionalityKafka
field userConfig.aiven_kafka_topic_messages
, type boolean
: Allow access to read Kafka topic messages in the Aiven Console and REST APIKafka
field userConfig.kafka.sasl_oauthbearer_expected_audience
, type string
: The (optional) comma-delimited setting for the broker to use to verify that the JWT was issued for one of the expected audiencesKafka
field userConfig.kafka.sasl_oauthbearer_expected_issuer
, type string
: Optional setting for the broker to use to verify that the JWT was created by the expected issuerKafka
field userConfig.kafka.sasl_oauthbearer_jwks_endpoint_url
, type string
: OIDC JWKS endpoint URL. By setting this the SASL SSL OAuth2/OIDC authentication is enabledKafka
field userConfig.kafka.sasl_oauthbearer_sub_claim_name
, type string
: Name of the scope from which to extract the subject claim from the JWT. Defaults to subKafka
field userConfig.kafka_version
: enum [3.1, 3.3, 3.4, 3.5]
\u2192 [3.1, 3.3, 3.4, 3.5, 3.6]
Kafka
field userConfig.tiered_storage.local_cache.size
: deprecatedOpenSearch
field userConfig.opensearch.indices_memory_max_index_buffer_size
, type integer
: Absolute value. Default is unbound. Doesn't work without indices.memory.index_buffer_sizeOpenSearch
field userConfig.opensearch.indices_memory_min_index_buffer_size
, type integer
: Absolute value. Default is 48mb. Doesn't work without indices.memory.index_buffer_sizeOpenSearch
field userConfig.opensearch.auth_failure_listeners.internal_authentication_backend_limiting.authentication_backend
: enum [internal]
OpenSearch
field userConfig.opensearch.auth_failure_listeners.internal_authentication_backend_limiting.type
: enum [username]
OpenSearch
field userConfig.opensearch.auth_failure_listeners.ip_rate_limiting.type
: enum [ip]
OpenSearch
field userConfig.opensearch.search_max_buckets
: maximum 65536
\u2192 1000000
ServiceIntegration
field kafkaMirrormaker.kafka_mirrormaker.producer_max_request_size
: maximum 67108864
\u2192 268435456
projectVpcId
and projectVPCRef
mutablenil
user config conversionCassandra
kind option additional_backup_regions
Grafana
kind option auto_login
Kafka
kind properties log_local_retention_bytes
, log_local_retention_ms
Kafka
kind option remote_log_storage_system_enable
OpenSearch
kind option auth_failure_listeners
OpenSearch
kind Index State Management optionsKafka
Kafka
version 3.5
Kafka
spec property scheduled_rebalance_max_delay_ms
Kafka
spec property remote_log_storage_system_enable
KafkaConnect
spec property scheduled_rebalance_max_delay_ms
OpenSearch
spec property openid
KAFKA_SCHEMA_REGISTRY_HOST
and KAFKA_SCHEMA_REGISTRY_PORT
for Kafka
KAFKA_CONNECT_HOST
, KAFKA_CONNECT_PORT
, KAFKA_REST_HOST
and KAFKA_REST_PORT
for Kafka
. Thanks to @Dariuschunclean_leader_election_enable
from KafkaTopic
kind configKAFKA_SASL_PORT
for Kafka
kind if SASL
authentication method is enabledredis
options to datadog ServiceIntegration
Cassandra
version 3
Kafka
versions 3.1
and 3.4
kafka_rest_config.producer_max_request_size
optionkafka_mirrormaker.producer_compression_type
optionclusterRole.create
option to Helm chart. Thanks to @ryaneorthOpenSearch.spec.userConfig.idp_pemtrustedcas_content
option. Specifies the PEM-encoded root certificate authority (CA) content for the SAML identity provider (IdP) server verification.ServiceIntegration
kind SourceProjectName
and DestinationProjectName
fieldsServiceIntegration
fields MaxLength
validationServiceIntegration
validation: multiple user configs cannot be setServiceIntegration
, should not require destinationServiceName
or sourceEndpointID
fieldServiceIntegration
, add missing external_aws_cloudwatch_metrics
type config serializationServiceIntegration
integration type listannotations
and labels
fields to connInfoSecretTarget
OpenSearch.spec.userConfig.opensearch.search_max_buckets
maximum to 65536
plan
as a required fieldminumim
, maximum
validations for number
typeip_filter
backward compatabilityclickhouseKafka.tables.data_format-property
enum RawBLOB
valueuserConfig.opensearch.email_sender_username
validation patternlog_cleaner_min_cleanable_ratio
minimum and maximum validation rules3.2
, reached EOL10
, reached EOLProjectVPC
by ID
to avoid conflicts ProjectVPC
deletion by exiting on DELETING
statusClickhouseUser
controllerClickhouseUser.spec.project
and ClickhouseUser.spec.serviceName
as immutablesignalfx
AuthSecretRef
fields marked as requireddatadog
, kafka_connect
, kafka_logs
, metrics
clickhouse_postgresql
, clickhouse_kafka
, clickhouse_kafka
, logs
, external_aws_cloudwatch_metrics
KafkaTopic.Spec.topicName
field. Unlike the metadata.name
, supports additional characters and has a longer length. KafkaTopic.Spec.topicName
replaces metadata.name
in future releases and will be marked as required.false
value for termination_protection
propertymin_cleanable_dirty_ratio
. Thanks to @TV2rdImportant: This release brings breaking changes to the userConfig
property. After new charts are installed, update your existing instances manually using the kubectl edit
command according to the API reference.
Note: It is now recommended to disable webhooks for Kubernetes version 1.25 and higher, as native CRD validation rules are used.
ip_filter
field is now of object
typeserviceIntegrations
on service types. Only the read_replica
type is available.min_cleanable_dirty_ratio
config field supportspec.disk_space
propertylinux/amd64
build. Thanks to @christoffer-eidenever
from choices of maintenance dowdevelopment
flag to configure logger's behaviormake generate-user-configs
)genericServiceHandler
to generalize service management KafkaACL
deletionProjectVPCRef
property to Kafka
, OpenSearch
, Clickhouse
and Redis
kinds to get ProjectVPC
ID when resource is readyProjectVPC
deletion, deletes by ID first if possible, then tries by nameclient.Object
storage update data lossfeatures: * add Redis CRD
improvements: * watch CRDs to reconcile token secrets
fixes: * fix RBACs of KafkaACL CRD
"},{"location":"changelog.html#v011-2021-09-13","title":"v0.1.1 - 2021-09-13","text":"improvements: * update helm installation docs
fixes: * fix typo in a kafka-connector kuttl test
"},{"location":"changelog.html#v010-2021-09-10","title":"v0.1.0 - 2021-09-10","text":"features: * initial release
"},{"location":"troubleshooting.html","title":"Troubleshooting","text":""},{"location":"troubleshooting.html#verifying-operator-status","title":"Verifying operator status","text":"Use the following checks to help you troubleshoot the Aiven Operator for Kubernetes.
"},{"location":"troubleshooting.html#checking-the-pods","title":"Checking the Pods","text":"Verify that all the operator Pods are READY
, and the STATUS
is Running
.
```sh
kubectl get pod -n aiven-operator-system
```
The output is similar to the following:
```
NAME                                                 READY   STATUS    RESTARTS   AGE
aiven-operator-controller-manager-576d944499-ggttj   1/1     Running   0          12m
```
Verify that the `cert-manager` Pods are also running.
```sh
kubectl get pod -n cert-manager
```
The output is similar to the following:
```
NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-7dd5854bb4-85cpv              1/1     Running   0          76s
cert-manager-cainjector-64c949654c-n2z8l   1/1     Running   0          77s
cert-manager-webhook-6bdffc7c9d-47w6z      1/1     Running   0          76s
```
"},{"location":"troubleshooting.html#visualizing-the-operator-logs","title":"Visualizing the operator logs","text":"Use the following command to visualize all the logs from the operator.
```sh
kubectl logs -n aiven-operator-system -l control-plane=controller-manager
```
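To stream the logs continuously while you reproduce an issue, the standard `-f` (follow) flag of `kubectl logs` works here as well:

```sh
kubectl logs -f -n aiven-operator-system -l control-plane=controller-manager
```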
"},{"location":"troubleshooting.html#verifing-the-operator-version","title":"Verifing the operator version","text":"kubectl get pod -n aiven-operator-system -l control-plane=controller-manager -o jsonpath=\"{.items[0].spec.containers[0].image}\"\n
"},{"location":"troubleshooting.html#known-issues-and-limitations","title":"Known issues and limitations","text":"We're always working to resolve problems that pop up in Aiven products. If your problem is listed below, we know about it and are working to fix it. If your problem isn't listed below, report it as an issue.
"},{"location":"troubleshooting.html#cert-manager","title":"cert-manager","text":""},{"location":"troubleshooting.html#issue","title":"Issue","text":"The following event appears on the operator Pod:
```
MountVolume.SetUp failed for volume "cert" : secret "webhook-server-cert" not found
```
"},{"location":"troubleshooting.html#impact","title":"Impact","text":"You cannot run the operator.
"},{"location":"troubleshooting.html#solution","title":"Solution","text":"Make sure that cert-manager is up and running.
```sh
kubectl get pod -n cert-manager
```
The output shows the status of each cert-manager Pod:
```
NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-7dd5854bb4-85cpv              1/1     Running   0          76s
cert-manager-cainjector-64c949654c-n2z8l   1/1     Running   0          77s
cert-manager-webhook-6bdffc7c9d-47w6z      1/1     Running   0          76s
```
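If the cert-manager Pods are missing entirely, cert-manager is likely not installed. One common way to install it is to apply the official release manifest; the version below is only an example, so check the cert-manager documentation for the current release:

```sh
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.13.2/cert-manager.yaml
```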
"},{"location":"api-reference/index.html","title":"aiven.io/v1alpha1","text":"Autogenerated from CRD files.
"},{"location":"api-reference/cassandra.html","title":"Cassandra","text":""},{"location":"api-reference/cassandra.html#usage-example","title":"Usage example","text":"apiVersion: aiven.io/v1alpha1\nkind: Cassandra\nmetadata:\n name: my-cassandra\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n connInfoSecretTarget:\n name: cassandra-secret\n prefix: MY_SECRET_PREFIX_\n annotations:\n foo: bar\n labels:\n baz: egg\n\n project: aiven-project-name\n cloudName: google-europe-west1\n plan: startup-4\n\n maintenanceWindowDow: sunday\n maintenanceWindowTime: 11:00:00\n\n userConfig:\n migrate_sstableloader: true\n public_access:\n prometheus: true\n ip_filter:\n - network: 0.0.0.0\n description: whatever\n - network: 10.20.0.0/16\n
"},{"location":"api-reference/cassandra.html#Cassandra","title":"Cassandra","text":"Cassandra is the Schema for the cassandras API.
Required

- `apiVersion` (string). Value `aiven.io/v1alpha1`.
- `kind` (string). Value `Cassandra`.
- `metadata` (object). Data that identifies the object, including a `name` string and optional `namespace`.
- `spec` (object). CassandraSpec defines the desired state of Cassandra. See below for nested schema.

## spec

Appears on `Cassandra`.

CassandraSpec defines the desired state of Cassandra.

Required

- `plan` (string, MaxLength: 128). Subscription plan.
- `project` (string, Immutable, MaxLength: 63, Format: `^[a-zA-Z0-9_-]*$`). Target project.

Optional

- `authSecretRef` (object). Authentication reference to Aiven token in a secret. See below for nested schema.
- `cloudName` (string, MaxLength: 256). Cloud the service runs in.
- `connInfoSecretTarget` (object). Information regarding secret creation. Exposed keys: `CASSANDRA_HOST`, `CASSANDRA_PORT`, `CASSANDRA_USER`, `CASSANDRA_PASSWORD`, `CASSANDRA_URI`, `CASSANDRA_HOSTS`. See below for nested schema.
- `disk_space` (string, Format: `^[1-9][0-9]*(GiB|G)*`). The disk space of the service; possible values depend on the service type, the cloud provider and the project. Reducing will result in the service re-balancing.
- `maintenanceWindowDow` (string, Enum: `monday`, `tuesday`, `wednesday`, `thursday`, `friday`, `saturday`, `sunday`). Day of week when maintenance operations should be performed: one of monday, tuesday, wednesday, etc.
- `maintenanceWindowTime` (string, MaxLength: 8). Time of day when maintenance operations should be performed. UTC time in HH:mm:ss format.
- `projectVPCRef` (object). Reference to a ProjectVPC resource to use its ID as ProjectVPCID automatically. See below for nested schema.
- `projectVpcId` (string, MaxLength: 36). Identifier of the VPC the service should be in, if any.
- `serviceIntegrations` (array of objects, Immutable, MaxItems: 1). Service integrations to specify when creating a service. Not applied after initial service creation. See below for nested schema.
- `tags` (object, AdditionalProperties: string). Tags are key-value pairs that allow you to categorize services.
- `terminationProtection` (boolean). Prevent the service from being deleted. It is recommended to have this enabled for all services.
- `userConfig` (object). Cassandra specific user configuration options. See below for nested schema.

### authSecretRef

Appears on `spec`.

Authentication reference to Aiven token in a secret.

Required

- `key` (string, MinLength: 1).
- `name` (string, MinLength: 1).

### connInfoSecretTarget

Appears on `spec`.

Information regarding secret creation. Exposed keys: `CASSANDRA_HOST`, `CASSANDRA_PORT`, `CASSANDRA_USER`, `CASSANDRA_PASSWORD`, `CASSANDRA_URI`, `CASSANDRA_HOSTS`.

Required

- `name` (string). Name of the secret resource to be created. By default, is equal to the resource name.

Optional

- `annotations` (object, AdditionalProperties: string). Annotations added to the secret.
- `labels` (object, AdditionalProperties: string). Labels added to the secret.
- `prefix` (string). Prefix for the secret's keys. Added "as is" without any transformations. By default, is equal to the kind name in uppercase + underscore, e.g. `KAFKA_`, `REDIS_`, etc.

### projectVPCRef

Appears on `spec`.

Reference to a ProjectVPC resource to use its ID as ProjectVPCID automatically.

Required

- `name` (string, MinLength: 1).

Optional

- `namespace` (string, MinLength: 1).

### serviceIntegrations

Appears on `spec`.

Service integrations to specify when creating a service. Not applied after initial service creation.

Required

- `integrationType` (string, Enum: `read_replica`).
- `sourceServiceName` (string, MinLength: 1, MaxLength: 64).

### userConfig

Appears on `spec`.

Cassandra specific user configuration options.

Optional

- `additional_backup_regions` (array of strings, MaxItems: 1). Deprecated. Additional Cloud Regions for Backup Replication.
- `backup_hour` (integer, Minimum: 0, Maximum: 23). The hour of day (in UTC) when backup for the service is started. A new backup is only started if the previous backup has already completed.
- `backup_minute` (integer, Minimum: 0, Maximum: 59). The minute of an hour when backup for the service is started. A new backup is only started if the previous backup has already completed.
- `cassandra` (object). Cassandra configuration values. See below for nested schema.
- `cassandra_version` (string, Enum: `4`, `3`). Cassandra major version.
- `ip_filter` (array of objects, MaxItems: 1024). Allow incoming connections from CIDR address block, e.g. `10.20.0.0/16`. See below for nested schema.
- `migrate_sstableloader` (boolean). Sets the service into migration mode, enabling the sstableloader utility to be used to upload Cassandra data files. Available only on service create.
- `private_access` (object). Allow access to selected service ports from private networks. See below for nested schema.
- `project_to_fork_from` (string, Immutable, MaxLength: 63). Name of another project to fork a service from. This has effect only when a new service is being created.
- `public_access` (object). Allow access to selected service ports from the public Internet. See below for nested schema.
- `service_log` (boolean). Store logs for the service so that they are available in the HTTP API and console.
- `service_to_fork_from` (string, Immutable, MaxLength: 64). Name of another service to fork from. This has effect only when a new service is being created.
- `service_to_join_with` (string, MaxLength: 64). When bootstrapping, instead of creating a new Cassandra cluster, try to join an existing one from another service. Can only be set on service creation.
- `static_ips` (boolean). Use static public IP addresses.

#### cassandra

Appears on `spec.userConfig`.

Cassandra configuration values.

Optional

- `batch_size_fail_threshold_in_kb` (integer, Minimum: 1, Maximum: 1000000). Fail any multiple-partition batch exceeding this value. 50kb (10x warn threshold) by default.
- `batch_size_warn_threshold_in_kb` (integer, Minimum: 1, Maximum: 1000000). Log a warning message on any multiple-partition batch size exceeding this value. 5kb per batch by default. Caution should be taken on increasing the size of this threshold as it can lead to node instability.
- `datacenter` (string, MaxLength: 128). Name of the datacenter to which nodes of this service belong. Can be set only when creating the service.

#### ip_filter

Appears on `spec.userConfig`.

Allow incoming connections from CIDR address block, e.g. `10.20.0.0/16`.

Required

- `network` (string, MaxLength: 43). CIDR address block.

Optional

- `description` (string, MaxLength: 1024). Description for IP filter list entry.

#### private_access

Appears on `spec.userConfig`.

Allow access to selected service ports from private networks.

Required

- `prometheus` (boolean). Allow clients to connect to prometheus with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.

#### public_access

Appears on `spec.userConfig`.

Allow access to selected service ports from the public Internet.

Required

- `prometheus` (boolean). Allow clients to connect to prometheus from the public internet for service nodes that are in a project VPC or another type of private network.

# Clickhouse

## Usage example

```yaml
apiVersion: aiven.io/v1alpha1
kind: Clickhouse
metadata:
  name: my-clickhouse
spec:
  authSecretRef:
    name: aiven-token
    key: token

  connInfoSecretTarget:
    name: clickhouse-secret
    prefix: MY_SECRET_PREFIX_
    annotations:
      foo: bar
    labels:
      baz: egg

  project: my-aiven-project
  cloudName: google-europe-west1
  plan: startup-16

  maintenanceWindowDow: friday
  maintenanceWindowTime: 23:00:00
```
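After applying a manifest like the one above, you can watch the resource until the service reaches a running state. A minimal sketch, assuming the `clickhouses.aiven.io` resource name registered by the CRD:

```sh
kubectl apply -f clickhouse.yaml
kubectl get clickhouses.aiven.io my-clickhouse -w
```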
"},{"location":"api-reference/clickhouse.html#Clickhouse","title":"Clickhouse","text":"Clickhouse is the Schema for the clickhouses API.
Required
apiVersion
(string). Value aiven.io/v1alpha1
.kind
(string). Value Clickhouse
.metadata
(object). Data that identifies the object, including a name
string and optional namespace
.spec
(object). ClickhouseSpec defines the desired state of Clickhouse. See below for nested schema.Appears on Clickhouse
.
ClickhouseSpec defines the desired state of Clickhouse.
Required
plan
(string, MaxLength: 128). Subscription plan.project
(string, Immutable, MaxLength: 63, Format: ^[a-zA-Z0-9_-]*$
). Target project.Optional
authSecretRef
(object). Authentication reference to Aiven token in a secret. See below for nested schema.cloudName
(string, MaxLength: 256). Cloud the service runs in.connInfoSecretTarget
(object). Information regarding secret creation. Exposed keys: CLICKHOUSE_HOST
, CLICKHOUSE_PORT
, CLICKHOUSE_USER
, CLICKHOUSE_PASSWORD
. See below for nested schema.disk_space
(string, Format: ^[1-9][0-9]*(GiB|G)*
). The disk space of the service, possible values depend on the service type, the cloud provider and the project. Reducing will result in the service re-balancing.maintenanceWindowDow
(string, Enum: monday
, tuesday
, wednesday
, thursday
, friday
, saturday
, sunday
). Day of week when maintenance operations should be performed. One monday, tuesday, wednesday, etc.maintenanceWindowTime
(string, MaxLength: 8). Time of day when maintenance operations should be performed. UTC time in HH:mm:ss format.projectVPCRef
(object). ProjectVPCRef reference to ProjectVPC resource to use its ID as ProjectVPCID automatically. See below for nested schema.projectVpcId
(string, MaxLength: 36). Identifier of the VPC the service should be in, if any.serviceIntegrations
(array of objects, Immutable, MaxItems: 1). Service integrations to specify when creating a service. Not applied after initial service creation. See below for nested schema.tags
(object, AdditionalProperties: string). Tags are key-value pairs that allow you to categorize services.terminationProtection
(boolean). Prevent service from being deleted. It is recommended to have this enabled for all services.userConfig
(object). OpenSearch specific user configuration options. See below for nested schema.Appears on spec
.
Authentication reference to Aiven token in a secret.
Required
key
(string, MinLength: 1). name
(string, MinLength: 1). Appears on spec
.
Information regarding secret creation. Exposed keys: CLICKHOUSE_HOST
, CLICKHOUSE_PORT
, CLICKHOUSE_USER
, CLICKHOUSE_PASSWORD
.
Required
name
(string). Name of the secret resource to be created. By default, is equal to the resource name.Optional
annotations
(object, AdditionalProperties: string). Annotations added to the secret.labels
(object, AdditionalProperties: string). Labels added to the secret.prefix
(string). Prefix for the secret's keys. Added \"as is\" without any transformations. By default, is equal to the kind name in uppercase + underscore, e.g. KAFKA_
, REDIS_
, etc.Appears on spec
.
ProjectVPCRef reference to ProjectVPC resource to use its ID as ProjectVPCID automatically.
Required
name
(string, MinLength: 1). Optional
namespace
(string, MinLength: 1). Appears on spec
.
Service integrations to specify when creating a service. Not applied after initial service creation.
Required
integrationType
(string, Enum: read_replica
). sourceServiceName
(string, MinLength: 1, MaxLength: 64). Appears on spec
.
OpenSearch specific user configuration options.
Optional
additional_backup_regions
(array of strings, MaxItems: 1). Additional Cloud Regions for Backup Replication.ip_filter
(array of objects, MaxItems: 1024). Allow incoming connections from CIDR address block, e.g. 10.20.0.0/16
. See below for nested schema.private_access
(object). Allow access to selected service ports from private networks. See below for nested schema.privatelink_access
(object). Allow access to selected service components through Privatelink. See below for nested schema.project_to_fork_from
(string, Immutable, MaxLength: 63). Name of another project to fork a service from. This has effect only when a new service is being created.public_access
(object). Allow access to selected service ports from the public Internet. See below for nested schema.service_log
(boolean). Store logs for the service so that they are available in the HTTP API and console.service_to_fork_from
(string, Immutable, MaxLength: 64). Name of another service to fork from. This has effect only when a new service is being created.static_ips
(boolean). Use static public IP addresses.Appears on spec.userConfig
.
Allow incoming connections from CIDR address block, e.g. 10.20.0.0/16
.
Required
network
(string, MaxLength: 43). CIDR address block.Optional
description
(string, MaxLength: 1024). Description for IP filter list entry.Appears on spec.userConfig
.
Allow access to selected service ports from private networks.
Optional
clickhouse
(boolean). Allow clients to connect to clickhouse with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.clickhouse_https
(boolean). Allow clients to connect to clickhouse_https with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.clickhouse_mysql
(boolean). Allow clients to connect to clickhouse_mysql with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.prometheus
(boolean). Allow clients to connect to prometheus with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.Appears on spec.userConfig
.
Allow access to selected service components through Privatelink.
Optional
clickhouse
(boolean). Enable clickhouse.clickhouse_https
(boolean). Enable clickhouse_https.clickhouse_mysql
(boolean). Enable clickhouse_mysql.prometheus
(boolean). Enable prometheus.Appears on spec.userConfig
.
Allow access to selected service ports from the public Internet.
Optional
clickhouse
(boolean). Allow clients to connect to clickhouse from the public internet for service nodes that are in a project VPC or another type of private network.clickhouse_https
(boolean). Allow clients to connect to clickhouse_https from the public internet for service nodes that are in a project VPC or another type of private network.clickhouse_mysql
(boolean). Allow clients to connect to clickhouse_mysql from the public internet for service nodes that are in a project VPC or another type of private network.prometheus
(boolean). Allow clients to connect to prometheus from the public internet for service nodes that are in a project VPC or another type of private network.apiVersion: aiven.io/v1alpha1\nkind: ClickhouseUser\nmetadata:\n name: my-clickhouse-user\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n connInfoSecretTarget:\n name: clickhouse-user-secret\n prefix: MY_SECRET_PREFIX_\n annotations:\n foo: bar\n labels:\n baz: egg\n\n project: my-aiven-project\n serviceName: my-clickhouse\n
"},{"location":"api-reference/clickhouseuser.html#ClickhouseUser","title":"ClickhouseUser","text":"ClickhouseUser is the Schema for the clickhouseusers API.
Required
apiVersion
(string). Value aiven.io/v1alpha1
.kind
(string). Value ClickhouseUser
.metadata
(object). Data that identifies the object, including a name
string and optional namespace
.spec
(object). ClickhouseUserSpec defines the desired state of ClickhouseUser. See below for nested schema.Appears on ClickhouseUser
.
ClickhouseUserSpec defines the desired state of ClickhouseUser.
Required
project
(string, Immutable, MaxLength: 63, Format: ^[a-zA-Z0-9_-]*$
). Project to link the user to.serviceName
(string, Immutable, MaxLength: 63). Service to link the user to.Optional
authSecretRef
(object). Authentication reference to Aiven token in a secret. See below for nested schema.connInfoSecretTarget
(object). Information regarding secret creation. Exposed keys: CLICKHOUSEUSER_HOST
, CLICKHOUSEUSER_PORT
, CLICKHOUSEUSER_USER
, CLICKHOUSEUSER_PASSWORD
. See below for nested schema.Appears on spec
.
Authentication reference to Aiven token in a secret.
Required
key
(string, MinLength: 1). name
(string, MinLength: 1). Appears on spec
.
Information regarding secret creation. Exposed keys: CLICKHOUSEUSER_HOST
, CLICKHOUSEUSER_PORT
, CLICKHOUSEUSER_USER
, CLICKHOUSEUSER_PASSWORD
.
Required
name
(string). Name of the secret resource to be created. By default, is equal to the resource name.Optional
annotations
(object, AdditionalProperties: string). Annotations added to the secret.labels
(object, AdditionalProperties: string). Labels added to the secret.prefix
(string). Prefix for the secret's keys. Added \"as is\" without any transformations. By default, is equal to the kind name in uppercase + underscore, e.g. KAFKA_
, REDIS_
, etc.apiVersion: aiven.io/v1alpha1\nkind: ConnectionPool\nmetadata:\n name: my-connection-pool\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n connInfoSecretTarget:\n name: connection-pool-secret\n prefix: MY_SECRET_PREFIX_\n annotations:\n foo: bar\n labels:\n baz: egg\n\n project: aiven-project-name\n serviceName: google-europe-west1\n databaseName: my-db\n username: my-user\n poolMode: transaction\n poolSize: 25\n
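A typical way to consume the pool's connection string from an application is a `secretKeyRef` against the generated Secret. Below is a minimal sketch of a container's `env` entry; the key name assumes the prefix behavior described under `connInfoSecretTarget` in this section:

```yaml
# Sketch: inject the pool URI into an application container
env:
  - name: DATABASE_URI
    valueFrom:
      secretKeyRef:
        name: connection-pool-secret
        key: MY_SECRET_PREFIX_CONNECTIONPOOL_DATABASE_URI
```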
"},{"location":"api-reference/connectionpool.html#ConnectionPool","title":"ConnectionPool","text":"ConnectionPool is the Schema for the connectionpools API.
Required
apiVersion
(string). Value aiven.io/v1alpha1
.kind
(string). Value ConnectionPool
.metadata
(object). Data that identifies the object, including a name
string and optional namespace
.spec
(object). ConnectionPoolSpec defines the desired state of ConnectionPool. See below for nested schema.Appears on ConnectionPool
.
ConnectionPoolSpec defines the desired state of ConnectionPool.
Required
databaseName
(string, MaxLength: 40). Name of the database the pool connects to.project
(string, MaxLength: 63, Format: ^[a-zA-Z0-9_-]*$
). Target project.serviceName
(string, MaxLength: 63). Service name.username
(string, MaxLength: 64). Name of the service user used to connect to the database.Optional
authSecretRef
(object). Authentication reference to Aiven token in a secret. See below for nested schema.connInfoSecretTarget
(object). Information regarding secret creation. Exposed keys: CONNECTIONPOOL_HOST
, CONNECTIONPOOL_PORT
, CONNECTIONPOOL_DATABASE
, CONNECTIONPOOL_USER
, CONNECTIONPOOL_PASSWORD
, CONNECTIONPOOL_SSLMODE
, CONNECTIONPOOL_DATABASE_URI
. See below for nested schema.poolMode
(string, Enum: session
, transaction
, statement
). Mode the pool operates in (session, transaction, statement).poolSize
(integer). Number of connections the pool may create towards the backend server.Appears on spec
.
Authentication reference to Aiven token in a secret.
Required
key
(string, MinLength: 1). name
(string, MinLength: 1). Appears on spec
.
Information regarding secret creation. Exposed keys: CONNECTIONPOOL_HOST
, CONNECTIONPOOL_PORT
, CONNECTIONPOOL_DATABASE
, CONNECTIONPOOL_USER
, CONNECTIONPOOL_PASSWORD
, CONNECTIONPOOL_SSLMODE
, CONNECTIONPOOL_DATABASE_URI
.
Required
name
(string). Name of the secret resource to be created. By default, is equal to the resource name.Optional
annotations
(object, AdditionalProperties: string). Annotations added to the secret.labels
(object, AdditionalProperties: string). Labels added to the secret.prefix
(string). Prefix for the secret's keys. Added \"as is\" without any transformations. By default, is equal to the kind name in uppercase + underscore, e.g. KAFKA_
, REDIS_
, etc.apiVersion: aiven.io/v1alpha1\nkind: Database\nmetadata:\n name: my-db\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n project: aiven-project-name\n serviceName: google-europe-west1\n\n lcCtype: en_US.UTF-8\n lcCollate: en_US.UTF-8\n
"},{"location":"api-reference/database.html#Database","title":"Database","text":"Database is the Schema for the databases API.
Required
apiVersion
(string). Value aiven.io/v1alpha1
.kind
(string). Value Database
.metadata
(object). Data that identifies the object, including a name
string and optional namespace
.spec
(object). DatabaseSpec defines the desired state of Database. See below for nested schema.Appears on Database
.
DatabaseSpec defines the desired state of Database.
Required
project
(string, MaxLength: 63, Format: ^[a-zA-Z0-9_-]*$
). Project to link the database to.serviceName
(string, MaxLength: 63). PostgreSQL service to link the database to.Optional
authSecretRef
(object). Authentication reference to Aiven token in a secret. See below for nested schema.lcCollate
(string, MaxLength: 128). Default string sort order (LC_COLLATE) of the database. Default value: en_US.UTF-8.lcCtype
(string, MaxLength: 128). Default character classification (LC_CTYPE) of the database. Default value: en_US.UTF-8.terminationProtection
(boolean). It is a Kubernetes side deletion protections, which prevents the database from being deleted by Kubernetes. It is recommended to enable this for any production databases containing critical data.Appears on spec
.
Authentication reference to Aiven token in a secret.
Required
key
(string, MinLength: 1). name
(string, MinLength: 1). apiVersion: aiven.io/v1alpha1\nkind: Grafana\nmetadata:\n name: my-grafana\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n connInfoSecretTarget:\n name: grafana-secret\n prefix: MY_SECRET_PREFIX_\n annotations:\n foo: bar\n labels:\n baz: egg\n\n project: my-aiven-project\n cloudName: google-europe-west1\n plan: startup-1\n\n maintenanceWindowDow: sunday\n maintenanceWindowTime: 11:00:00\n\n userConfig:\n public_access:\n grafana: true\n ip_filter:\n - network: 0.0.0.0\n description: whatever\n - network: 10.20.0.0/16\n
"},{"location":"api-reference/grafana.html#Grafana","title":"Grafana","text":"Grafana is the Schema for the grafanas API.
Required
apiVersion
(string). Value aiven.io/v1alpha1
.kind
(string). Value Grafana
.metadata
(object). Data that identifies the object, including a name
string and optional namespace
.spec
(object). GrafanaSpec defines the desired state of Grafana. See below for nested schema.Appears on Grafana
.
GrafanaSpec defines the desired state of Grafana.
Required
plan
(string, MaxLength: 128). Subscription plan.project
(string, Immutable, MaxLength: 63, Format: ^[a-zA-Z0-9_-]*$
). Target project.Optional
authSecretRef
(object). Authentication reference to Aiven token in a secret. See below for nested schema.cloudName
(string, MaxLength: 256). Cloud the service runs in.connInfoSecretTarget
(object). Information regarding secret creation. Exposed keys: GRAFANA_HOST
, GRAFANA_PORT
, GRAFANA_USER
, GRAFANA_PASSWORD
, GRAFANA_URI
, GRAFANA_HOSTS
. See below for nested schema.disk_space
(string, Format: ^[1-9][0-9]*(GiB|G)*
). The disk space of the service, possible values depend on the service type, the cloud provider and the project. Reducing will result in the service re-balancing.maintenanceWindowDow
(string, Enum: monday
, tuesday
, wednesday
, thursday
, friday
, saturday
, sunday
). Day of week when maintenance operations should be performed. One monday, tuesday, wednesday, etc.maintenanceWindowTime
(string, MaxLength: 8). Time of day when maintenance operations should be performed. UTC time in HH:mm:ss format.projectVPCRef
(object). ProjectVPCRef reference to ProjectVPC resource to use its ID as ProjectVPCID automatically. See below for nested schema.projectVpcId
(string, MaxLength: 36). Identifier of the VPC the service should be in, if any.serviceIntegrations
(array of objects, Immutable, MaxItems: 1). Service integrations to specify when creating a service. Not applied after initial service creation. See below for nested schema.tags
(object, AdditionalProperties: string). Tags are key-value pairs that allow you to categorize services.terminationProtection
(boolean). Prevent service from being deleted. It is recommended to have this enabled for all services.userConfig
(object). Cassandra specific user configuration options. See below for nested schema.Appears on spec
.
Authentication reference to Aiven token in a secret.
Required
key
(string, MinLength: 1). name
(string, MinLength: 1). Appears on spec
.
Information regarding secret creation. Exposed keys: GRAFANA_HOST
, GRAFANA_PORT
, GRAFANA_USER
, GRAFANA_PASSWORD
, GRAFANA_URI
, GRAFANA_HOSTS
.
Required
name
(string). Name of the secret resource to be created. By default, is equal to the resource name.Optional
annotations
(object, AdditionalProperties: string). Annotations added to the secret.labels
(object, AdditionalProperties: string). Labels added to the secret.prefix
(string). Prefix for the secret's keys. Added \"as is\" without any transformations. By default, is equal to the kind name in uppercase + underscore, e.g. KAFKA_
, REDIS_
, etc.Appears on spec
.
ProjectVPCRef reference to ProjectVPC resource to use its ID as ProjectVPCID automatically.
Required
name
(string, MinLength: 1). Optional
namespace
(string, MinLength: 1). Appears on spec
.
Service integrations to specify when creating a service. Not applied after initial service creation.
Required
integrationType
(string, Enum: read_replica
). sourceServiceName
(string, MinLength: 1, MaxLength: 64). Appears on spec
.
Cassandra specific user configuration options.
Optional
additional_backup_regions
(array of strings, MaxItems: 1). Additional Cloud Regions for Backup Replication.alerting_enabled
(boolean). Enable or disable Grafana legacy alerting functionality. This should not be enabled with unified_alerting_enabled.alerting_error_or_timeout
(string, Enum: alerting
, keep_state
). Default error or timeout setting for new alerting rules.alerting_max_annotations_to_keep
(integer, Minimum: 0, Maximum: 1000000). Max number of alert annotations that Grafana stores. 0 (default) keeps all alert annotations.alerting_nodata_or_nullvalues
(string, Enum: alerting
, no_data
, keep_state
, ok
). Default value for 'no data or null values' for new alerting rules.allow_embedding
(boolean). Allow embedding Grafana dashboards with iframe/frame/object/embed tags. Disabled by default to limit impact of clickjacking.auth_azuread
(object). Azure AD OAuth integration. See below for nested schema.auth_basic_enabled
(boolean). Enable or disable basic authentication form, used by Grafana built-in login.auth_generic_oauth
(object). Generic OAuth integration. See below for nested schema.auth_github
(object). Github Auth integration. See below for nested schema.auth_gitlab
(object). GitLab Auth integration. See below for nested schema.auth_google
(object). Google Auth integration. See below for nested schema.cookie_samesite
(string, Enum: lax
, strict
, none
). Cookie SameSite attribute: strict
prevents sending cookie for cross-site requests, effectively disabling direct linking from other sites to Grafana. lax
is the default value.custom_domain
(string, MaxLength: 255). Serve the web frontend using a custom CNAME pointing to the Aiven DNS name.dashboard_previews_enabled
(boolean). This feature is new in Grafana 9 and is quite resource intensive. It may cause low-end plans to work more slowly while the dashboard previews are rendering.dashboards_min_refresh_interval
(string, Pattern: ^[0-9]+(ms|s|m|h|d)$
, MaxLength: 16). Signed sequence of decimal numbers, followed by a unit suffix (ms, s, m, h, d), e.g. 30s, 1h.dashboards_versions_to_keep
(integer, Minimum: 1, Maximum: 100). Dashboard versions to keep per dashboard.dataproxy_send_user_header
(boolean). Send X-Grafana-User
header to data source.dataproxy_timeout
(integer, Minimum: 15, Maximum: 90). Timeout for data proxy requests in seconds.date_formats
(object). Grafana date format specifications. See below for nested schema.disable_gravatar
(boolean). Set to true to disable gravatar. Defaults to false (gravatar is enabled).editors_can_admin
(boolean). Editors can manage folders, teams and dashboards created by them.external_image_storage
(object). External image store settings. See below for nested schema.google_analytics_ua_id
(string, Pattern: ^(G|UA|YT|MO)-[a-zA-Z0-9-]+$
, MaxLength: 64). Google Analytics ID.ip_filter
(array of objects, MaxItems: 1024). Allow incoming connections from CIDR address block, e.g. 10.20.0.0/16
. See below for nested schema.metrics_enabled
(boolean). Enable Grafana /metrics endpoint.oauth_allow_insecure_email_lookup
(boolean). Enforce user lookup based on email instead of the unique ID provided by the IdP.private_access
(object). Allow access to selected service ports from private networks. See below for nested schema.privatelink_access
(object). Allow access to selected service components through Privatelink. See below for nested schema.project_to_fork_from
(string, Immutable, MaxLength: 63). Name of another project to fork a service from. This has effect only when a new service is being created.public_access
(object). Allow access to selected service ports from the public Internet. See below for nested schema.recovery_basebackup_name
(string, Pattern: ^[a-zA-Z0-9-_:.]+$
, MaxLength: 128). Name of the basebackup to restore in forked service.service_log
(boolean). Store logs for the service so that they are available in the HTTP API and console.service_to_fork_from
(string, Immutable, MaxLength: 64). Name of another service to fork from. This has effect only when a new service is being created.smtp_server
(object). SMTP server settings. See below for nested schema.static_ips
(boolean). Use static public IP addresses.unified_alerting_enabled
(boolean). Enable or disable Grafana unified alerting functionality. By default this is enabled and any legacy alerts will be migrated on upgrade to Grafana 9+. To stay on legacy alerting, set unified_alerting_enabled to false and alerting_enabled to true. See https://grafana.com/docs/grafana/latest/alerting/set-up/migrating-alerts/ for more details.user_auto_assign_org
(boolean). Auto-assign new users on signup to main organization. Defaults to false.user_auto_assign_org_role
(string, Enum: Viewer
, Admin
, Editor
). Set role for new signups. Defaults to Viewer.viewers_can_edit
(boolean). Users with view-only permission can edit but not save dashboards.Appears on spec.userConfig
.
Azure AD OAuth integration.
Required
auth_url
(string, MaxLength: 2048). Authorization URL.client_id
(string, Pattern: ^[\\040-\\176]+$
, MaxLength: 1024). Client ID from provider.client_secret
(string, Pattern: ^[\\040-\\176]+$
, MaxLength: 1024). Client secret from provider.token_url
(string, MaxLength: 2048). Token URL.Optional
allow_sign_up
(boolean). Automatically sign-up users on successful sign-in.allowed_domains
(array of strings, MaxItems: 50). Allowed domains.allowed_groups
(array of strings, MaxItems: 50). Require users to belong to one of given groups.Appears on spec.userConfig
.
Generic OAuth integration.
Required
api_url
(string, MaxLength: 2048). API URL.auth_url
(string, MaxLength: 2048). Authorization URL.client_id
(string, Pattern: ^[\\040-\\176]+$
, MaxLength: 1024). Client ID from provider.client_secret
(string, Pattern: ^[\\040-\\176]+$
, MaxLength: 1024). Client secret from provider.token_url
(string, MaxLength: 2048). Token URL.Optional
allow_sign_up
(boolean). Automatically sign-up users on successful sign-in.allowed_domains
(array of strings, MaxItems: 50). Allowed domains.allowed_organizations
(array of strings, MaxItems: 50). Require user to be member of one of the listed organizations.auto_login
(boolean). Allow users to bypass the login screen and automatically log in.name
(string, Pattern: ^[a-zA-Z0-9_\\- ]+$
, MaxLength: 128). Name of the OAuth integration.scopes
(array of strings, MaxItems: 50). OAuth scopes.Appears on spec.userConfig
.
Github Auth integration.
Required
client_id
(string, Pattern: ^[\\040-\\176]+$
, MaxLength: 1024). Client ID from provider.client_secret
(string, Pattern: ^[\\040-\\176]+$
, MaxLength: 1024). Client secret from provider.Optional
allow_sign_up
(boolean). Automatically sign-up users on successful sign-in.allowed_organizations
(array of strings, MaxItems: 50). Require users to belong to one of given organizations.team_ids
(array of integers, MaxItems: 50). Require users to belong to one of given team IDs.Appears on spec.userConfig
.
GitLab Auth integration.
Required
allowed_groups
(array of strings, MaxItems: 50). Require users to belong to one of given groups.client_id
(string, Pattern: ^[\\040-\\176]+$
, MaxLength: 1024). Client ID from provider.client_secret
(string, Pattern: ^[\\040-\\176]+$
, MaxLength: 1024). Client secret from provider.Optional
allow_sign_up
(boolean). Automatically sign-up users on successful sign-in.api_url
(string, MaxLength: 2048). API URL. This only needs to be set when using self hosted GitLab.auth_url
(string, MaxLength: 2048). Authorization URL. This only needs to be set when using self hosted GitLab.token_url
(string, MaxLength: 2048). Token URL. This only needs to be set when using self hosted GitLab.Appears on spec.userConfig
.
Google Auth integration.
Required
allowed_domains
(array of strings, MaxItems: 64). Domains allowed to sign-in to this Grafana.client_id
(string, Pattern: ^[\\040-\\176]+$
, MaxLength: 1024). Client ID from provider.client_secret
(string, Pattern: ^[\\040-\\176]+$
, MaxLength: 1024). Client secret from provider.Optional
allow_sign_up
(boolean). Automatically sign-up users on successful sign-in.Appears on spec.userConfig
.
Grafana date format specifications.
Optional
default_timezone
(string, MaxLength: 64). Default time zone for user preferences. Value browser
uses browser local time zone.full_date
(string, MaxLength: 128). Moment.js style format string for cases where full date is shown.interval_day
(string, MaxLength: 128). Moment.js style format string used when a time requiring day accuracy is shown.interval_hour
(string, MaxLength: 128). Moment.js style format string used when a time requiring hour accuracy is shown.interval_minute
(string, MaxLength: 128). Moment.js style format string used when a time requiring minute accuracy is shown.interval_month
(string, MaxLength: 128). Moment.js style format string used when a time requiring month accuracy is shown.interval_second
(string, MaxLength: 128). Moment.js style format string used when a time requiring second accuracy is shown.interval_year
(string, MaxLength: 128). Moment.js style format string used when a time requiring year accuracy is shown.Appears on spec.userConfig
.
External image store settings.
Required
access_key
(string, Pattern: ^[A-Z0-9]+$
, MaxLength: 4096). S3 access key. Requires permissions to the S3 bucket for the s3:PutObject and s3:PutObjectAcl actions.bucket_url
(string, MaxLength: 2048). Bucket URL for S3.provider
(string, Enum: s3
). Provider type.secret_key
(string, Pattern: ^[A-Za-z0-9/+=]+$
, MaxLength: 4096). S3 secret key.Appears on spec.userConfig
.
Allow incoming connections from CIDR address block, e.g. 10.20.0.0/16
.
Required
network
(string, MaxLength: 43). CIDR address block.Optional
description
(string, MaxLength: 1024). Description for IP filter list entry.Appears on spec.userConfig
.
Allow access to selected service ports from private networks.
Required
grafana
(boolean). Allow clients to connect to grafana with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.Appears on spec.userConfig
.
Allow access to selected service components through Privatelink.
Required
grafana
(boolean). Enable grafana.Appears on spec.userConfig
.
Allow access to selected service ports from the public Internet.
Required
grafana
(boolean). Allow clients to connect to grafana from the public internet for service nodes that are in a project VPC or another type of private network.Appears on spec.userConfig
.
SMTP server settings.
Required
from_address
(string, MaxLength: 319). Address used for sending emails.host
(string, MaxLength: 255). Server hostname or IP.port
(integer, Minimum: 1, Maximum: 65535). SMTP server port.Optional
from_name
(string, Pattern: ^[^\\x00-\\x1F]+$
, MaxLength: 128). Name used in outgoing emails, defaults to Grafana.password
(string, Pattern: ^[^\\x00-\\x1F]+$
, MaxLength: 255). Password for SMTP authentication.skip_verify
(boolean). Skip verifying server certificate. Defaults to false.starttls_policy
(string, Enum: OpportunisticStartTLS
, MandatoryStartTLS
, NoStartTLS
). Either OpportunisticStartTLS, MandatoryStartTLS or NoStartTLS. Default is OpportunisticStartTLS.username
(string, Pattern: ^[^\\x00-\\x1F]+$
, MaxLength: 255). Username for SMTP authentication.apiVersion: aiven.io/v1alpha1\nkind: Kafka\nmetadata:\n name: my-kafka\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n connInfoSecretTarget:\n name: kafka-secret\n prefix: MY_SECRET_PREFIX_\n annotations:\n foo: bar\n labels:\n baz: egg\n\n project: my-aiven-project\n cloudName: google-europe-west1\n plan: startup-2\n\n maintenanceWindowDow: friday\n maintenanceWindowTime: 23:00:00\n
"},{"location":"api-reference/kafka.html#Kafka","title":"Kafka","text":"Kafka is the Schema for the kafkas API.
Required
apiVersion
(string). Value aiven.io/v1alpha1
.kind
(string). Value Kafka
.metadata
(object). Data that identifies the object, including a name
string and optional namespace
.spec
(object). KafkaSpec defines the desired state of Kafka. See below for nested schema.Appears on Kafka
.
KafkaSpec defines the desired state of Kafka.
Required
plan
(string, MaxLength: 128). Subscription plan.project
(string, Immutable, MaxLength: 63, Format: ^[a-zA-Z0-9_-]*$
). Target project.Optional
authSecretRef
(object). Authentication reference to Aiven token in a secret. See below for nested schema.cloudName
(string, MaxLength: 256). Cloud the service runs in.connInfoSecretTarget
(object). Information regarding secret creation. Exposed keys: KAFKA_HOST
, KAFKA_PORT
, KAFKA_USERNAME
, KAFKA_PASSWORD
, KAFKA_ACCESS_CERT
, KAFKA_ACCESS_KEY
, KAFKA_SASL_HOST
, KAFKA_SASL_PORT
, KAFKA_SCHEMA_REGISTRY_HOST
, KAFKA_SCHEMA_REGISTRY_PORT
, KAFKA_CONNECT_HOST
, KAFKA_CONNECT_PORT
, KAFKA_REST_HOST
, KAFKA_REST_PORT
. See below for nested schema.disk_space
(string, Format: ^[1-9][0-9]*(GiB|G)*
). The disk space of the service, possible values depend on the service type, the cloud provider and the project. Reducing will result in the service re-balancing.karapace
(boolean). Switch the service to use Karapace for schema registry and REST proxy.maintenanceWindowDow
(string, Enum: monday
, tuesday
, wednesday
, thursday
, friday
, saturday
, sunday
). Day of week when maintenance operations should be performed. One monday, tuesday, wednesday, etc.maintenanceWindowTime
(string, MaxLength: 8). Time of day when maintenance operations should be performed. UTC time in HH:mm:ss format.projectVPCRef
(object). ProjectVPCRef reference to ProjectVPC resource to use its ID as ProjectVPCID automatically. See below for nested schema.projectVpcId
(string, MaxLength: 36). Identifier of the VPC the service should be in, if any.serviceIntegrations
(array of objects, Immutable, MaxItems: 1). Service integrations to specify when creating a service. Not applied after initial service creation. See below for nested schema.tags
(object, AdditionalProperties: string). Tags are key-value pairs that allow you to categorize services.terminationProtection
(boolean). Prevent service from being deleted. It is recommended to have this enabled for all services.userConfig
(object). Kafka specific user configuration options. See below for nested schema.Appears on spec
.
Authentication reference to Aiven token in a secret.
Required
key
(string, MinLength: 1). name
(string, MinLength: 1). Appears on spec
.
Information regarding secret creation. Exposed keys: KAFKA_HOST
, KAFKA_PORT
, KAFKA_USERNAME
, KAFKA_PASSWORD
, KAFKA_ACCESS_CERT
, KAFKA_ACCESS_KEY
, KAFKA_SASL_HOST
, KAFKA_SASL_PORT
, KAFKA_SCHEMA_REGISTRY_HOST
, KAFKA_SCHEMA_REGISTRY_PORT
, KAFKA_CONNECT_HOST
, KAFKA_CONNECT_PORT
, KAFKA_REST_HOST
, KAFKA_REST_PORT
.
Required
name
(string). Name of the secret resource to be created. By default, is equal to the resource name.Optional
annotations
(object, AdditionalProperties: string). Annotations added to the secret.labels
(object, AdditionalProperties: string). Labels added to the secret.prefix
(string). Prefix for the secret's keys. Added \"as is\" without any transformations. By default, is equal to the kind name in uppercase + underscore, e.g. KAFKA_
, REDIS_
, etc.Appears on spec
.
ProjectVPCRef reference to ProjectVPC resource to use its ID as ProjectVPCID automatically.
Required
name
(string, MinLength: 1). Optional
namespace
(string, MinLength: 1). Appears on spec
.
Service integrations to specify when creating a service. Not applied after initial service creation.
Required
integrationType
(string, Enum: read_replica
). sourceServiceName
(string, MinLength: 1, MaxLength: 64). Appears on spec
.
Kafka specific user configuration options.
Optional
additional_backup_regions
(array of strings, MaxItems: 1). Additional Cloud Regions for Backup Replication.aiven_kafka_topic_messages
(boolean). Allow access to read Kafka topic messages in the Aiven Console and REST API.custom_domain
(string, MaxLength: 255). Serve the web frontend using a custom CNAME pointing to the Aiven DNS name.ip_filter
(array of objects, MaxItems: 1024). Allow incoming connections from CIDR address block, e.g. 10.20.0.0/16
. See below for nested schema.kafka
(object). Kafka broker configuration values. See below for nested schema.kafka_authentication_methods
(object). Kafka authentication methods. See below for nested schema.kafka_connect
(boolean). Enable Kafka Connect service.kafka_connect_config
(object). Kafka Connect configuration values. See below for nested schema.kafka_rest
(boolean). Enable Kafka-REST service.kafka_rest_authorization
(boolean). Enable authorization in Kafka-REST service.kafka_rest_config
(object). Kafka REST configuration. See below for nested schema.kafka_version
(string, Enum: 3.3
, 3.1
, 3.4
, 3.5
, 3.6
). Kafka major version.private_access
(object). Allow access to selected service ports from private networks. See below for nested schema.privatelink_access
(object). Allow access to selected service components through Privatelink. See below for nested schema.public_access
(object). Allow access to selected service ports from the public Internet. See below for nested schema.schema_registry
(boolean). Enable Schema-Registry service.schema_registry_config
(object). Schema Registry configuration. See below for nested schema.service_log
(boolean). Store logs for the service so that they are available in the HTTP API and console.static_ips
(boolean). Use static public IP addresses.tiered_storage
(object). Tiered storage configuration. See below for nested schema.Appears on spec.userConfig
.
Allow incoming connections from CIDR address block, e.g. 10.20.0.0/16
.
Required
network
(string, MaxLength: 43). CIDR address block.Optional
description
(string, MaxLength: 1024). Description for IP filter list entry.Appears on spec.userConfig
.
Kafka broker configuration values.
Optional
auto_create_topics_enable
(boolean). Enable auto creation of topics.compression_type
(string, Enum: gzip
, snappy
, lz4
, zstd
, uncompressed
, producer
). Specify the final compression type for a given topic. This configuration accepts the standard compression codecs (gzip
, snappy
, lz4
, zstd
). It additionally accepts uncompressed
which is equivalent to no compression; and producer
which means retain the original compression codec set by the producer.connections_max_idle_ms
(integer, Minimum: 1000, Maximum: 3600000). Idle connections timeout: the server socket processor threads close the connections that idle for longer than this.default_replication_factor
(integer, Minimum: 1, Maximum: 10). Replication factor for autocreated topics.group_initial_rebalance_delay_ms
(integer, Minimum: 0, Maximum: 300000). The amount of time, in milliseconds, the group coordinator will wait for more consumers to join a new group before performing the first rebalance. A longer delay means potentially fewer rebalances, but increases the time until processing begins. The default value for this is 3 seconds. During development and testing it might be desirable to set this to 0 in order to not delay test execution time.group_max_session_timeout_ms
(integer, Minimum: 0, Maximum: 1800000). The maximum allowed session timeout for registered consumers. Longer timeouts give consumers more time to process messages in between heartbeats at the cost of a longer time to detect failures.group_min_session_timeout_ms
(integer, Minimum: 0, Maximum: 60000). The minimum allowed session timeout for registered consumers. Longer timeouts give consumers more time to process messages in between heartbeats at the cost of a longer time to detect failures.log_cleaner_delete_retention_ms
(integer, Minimum: 0, Maximum: 315569260000). How long are delete records retained?.log_cleaner_max_compaction_lag_ms
(integer, Minimum: 30000). The maximum amount of time message will remain uncompacted. Only applicable for logs that are being compacted.log_cleaner_min_cleanable_ratio
(number, Minimum: 0.2, Maximum: 0.9). Controls log compactor frequency. Larger value means more frequent compactions but also more space wasted for logs. Consider setting log.cleaner.max.compaction.lag.ms to enforce compactions sooner, instead of setting a very high value for this option.log_cleaner_min_compaction_lag_ms
(integer, Minimum: 0). The minimum time a message will remain uncompacted in the log. Only applicable for logs that are being compacted.log_cleanup_policy
(string, Enum: delete
, compact
, compact,delete
). The default cleanup policy for segments beyond the retention window.log_flush_interval_messages
(integer, Minimum: 1). The number of messages accumulated on a log partition before messages are flushed to disk.log_flush_interval_ms
(integer, Minimum: 0). The maximum time in ms that a message in any topic is kept in memory before being flushed to disk. If not set, the value in log.flush.scheduler.interval.ms is used.log_index_interval_bytes
(integer, Minimum: 0, Maximum: 104857600). The interval with which Kafka adds an entry to the offset index.log_index_size_max_bytes
(integer, Minimum: 1048576, Maximum: 104857600). The maximum size in bytes of the offset index.log_local_retention_bytes
(integer, Minimum: -2). The maximum size local log segments can grow to for a partition before becoming eligible for deletion. If set to -2, the value of log.retention.bytes is used. The effective value should always be less than or equal to the log.retention.bytes value.log_local_retention_ms
(integer, Minimum: -2). The number of milliseconds to keep the local log segments before they become eligible for deletion. If set to -2, the value of log.retention.ms is used. The effective value should always be less than or equal to the log.retention.ms value.log_message_downconversion_enable
(boolean). This configuration controls whether down-conversion of message formats is enabled to satisfy consume requests.log_message_timestamp_difference_max_ms
(integer, Minimum: 0). The maximum difference allowed between the timestamp when a broker receives a message and the timestamp specified in the message.log_message_timestamp_type
(string, Enum: CreateTime
, LogAppendTime
). Define whether the timestamp in the message is message create time or log append time.log_preallocate
(boolean). Whether to preallocate the file when creating a new segment.log_retention_bytes
(integer, Minimum: -1). The maximum size of the log before deleting messages.log_retention_hours
(integer, Minimum: -1, Maximum: 2147483647). The number of hours to keep a log file before deleting it.log_retention_ms
(integer, Minimum: -1). The number of milliseconds to keep a log file before deleting it. If not set, the value in log.retention.minutes is used. If set to -1, no time limit is applied.log_roll_jitter_ms
(integer, Minimum: 0). The maximum jitter to subtract from logRollTimeMillis (in milliseconds). If not set, the value in log.roll.jitter.hours is used.log_roll_ms
(integer, Minimum: 1). The maximum time before a new log segment is rolled out (in milliseconds).log_segment_bytes
(integer, Minimum: 10485760, Maximum: 1073741824). The maximum size of a single log file.log_segment_delete_delay_ms
(integer, Minimum: 0, Maximum: 3600000). The amount of time to wait before deleting a file from the filesystem.max_connections_per_ip
(integer, Minimum: 256, Maximum: 2147483647). The maximum number of connections allowed from each ip address (defaults to 2147483647).max_incremental_fetch_session_cache_slots
(integer, Minimum: 1000, Maximum: 10000). The maximum number of incremental fetch sessions that the broker will maintain.message_max_bytes
(integer, Minimum: 0, Maximum: 100001200). The maximum size of message that the server can receive.min_insync_replicas
(integer, Minimum: 1, Maximum: 7). When a producer sets acks to all
(or -1
), min.insync.replicas specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful.num_partitions
(integer, Minimum: 1, Maximum: 1000). Number of partitions for autocreated topics.offsets_retention_minutes
(integer, Minimum: 1, Maximum: 2147483647). Log retention window in minutes for offsets topic.producer_purgatory_purge_interval_requests
(integer, Minimum: 10, Maximum: 10000). The purge interval (in number of requests) of the producer request purgatory (defaults to 1000).replica_fetch_max_bytes
(integer, Minimum: 1048576, Maximum: 104857600). The number of bytes of messages to attempt to fetch for each partition (defaults to 1048576). This is not an absolute maximum, if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that progress can be made.replica_fetch_response_max_bytes
(integer, Minimum: 10485760, Maximum: 1048576000). Maximum bytes expected for the entire fetch response (defaults to 10485760). Records are fetched in batches, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that progress can be made. As such, this is not an absolute maximum.sasl_oauthbearer_expected_audience
(string, MaxLength: 128). The (optional) comma-delimited setting for the broker to use to verify that the JWT was issued for one of the expected audiences.sasl_oauthbearer_expected_issuer
(string, MaxLength: 128). Optional setting for the broker to use to verify that the JWT was created by the expected issuer.sasl_oauthbearer_jwks_endpoint_url
(string, MaxLength: 2048). OIDC JWKS endpoint URL. By setting this the SASL SSL OAuth2/OIDC authentication is enabled. See also other options for SASL OAuth2/OIDC.sasl_oauthbearer_sub_claim_name
(string, MaxLength: 128). Name of the scope from which to extract the subject claim from the JWT. Defaults to sub.socket_request_max_bytes
(integer, Minimum: 10485760, Maximum: 209715200). The maximum number of bytes in a socket request (defaults to 104857600).transaction_partition_verification_enable
(boolean). Enable verification that checks that the partition has been added to the transaction before writing transactional records to the partition.transaction_remove_expired_transaction_cleanup_interval_ms
(integer, Minimum: 600000, Maximum: 3600000). The interval at which to remove transactions that have expired due to transactional.id.expiration.ms passing (defaults to 3600000 (1 hour)).transaction_state_log_segment_bytes
(integer, Minimum: 1048576, Maximum: 2147483647). The transaction topic segment bytes should be kept relatively small in order to facilitate faster log compaction and cache loads (defaults to 104857600 (100 mebibytes)).
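For illustration, here is a minimal sketch of how a few of these broker options might be set on a Kafka resource; the metadata, project, and plan values are placeholders, and the numbers are examples within the documented ranges rather than recommendations:

```yaml
apiVersion: aiven.io/v1alpha1
kind: Kafka
metadata:
  name: my-kafka
spec:
  project: my-aiven-project
  plan: business-4

  userConfig:
    kafka:
      auto_create_topics_enable: false
      default_replication_factor: 3
      min_insync_replicas: 2
      log_retention_hours: 168
```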
Appears on spec.userConfig.

Kafka authentication methods.
Optional
certificate
(boolean). Enable certificate/SSL authentication.sasl
(boolean). Enable SASL authentication.Appears on spec.userConfig
.
Kafka Connect configuration values.
Optional
connector_client_config_override_policy
(string, Enum: None
, All
). Defines what client configurations can be overridden by the connector. Default is None.consumer_auto_offset_reset
(string, Enum: earliest
, latest
). What to do when there is no initial offset in Kafka or if the current offset does not exist any more on the server. Default is earliest.consumer_fetch_max_bytes
(integer, Minimum: 1048576, Maximum: 104857600). Records are fetched in batches by the consumer, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that the consumer can make progress. As such, this is not an absolute maximum.consumer_isolation_level
(string, Enum: read_uncommitted
, read_committed
). Transaction read isolation level. read_uncommitted is the default, but read_committed can be used if consume-exactly-once behavior is desired.consumer_max_partition_fetch_bytes
(integer, Minimum: 1048576, Maximum: 104857600). Records are fetched in batches by the consumer. If the first record batch in the first non-empty partition of the fetch is larger than this limit, the batch will still be returned to ensure that the consumer can make progress.consumer_max_poll_interval_ms
(integer, Minimum: 1, Maximum: 2147483647). The maximum delay in milliseconds between invocations of poll() when using consumer group management (defaults to 300000).consumer_max_poll_records
(integer, Minimum: 1, Maximum: 10000). The maximum number of records returned in a single call to poll() (defaults to 500).offset_flush_interval_ms
(integer, Minimum: 1, Maximum: 100000000). The interval at which to try committing offsets for tasks (defaults to 60000).offset_flush_timeout_ms
(integer, Minimum: 1, Maximum: 2147483647). Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data to be committed in a future attempt (defaults to 5000).producer_batch_size
(integer, Minimum: 0, Maximum: 5242880). This setting gives the upper bound of the batch size to be sent. If there are fewer than this many bytes accumulated for this partition, the producer will linger
for the linger.ms time waiting for more records to show up. A batch size of zero will disable batching entirely (defaults to 16384).producer_buffer_memory
(integer, Minimum: 5242880, Maximum: 134217728). The total bytes of memory the producer can use to buffer records waiting to be sent to the broker (defaults to 33554432).producer_compression_type
(string, Enum: gzip
, snappy
, lz4
, zstd
, none
). Specify the default compression type for producers. This configuration accepts the standard compression codecs (gzip
, snappy
, lz4
, zstd
). It additionally accepts none
which is the default and equivalent to no compression.producer_linger_ms
(integer, Minimum: 0, Maximum: 5000). This setting gives the upper bound on the delay for batching: once there is batch.size worth of records for a partition it will be sent immediately regardless of this setting; however, if there are fewer than this many bytes accumulated for this partition the producer will linger
for the specified time waiting for more records to show up. Defaults to 0.producer_max_request_size
(integer, Minimum: 131072, Maximum: 67108864). This setting will limit the number of record batches the producer will send in a single request to avoid sending huge requests.scheduled_rebalance_max_delay_ms
(integer, Minimum: 0, Maximum: 600000). The maximum delay that is scheduled in order to wait for the return of one or more departed workers before rebalancing and reassigning their connectors and tasks to the group. During this period the connectors and tasks of the departed workers remain unassigned. Defaults to 5 minutes.session_timeout_ms
(integer, Minimum: 1, Maximum: 2147483647). The timeout in milliseconds used to detect failures when using Kafka's group management facilities (defaults to 10000).
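As a sketch, a few of these Kafka Connect options set under spec.userConfig; the values are illustrative only:

```yaml
userConfig:
  kafka_connect:
    connector_client_config_override_policy: None
    consumer_isolation_level: read_committed
    consumer_max_poll_records: 500
    offset_flush_interval_ms: 60000
```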
Appears on spec.userConfig.

Kafka REST configuration.
Optional
consumer_enable_auto_commit
(boolean). If true the consumer's offset will be periodically committed to Kafka in the background.consumer_request_max_bytes
(integer, Minimum: 0, Maximum: 671088640). Maximum number of bytes in unencoded message keys and values by a single request.consumer_request_timeout_ms
(integer, Enum: 1000
, 15000
, 30000
, Minimum: 1000, Maximum: 30000). The maximum total time to wait for messages for a request if the maximum number of messages has not yet been reached.name_strategy_validation
(boolean). If true, validate that the given schema is registered under the expected subject name by the used name strategy when producing messages.producer_acks
(string, Enum: all
, -1
, 0
, 1
). The number of acknowledgments the producer requires the leader to have received before considering a request complete. If set to all
or -1
, the leader will wait for the full set of in-sync replicas to acknowledge the record.producer_compression_type
(string, Enum: gzip
, snappy
, lz4
, zstd
, none
). Specify the default compression type for producers. This configuration accepts the standard compression codecs (gzip
, snappy
, lz4
, zstd
). It additionally accepts none
which is the default and equivalent to no compression.producer_linger_ms
(integer, Minimum: 0, Maximum: 5000). Wait for up to the given delay to allow batching records together.producer_max_request_size
(integer, Minimum: 0, Maximum: 2147483647). The maximum size of a request in bytes. Note that Kafka broker can also cap the record batch size.simpleconsumer_pool_size_max
(integer, Minimum: 10, Maximum: 250). Maximum number of SimpleConsumers that can be instantiated per broker.Appears on spec.userConfig
.
Allow access to selected service ports from private networks.
Optional
kafka
(boolean). Allow clients to connect to kafka with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.kafka_connect
(boolean). Allow clients to connect to kafka_connect with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.kafka_rest
(boolean). Allow clients to connect to kafka_rest with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.prometheus
(boolean). Allow clients to connect to prometheus with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.schema_registry
(boolean). Allow clients to connect to schema_registry with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.Appears on spec.userConfig
.
Allow access to selected service components through Privatelink.
Optional
jolokia
(boolean). Enable jolokia.kafka
(boolean). Enable kafka.kafka_connect
(boolean). Enable kafka_connect.kafka_rest
(boolean). Enable kafka_rest.prometheus
(boolean). Enable prometheus.schema_registry
(boolean). Enable schema_registry.Appears on spec.userConfig
.
Allow access to selected service ports from the public Internet.
Optional
kafka
(boolean). Allow clients to connect to kafka from the public internet for service nodes that are in a project VPC or another type of private network.kafka_connect
(boolean). Allow clients to connect to kafka_connect from the public internet for service nodes that are in a project VPC or another type of private network.kafka_rest
(boolean). Allow clients to connect to kafka_rest from the public internet for service nodes that are in a project VPC or another type of private network.prometheus
(boolean). Allow clients to connect to prometheus from the public internet for service nodes that are in a project VPC or another type of private network.schema_registry
(boolean). Allow clients to connect to schema_registry from the public internet for service nodes that are in a project VPC or another type of private network.Appears on spec.userConfig
.
Schema Registry configuration.
Optional
leader_eligibility
(boolean). If true, Karapace / Schema Registry on the service nodes can participate in leader election. You may need to disable this when the schemas topic is replicated to a secondary cluster and the Karapace / Schema Registry there must not participate in leader election. Defaults to true
.topic_name
(string, MinLength: 1, MaxLength: 249). The durable single partition topic that acts as the durable log for the data. This topic must be compacted to avoid losing data due to retention policy. Please note that changing this configuration in an existing Schema Registry / Karapace setup leads to previous schemas being inaccessible, data encoded with them potentially unreadable and schema ID sequence put out of order. It's only possible to do the switch while Schema Registry / Karapace is disabled. Defaults to _schemas
.
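For example, a sketch combining the schema_registry and schema_registry_config options shown above under spec.userConfig, using the documented defaults:

```yaml
userConfig:
  schema_registry: true
  schema_registry_config:
    leader_eligibility: true
    topic_name: _schemas
```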
Appears on spec.userConfig.

Tiered storage configuration.
Optional
enabled
(boolean). Whether to enable the tiered storage functionality.local_cache
(object). Deprecated. Local cache configuration. See below for nested schema.
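For example, tiered storage could be switched on under spec.userConfig with a sketch like this:

```yaml
userConfig:
  tiered_storage:
    enabled: true
```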
Appears on spec.userConfig.tiered_storage.

Deprecated. Local cache configuration.
Required
size
(integer, Minimum: 1, Maximum: 107374182400). Deprecated. Local cache size in bytes.

```yaml
apiVersion: aiven.io/v1alpha1
kind: KafkaACL
metadata:
  name: my-kafka-acl
spec:
  authSecretRef:
    name: aiven-token
    key: token

  project: my-aiven-project
  serviceName: my-kafka
  topic: my-topic
  username: my-user
  permission: admin
```

KafkaACL is the Schema for the kafkaacls API.
Required
apiVersion
(string). Value aiven.io/v1alpha1
.kind
(string). Value KafkaACL
.metadata
(object). Data that identifies the object, including a name
string and optional namespace
.spec
(object). KafkaACLSpec defines the desired state of KafkaACL. See below for nested schema.Appears on KafkaACL
.
KafkaACLSpec defines the desired state of KafkaACL.
Required
permission
(string, Enum: admin
, read
, readwrite
, write
). Kafka permission to grant (admin, read, readwrite, write).project
(string, MaxLength: 63, Format: ^[a-zA-Z0-9_-]*$
). Project to link the Kafka ACL to.serviceName
(string, MaxLength: 63). Service to link the Kafka ACL to.topic
(string). Topic name pattern for the ACL entry.username
(string). Username pattern for the ACL entry.Optional
authSecretRef
(object). Authentication reference to Aiven token in a secret. See below for nested schema.Appears on spec
.
Authentication reference to Aiven token in a secret.
Required
key
(string, MinLength: 1). name
(string, MinLength: 1).

```yaml
apiVersion: aiven.io/v1alpha1
kind: KafkaConnect
metadata:
  name: my-kafka-connect
spec:
  authSecretRef:
    name: aiven-token
    key: token

  project: my-aiven-project
  cloudName: google-europe-west1
  plan: business-4

  userConfig:
    kafka_connect:
      consumer_isolation_level: read_committed
    public_access:
      kafka_connect: true
```

KafkaConnect is the Schema for the kafkaconnects API.
Required
apiVersion
(string). Value aiven.io/v1alpha1
.kind
(string). Value KafkaConnect
.metadata
(object). Data that identifies the object, including a name
string and optional namespace
.spec
(object). KafkaConnectSpec defines the desired state of KafkaConnect. See below for nested schema.Appears on KafkaConnect
.
KafkaConnectSpec defines the desired state of KafkaConnect.
Required
plan
(string, MaxLength: 128). Subscription plan.project
(string, Immutable, MaxLength: 63, Format: ^[a-zA-Z0-9_-]*$
). Target project.Optional
authSecretRef
(object). Authentication reference to Aiven token in a secret. See below for nested schema.cloudName
(string, MaxLength: 256). Cloud the service runs in.maintenanceWindowDow
(string, Enum: monday
, tuesday
, wednesday
, thursday
, friday
, saturday
, sunday
). Day of week when maintenance operations should be performed. One of monday, tuesday, wednesday, etc.maintenanceWindowTime
(string, MaxLength: 8). Time of day when maintenance operations should be performed. UTC time in HH:mm:ss format.projectVPCRef
(object). ProjectVPCRef reference to ProjectVPC resource to use its ID as ProjectVPCID automatically. See below for nested schema.projectVpcId
(string, MaxLength: 36). Identifier of the VPC the service should be in, if any.serviceIntegrations
(array of objects, Immutable, MaxItems: 1). Service integrations to specify when creating a service. Not applied after initial service creation. See below for nested schema.tags
(object, AdditionalProperties: string). Tags are key-value pairs that allow you to categorize services.terminationProtection
(boolean). Prevent service from being deleted. It is recommended to have this enabled for all services.userConfig
(object). KafkaConnect specific user configuration options. See below for nested schema.Appears on spec
.
Authentication reference to Aiven token in a secret.
Required
key
(string, MinLength: 1). name
(string, MinLength: 1). Appears on spec
.
ProjectVPCRef reference to ProjectVPC resource to use its ID as ProjectVPCID automatically.
Required
name
(string, MinLength: 1). Optional
namespace
(string, MinLength: 1). Appears on spec
.
Service integrations to specify when creating a service. Not applied after initial service creation.
Required
integrationType
(string, Enum: read_replica
). sourceServiceName
(string, MinLength: 1, MaxLength: 64). Appears on spec
.
KafkaConnect specific user configuration options.
Optional
additional_backup_regions
(array of strings, MaxItems: 1). Additional Cloud Regions for Backup Replication.ip_filter
(array of objects, MaxItems: 1024). Allow incoming connections from CIDR address block, e.g. 10.20.0.0/16
. See below for nested schema.kafka_connect
(object). Kafka Connect configuration values. See below for nested schema.private_access
(object). Allow access to selected service ports from private networks. See below for nested schema.privatelink_access
(object). Allow access to selected service components through Privatelink. See below for nested schema.public_access
(object). Allow access to selected service ports from the public Internet. See below for nested schema.service_log
(boolean). Store logs for the service so that they are available in the HTTP API and console.static_ips
(boolean). Use static public IP addresses.Appears on spec.userConfig
.
Allow incoming connections from CIDR address block, e.g. 10.20.0.0/16
.
Required
network
(string, MaxLength: 43). CIDR address block.Optional
description
(string, MaxLength: 1024). Description for IP filter list entry.Appears on spec.userConfig
.
Kafka Connect configuration values.
Optional
connector_client_config_override_policy
(string, Enum: None
, All
). Defines what client configurations can be overridden by the connector. Default is None.consumer_auto_offset_reset
(string, Enum: earliest
, latest
). What to do when there is no initial offset in Kafka or if the current offset does not exist any more on the server. Default is earliest.consumer_fetch_max_bytes
(integer, Minimum: 1048576, Maximum: 104857600). Records are fetched in batches by the consumer, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that the consumer can make progress. As such, this is not an absolute maximum.consumer_isolation_level
(string, Enum: read_uncommitted
, read_committed
). Transaction read isolation level. read_uncommitted is the default, but read_committed can be used if consume-exactly-once behavior is desired.consumer_max_partition_fetch_bytes
(integer, Minimum: 1048576, Maximum: 104857600). Records are fetched in batches by the consumer. If the first record batch in the first non-empty partition of the fetch is larger than this limit, the batch will still be returned to ensure that the consumer can make progress.consumer_max_poll_interval_ms
(integer, Minimum: 1, Maximum: 2147483647). The maximum delay in milliseconds between invocations of poll() when using consumer group management (defaults to 300000).consumer_max_poll_records
(integer, Minimum: 1, Maximum: 10000). The maximum number of records returned in a single call to poll() (defaults to 500).offset_flush_interval_ms
(integer, Minimum: 1, Maximum: 100000000). The interval at which to try committing offsets for tasks (defaults to 60000).offset_flush_timeout_ms
(integer, Minimum: 1, Maximum: 2147483647). Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data to be committed in a future attempt (defaults to 5000).producer_batch_size
(integer, Minimum: 0, Maximum: 5242880). This setting gives the upper bound of the batch size to be sent. If there are fewer than this many bytes accumulated for this partition, the producer will linger
for the linger.ms time waiting for more records to show up. A batch size of zero will disable batching entirely (defaults to 16384).producer_buffer_memory
(integer, Minimum: 5242880, Maximum: 134217728). The total bytes of memory the producer can use to buffer records waiting to be sent to the broker (defaults to 33554432).producer_compression_type
(string, Enum: gzip
, snappy
, lz4
, zstd
, none
). Specify the default compression type for producers. This configuration accepts the standard compression codecs (gzip
, snappy
, lz4
, zstd
). It additionally accepts none
which is the default and equivalent to no compression.producer_linger_ms
(integer, Minimum: 0, Maximum: 5000). This setting gives the upper bound on the delay for batching: once there is batch.size worth of records for a partition it will be sent immediately regardless of this setting; however, if there are fewer than this many bytes accumulated for this partition the producer will linger
for the specified time waiting for more records to show up. Defaults to 0.producer_max_request_size
(integer, Minimum: 131072, Maximum: 67108864). This setting will limit the number of record batches the producer will send in a single request to avoid sending huge requests.scheduled_rebalance_max_delay_ms
(integer, Minimum: 0, Maximum: 600000). The maximum delay that is scheduled in order to wait for the return of one or more departed workers before rebalancing and reassigning their connectors and tasks to the group. During this period the connectors and tasks of the departed workers remain unassigned. Defaults to 5 minutes.session_timeout_ms
(integer, Minimum: 1, Maximum: 2147483647). The timeout in milliseconds used to detect failures when using Kafka's group management facilities (defaults to 10000).Appears on spec.userConfig
.
Allow access to selected service ports from private networks.
Optional
kafka_connect
(boolean). Allow clients to connect to kafka_connect with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.prometheus
(boolean). Allow clients to connect to prometheus with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.Appears on spec.userConfig
.
Allow access to selected service components through Privatelink.
Optional
jolokia
(boolean). Enable jolokia.kafka_connect
(boolean). Enable kafka_connect.prometheus
(boolean). Enable prometheus.Appears on spec.userConfig
.
Allow access to selected service ports from the public Internet.
Optional
kafka_connect
(boolean). Allow clients to connect to kafka_connect from the public internet for service nodes that are in a project VPC or another type of private network.prometheus
(boolean). Allow clients to connect to prometheus from the public internet for service nodes that are in a project VPC or another type of private network.KafkaConnector is the Schema for the kafkaconnectors API.
Required
apiVersion
(string). Value aiven.io/v1alpha1
.kind
(string). Value KafkaConnector
.metadata
(object). Data that identifies the object, including a name
string and optional namespace
.spec
(object). KafkaConnectorSpec defines the desired state of KafkaConnector. See below for nested schema.Appears on KafkaConnector
.
KafkaConnectorSpec defines the desired state of KafkaConnector.
Required
connectorClass
(string, MaxLength: 1024). The Java class of the connector.project
(string, MaxLength: 63, Format: ^[a-zA-Z0-9_-]*$
). Target project.serviceName
(string, MaxLength: 63). Service name.userConfig
(object, AdditionalProperties: string). The connector-specific configuration. To build config values from a secret, the template function {{ fromSecret "name" "key" }} is provided when interpreting the keys.Optional
authSecretRef
(object). Authentication reference to Aiven token in a secret. See below for nested schema.
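As a sketch, a KafkaConnector resource using the fromSecret template function described above might look like the following; the connector class and its configuration keys are hypothetical examples, not taken from this reference:

```yaml
apiVersion: aiven.io/v1alpha1
kind: KafkaConnector
metadata:
  name: my-connector
spec:
  authSecretRef:
    name: aiven-token
    key: token

  project: my-aiven-project
  serviceName: my-kafka-connect
  connectorClass: io.aiven.connect.jdbc.JdbcSinkConnector  # hypothetical connector class
  userConfig:
    topics: my-topic
    # hypothetical key whose value is resolved from a Kubernetes secret
    connection.uri: '{{ fromSecret "my-connection-secret" "uri" }}'
```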
Appears on spec.

Authentication reference to Aiven token in a secret.
Required
key
(string, MinLength: 1). name
(string, MinLength: 1).

```yaml
apiVersion: aiven.io/v1alpha1
kind: KafkaSchema
metadata:
  name: my-schema
spec:
  authSecretRef:
    name: aiven-token
    key: token

  project: my-aiven-project
  serviceName: my-kafka
  subjectName: my-subject
  compatibilityLevel: BACKWARD
  schema: |
    {
      "doc": "example_doc",
      "fields": [{
        "default": 5,
        "doc": "field_doc",
        "name": "field_name",
        "namespace": "field_namespace",
        "type": "int"
      }],
      "name": "example_name",
      "namespace": "example_namespace",
      "type": "record"
    }
```

KafkaSchema is the Schema for the kafkaschemas API.
Required
apiVersion
(string). Value aiven.io/v1alpha1
.kind
(string). Value KafkaSchema
.metadata
(object). Data that identifies the object, including a name
string and optional namespace
.spec
(object). KafkaSchemaSpec defines the desired state of KafkaSchema. See below for nested schema.Appears on KafkaSchema
.
KafkaSchemaSpec defines the desired state of KafkaSchema.
Required
project
(string, MaxLength: 63, Format: ^[a-zA-Z0-9_-]*$
). Project to link the Kafka Schema to.schema
(string). Kafka Schema configuration should be a valid Avro Schema JSON format.serviceName
(string, MaxLength: 63). Service to link the Kafka Schema to.subjectName
(string, MaxLength: 63). Kafka Schema Subject name.Optional
authSecretRef
(object). Authentication reference to Aiven token in a secret. See below for nested schema.compatibilityLevel
(string, Enum: BACKWARD
, BACKWARD_TRANSITIVE
, FORWARD
, FORWARD_TRANSITIVE
, FULL
, FULL_TRANSITIVE
, NONE
). Kafka Schemas compatibility level.Appears on spec
.
Authentication reference to Aiven token in a secret.
Required
key
(string, MinLength: 1). name
(string, MinLength: 1).

```yaml
apiVersion: aiven.io/v1alpha1
kind: KafkaTopic
metadata:
  name: kafka-topic
spec:
  authSecretRef:
    name: aiven-token
    key: token

  project: my-aiven-project
  serviceName: my-kafka
  topicName: my-kafka-topic

  replication: 2
  partitions: 1

  config:
    min_cleanable_dirty_ratio: 0.2
```

KafkaTopic is the Schema for the kafkatopics API.
Required
apiVersion
(string). Value aiven.io/v1alpha1
.kind
(string). Value KafkaTopic
.metadata
(object). Data that identifies the object, including a name
string and optional namespace
.spec
(object). KafkaTopicSpec defines the desired state of KafkaTopic. See below for nested schema.Appears on KafkaTopic
.
KafkaTopicSpec defines the desired state of KafkaTopic.
Required
partitions
(integer, Minimum: 1, Maximum: 1000000). Number of partitions to create in the topic.project
(string, MaxLength: 63, Format: ^[a-zA-Z0-9_-]*$
). Target project.replication
(integer, Minimum: 2). Replication factor for the topic.serviceName
(string, MaxLength: 63). Service name.Optional
authSecretRef
(object). Authentication reference to Aiven token in a secret. See below for nested schema.config
(object). Kafka topic configuration. See below for nested schema.tags
(array of objects). Kafka topic tags. See below for nested schema.termination_protection
(boolean). Kubernetes-side deletion protection, which prevents the Kafka topic from being deleted by Kubernetes. It is recommended to enable this for any production topics containing critical data.topicName
(string, Immutable, MinLength: 1, MaxLength: 249). Topic name. If provided, is used instead of metadata.name. This field supports additional characters, has a longer length, and will replace metadata.name in future releases.Appears on spec
.
Authentication reference to Aiven token in a secret.
Required
key
(string, MinLength: 1). name
(string, MinLength: 1). Appears on spec
.
Kafka topic configuration.
Optional
cleanup_policy
(string). cleanup.policy value.compression_type
(string). compression.type value.delete_retention_ms
(integer). delete.retention.ms value.file_delete_delay_ms
(integer). file.delete.delay.ms value.flush_messages
(integer). flush.messages value.flush_ms
(integer). flush.ms value.index_interval_bytes
(integer). index.interval.bytes value.max_compaction_lag_ms
(integer). max.compaction.lag.ms value.max_message_bytes
(integer). max.message.bytes value.message_downconversion_enable
(boolean). message.downconversion.enable value.message_format_version
(string). message.format.version value.message_timestamp_difference_max_ms
(integer). message.timestamp.difference.max.ms value.message_timestamp_type
(string). message.timestamp.type value.min_cleanable_dirty_ratio
(number). min.cleanable.dirty.ratio value.min_compaction_lag_ms
(integer). min.compaction.lag.ms value.min_insync_replicas
(integer). min.insync.replicas value.preallocate
(boolean). preallocate value.retention_bytes
(integer). retention.bytes value.retention_ms
(integer). retention.ms value.segment_bytes
(integer). segment.bytes value.segment_index_bytes
(integer). segment.index.bytes value.segment_jitter_ms
(integer). segment.jitter.ms value.segment_ms
(integer). segment.ms value.Appears on spec
.
Kafka topic tags.
Required
key
(string, MinLength: 1, MaxLength: 64, Format: ^[a-zA-Z0-9_-]*$
). Optional
value
(string, MaxLength: 256, Format: ^[a-zA-Z0-9_-]*$
).

```yaml
apiVersion: aiven.io/v1alpha1
kind: MySQL
metadata:
  name: my-mysql
spec:
  authSecretRef:
    name: aiven-token
    key: token

  connInfoSecretTarget:
    name: mysql-secret
    prefix: MY_SECRET_PREFIX_
    annotations:
      foo: bar
    labels:
      baz: egg

  project: my-aiven-project
  cloudName: google-europe-west1
  plan: business-4

  maintenanceWindowDow: sunday
  maintenanceWindowTime: "11:00:00"

  userConfig:
    backup_hour: 17
    backup_minute: 11
    ip_filter:
      - network: 0.0.0.0
        description: whatever
      - network: 10.20.0.0/16
```

MySQL is the Schema for the mysqls API.
Required
apiVersion
(string). Value aiven.io/v1alpha1
.kind
(string). Value MySQL
.metadata
(object). Data that identifies the object, including a name
string and optional namespace
.spec
(object). MySQLSpec defines the desired state of MySQL. See below for nested schema.Appears on MySQL
.
MySQLSpec defines the desired state of MySQL.
Required
plan
(string, MaxLength: 128). Subscription plan.project
(string, Immutable, MaxLength: 63, Format: ^[a-zA-Z0-9_-]*$
). Target project.Optional
authSecretRef
(object). Authentication reference to Aiven token in a secret. See below for nested schema.cloudName
(string, MaxLength: 256). Cloud the service runs in.connInfoSecretTarget
(object). Information regarding secret creation. Exposed keys: MYSQL_HOST
, MYSQL_PORT
, MYSQL_DATABASE
, MYSQL_USER
, MYSQL_PASSWORD
, MYSQL_SSL_MODE
, MYSQL_URI
, MYSQL_REPLICA_URI
. See below for nested schema.disk_space
(string, Format: ^[1-9][0-9]*(GiB|G)*
). The disk space of the service, possible values depend on the service type, the cloud provider and the project. Reducing will result in the service re-balancing.maintenanceWindowDow
(string, Enum: monday
, tuesday
, wednesday
, thursday
, friday
, saturday
, sunday
). Day of week when maintenance operations should be performed. One of monday, tuesday, wednesday, etc.maintenanceWindowTime
(string, MaxLength: 8). Time of day when maintenance operations should be performed. UTC time in HH:mm:ss format.projectVPCRef
(object). ProjectVPCRef reference to ProjectVPC resource to use its ID as ProjectVPCID automatically. See below for nested schema.projectVpcId
(string, MaxLength: 36). Identifier of the VPC the service should be in, if any.serviceIntegrations
(array of objects, Immutable, MaxItems: 1). Service integrations to specify when creating a service. Not applied after initial service creation. See below for nested schema.tags
(object, AdditionalProperties: string). Tags are key-value pairs that allow you to categorize services.terminationProtection
(boolean). Prevent service from being deleted. It is recommended to have this enabled for all services.userConfig
(object). MySQL specific user configuration options. See below for nested schema.Appears on spec
.
Authentication reference to Aiven token in a secret.
Required
key
(string, MinLength: 1). name
(string, MinLength: 1). Appears on spec
.
Information regarding secret creation. Exposed keys: MYSQL_HOST
, MYSQL_PORT
, MYSQL_DATABASE
, MYSQL_USER
, MYSQL_PASSWORD
, MYSQL_SSL_MODE
, MYSQL_URI
, MYSQL_REPLICA_URI
.
Required
name
(string). Name of the secret resource to be created. By default, is equal to the resource name.Optional
annotations
(object, AdditionalProperties: string). Annotations added to the secret.labels
(object, AdditionalProperties: string). Labels added to the secret.prefix
(string). Prefix for the secret's keys. Added "as is" without any transformations. By default, is equal to the kind name in uppercase + underscore, e.g. KAFKA_
, REDIS_
, etc.Appears on spec
.
ProjectVPCRef reference to ProjectVPC resource to use its ID as ProjectVPCID automatically.
Required
name
(string, MinLength: 1). Optional
namespace
(string, MinLength: 1). Appears on spec
.
Service integrations to specify when creating a service. Not applied after initial service creation.
Required
integrationType
(string, Enum: read_replica
). sourceServiceName
(string, MinLength: 1, MaxLength: 64). Appears on spec
.
MySQL specific user configuration options.
Optional
additional_backup_regions
(array of strings, MaxItems: 1). Additional Cloud Regions for Backup Replication.admin_password
(string, Immutable, Pattern: ^[a-zA-Z0-9-_]+$
, MinLength: 8, MaxLength: 256). Custom password for admin user. Defaults to random string. This must be set only when a new service is being created.admin_username
(string, Immutable, Pattern: ^[_A-Za-z0-9][-._A-Za-z0-9]{0,63}$
, MaxLength: 64). Custom username for admin user. This must be set only when a new service is being created.backup_hour
(integer, Minimum: 0, Maximum: 23). The hour of day (in UTC) when backup for the service is started. New backup is only started if previous backup has already completed.backup_minute
(integer, Minimum: 0, Maximum: 59). The minute of an hour when backup for the service is started. New backup is only started if previous backup has already completed.binlog_retention_period
(integer, Minimum: 600, Maximum: 86400). The minimum amount of time in seconds to keep binlog entries before deletion. This may be extended for services that require binlog entries for longer than the default for example if using the MySQL Debezium Kafka connector.ip_filter
(array of objects, MaxItems: 1024). Allow incoming connections from CIDR address block, e.g. 10.20.0.0/16
. See below for nested schema.migration
(object). Migrate data from existing server. See below for nested schema.mysql
(object). mysql.conf configuration values. See below for nested schema.mysql_version
(string, Enum: 8
). MySQL major version.private_access
(object). Allow access to selected service ports from private networks. See below for nested schema.privatelink_access
(object). Allow access to selected service components through Privatelink. See below for nested schema.project_to_fork_from
(string, Immutable, MaxLength: 63). Name of another project to fork a service from. This has effect only when a new service is being created.public_access
(object). Allow access to selected service ports from the public Internet. See below for nested schema.recovery_target_time
(string, Immutable, MaxLength: 32). Recovery target time when forking a service. This has effect only when a new service is being created.service_log
(boolean). Store logs for the service so that they are available in the HTTP API and console.service_to_fork_from
(string, Immutable, MaxLength: 64). Name of another service to fork from. This has effect only when a new service is being created.static_ips
(boolean). Use static public IP addresses.Appears on spec.userConfig
.
Allow incoming connections from CIDR address block, e.g. 10.20.0.0/16
.
Required
network
(string, MaxLength: 43). CIDR address block.Optional
description
(string, MaxLength: 1024). Description for IP filter list entry.Appears on spec.userConfig
.
Migrate data from existing server.
Required
host
(string, MaxLength: 255). Hostname or IP address of the server where to migrate data from.port
(integer, Minimum: 1, Maximum: 65535). Port number of the server where to migrate data from.Optional
dbname
(string, MaxLength: 63). Database name for bootstrapping the initial connection.ignore_dbs
(string, MaxLength: 2048). Comma-separated list of databases, which should be ignored during migration (supported by MySQL and PostgreSQL only at the moment).method
(string, Enum: dump
, replication
). The migration method to be used (currently supported only by Redis, Dragonfly, MySQL and PostgreSQL service types).password
(string, MaxLength: 256). Password for authentication with the server where to migrate data from.ssl
(boolean). The server where to migrate data from is secured with SSL.username
(string, MaxLength: 256). User name for authentication with the server to migrate data from.
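A sketch of a migration block under spec.userConfig; every connection value below is a placeholder for your source server:

```yaml
userConfig:
  migration:
    host: source-mysql.example.com  # placeholder source server
    port: 3306
    dbname: defaultdb
    username: migration_user
    password: migration_password
    ssl: true
    method: replication
```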
Appears on spec.userConfig.

mysql.conf configuration values.
Optional
connect_timeout
(integer, Minimum: 2, Maximum: 3600). The number of seconds that the mysqld server waits for a connect packet before responding with Bad handshake.default_time_zone
(string, MinLength: 2, MaxLength: 100). Default server time zone as an offset from UTC (from -12:00 to +12:00), a time zone name, or SYSTEM
to use the MySQL server default.group_concat_max_len
(integer, Minimum: 4). The maximum permitted result length in bytes for the GROUP_CONCAT() function.information_schema_stats_expiry
(integer, Minimum: 900, Maximum: 31536000). The time, in seconds, before cached statistics expire.innodb_change_buffer_max_size
(integer, Minimum: 0, Maximum: 50). Maximum size for the InnoDB change buffer, as a percentage of the total size of the buffer pool. Default is 25.innodb_flush_neighbors
(integer, Minimum: 0, Maximum: 2). Specifies whether flushing a page from the InnoDB buffer pool also flushes other dirty pages in the same extent (default is 1): 0 - dirty pages in the same extent are not flushed, 1 - flush contiguous dirty pages in the same extent, 2 - flush dirty pages in the same extent.innodb_ft_min_token_size
(integer, Minimum: 0, Maximum: 16). Minimum length of words that are stored in an InnoDB FULLTEXT index. Changing this parameter will lead to a restart of the MySQL service.innodb_ft_server_stopword_table
(string, Pattern: ^.+/.+$
, MaxLength: 1024). This option is used to specify your own InnoDB FULLTEXT index stopword list for all InnoDB tables.innodb_lock_wait_timeout
(integer, Minimum: 1, Maximum: 3600). The length of time in seconds an InnoDB transaction waits for a row lock before giving up. Default is 120.innodb_log_buffer_size
(integer, Minimum: 1048576, Maximum: 4294967295). The size in bytes of the buffer that InnoDB uses to write to the log files on disk.innodb_online_alter_log_max_size
(integer, Minimum: 65536, Maximum: 1099511627776). The upper limit in bytes on the size of the temporary log files used during online DDL operations for InnoDB tables.innodb_print_all_deadlocks
(boolean). When enabled, information about all deadlocks in InnoDB user transactions is recorded in the error log. Disabled by default.innodb_read_io_threads
(integer, Minimum: 1, Maximum: 64). The number of I/O threads for read operations in InnoDB. Default is 4. Changing this parameter will lead to a restart of the MySQL service.innodb_rollback_on_timeout
(boolean). When enabled a transaction timeout causes InnoDB to abort and roll back the entire transaction. Changing this parameter will lead to a restart of the MySQL service.innodb_thread_concurrency
(integer, Minimum: 0, Maximum: 1000). Defines the maximum number of threads permitted inside of InnoDB. Default is 0 (infinite concurrency - no limit).innodb_write_io_threads
(integer, Minimum: 1, Maximum: 64). The number of I/O threads for write operations in InnoDB. Default is 4. Changing this parameter will lead to a restart of the MySQL service.interactive_timeout
(integer, Minimum: 30, Maximum: 604800). The number of seconds the server waits for activity on an interactive connection before closing it.internal_tmp_mem_storage_engine
(string, Enum: TempTable
, MEMORY
). The storage engine for in-memory internal temporary tables.long_query_time
(number, Minimum: 0, Maximum: 3600). Slow queries are SQL statements that take more than long_query_time seconds to execute; they are captured in the slow query log. Default is 10s.max_allowed_packet
(integer, Minimum: 102400, Maximum: 1073741824). Size of the largest message in bytes that can be received by the server. Default is 67108864 (64M).max_heap_table_size
(integer, Minimum: 1048576, Maximum: 1073741824). Limits the size of internal in-memory tables. Also set tmp_table_size. Default is 16777216 (16M).net_buffer_length
(integer, Minimum: 1024, Maximum: 1048576). Start sizes of connection buffer and result buffer. Default is 16384 (16K). Changing this parameter will lead to a restart of the MySQL service.net_read_timeout
(integer, Minimum: 1, Maximum: 3600). The number of seconds to wait for more data from a connection before aborting the read.net_write_timeout
(integer, Minimum: 1, Maximum: 3600). The number of seconds to wait for a block to be written to a connection before aborting the write.slow_query_log
(boolean). Slow query log enables capturing of slow queries. Setting slow_query_log to false also truncates the mysql.slow_log table. Default is off.sort_buffer_size
(integer, Minimum: 32768, Maximum: 1073741824). Sort buffer size in bytes for ORDER BY optimization. Default is 262144 (256K).sql_mode
(string, Pattern: ^[A-Z_]*(,[A-Z_]+)*$
, MaxLength: 1024). Global SQL mode. Set to empty to use MySQL server defaults. When creating a new service and not setting this field Aiven default SQL mode (strict, SQL standard compliant) will be assigned.sql_require_primary_key
(boolean). Require primary key to be defined for new tables or old tables modified with ALTER TABLE and fail if missing. It is recommended to always have primary keys because various functionality may break if any large table is missing them.tmp_table_size
(integer, Minimum: 1048576, Maximum: 1073741824). Limits the size of internal in-memory tables. Also set max_heap_table_size. Default is 16777216 (16M).wait_timeout
(integer, Minimum: 1, Maximum: 2147483). The number of seconds the server waits for activity on a noninteractive connection before closing it.
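For illustration, a few of these mysql.conf values set under spec.userConfig; the numbers are examples within the documented ranges, not tuning advice:

```yaml
userConfig:
  mysql:
    connect_timeout: 10
    slow_query_log: true
    long_query_time: 2
    sql_require_primary_key: true
```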
Appears on spec.userConfig.

Allow access to selected service ports from private networks.
Optional
mysql
(boolean). Allow clients to connect to mysql with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.mysqlx
(boolean). Allow clients to connect to mysqlx with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.prometheus
(boolean). Allow clients to connect to prometheus with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.Appears on spec.userConfig
.
Allow access to selected service components through Privatelink.
Optional
mysql
(boolean). Enable mysql.mysqlx
(boolean). Enable mysqlx.prometheus
(boolean). Enable prometheus.Appears on spec.userConfig
.
Allow access to selected service ports from the public Internet.
Optional
mysql
(boolean). Allow clients to connect to mysql from the public internet for service nodes that are in a project VPC or another type of private network.mysqlx
(boolean). Allow clients to connect to mysqlx from the public internet for service nodes that are in a project VPC or another type of private network.prometheus
(boolean). Allow clients to connect to prometheus from the public internet for service nodes that are in a project VPC or another type of private network.

```yaml
apiVersion: aiven.io/v1alpha1
kind: OpenSearch
metadata:
  name: my-os
spec:
  authSecretRef:
    name: aiven-token
    key: token

  connInfoSecretTarget:
    name: os-secret
    prefix: MY_SECRET_PREFIX_
    annotations:
      foo: bar
    labels:
      baz: egg

  project: my-aiven-project
  cloudName: google-europe-west1
  plan: startup-4
  disk_space: 80GiB

  maintenanceWindowDow: friday
  maintenanceWindowTime: "23:00:00"
```

OpenSearch is the Schema for the opensearches API.
Required
apiVersion
(string). Value aiven.io/v1alpha1
.kind
(string). Value OpenSearch
.metadata
(object). Data that identifies the object, including a name
string and optional namespace
.spec
(object). OpenSearchSpec defines the desired state of OpenSearch. See below for nested schema.Appears on OpenSearch
.
OpenSearchSpec defines the desired state of OpenSearch.
Required
plan
(string, MaxLength: 128). Subscription plan.project
(string, Immutable, MaxLength: 63, Format: ^[a-zA-Z0-9_-]*$
). Target project.Optional
authSecretRef
(object). Authentication reference to Aiven token in a secret. See below for nested schema.cloudName
(string, MaxLength: 256). Cloud the service runs in.connInfoSecretTarget
(object). Information regarding secret creation. Exposed keys: OPENSEARCH_HOST
, OPENSEARCH_PORT
, OPENSEARCH_USER
, OPENSEARCH_PASSWORD
. See below for nested schema.disk_space
(string, Format: ^[1-9][0-9]*(GiB|G)*
). The disk space of the service, possible values depend on the service type, the cloud provider and the project. Reducing will result in the service re-balancing.maintenanceWindowDow
(string, Enum: monday
, tuesday
, wednesday
, thursday
, friday
, saturday
, sunday
). Day of week when maintenance operations should be performed. One of monday, tuesday, wednesday, etc.maintenanceWindowTime
(string, MaxLength: 8). Time of day when maintenance operations should be performed. UTC time in HH:mm:ss format.projectVPCRef
(object). ProjectVPCRef reference to ProjectVPC resource to use its ID as ProjectVPCID automatically. See below for nested schema.projectVpcId
(string, MaxLength: 36). Identifier of the VPC the service should be in, if any.serviceIntegrations
(array of objects, Immutable, MaxItems: 1). Service integrations to specify when creating a service. Not applied after initial service creation. See below for nested schema.tags
(object, AdditionalProperties: string). Tags are key-value pairs that allow you to categorize services.terminationProtection
(boolean). Prevent service from being deleted. It is recommended to have this enabled for all services.userConfig
(object). OpenSearch specific user configuration options. See below for nested schema.Appears on spec
.
Authentication reference to Aiven token in a secret.
Required
key
(string, MinLength: 1). name
(string, MinLength: 1). Appears on spec
.
Information regarding secret creation. Exposed keys: OPENSEARCH_HOST
, OPENSEARCH_PORT
, OPENSEARCH_USER
, OPENSEARCH_PASSWORD
.
Required
name
(string). Name of the secret resource to be created. By default, is equal to the resource name.Optional
annotations
(object, AdditionalProperties: string). Annotations added to the secret.labels
(object, AdditionalProperties: string). Labels added to the secret.prefix
(string). Prefix for the secret's keys. Added "as is" without any transformations. By default, is equal to the kind name in uppercase + underscore, e.g. KAFKA_
, REDIS_
, etc.Appears on spec
.
ProjectVPCRef reference to ProjectVPC resource to use its ID as ProjectVPCID automatically.
Required
name
(string, MinLength: 1). Optional
namespace
(string, MinLength: 1). Appears on spec
.
Service integrations to specify when creating a service. Not applied after initial service creation.
Required
integrationType
(string, Enum: read_replica
). sourceServiceName
(string, MinLength: 1, MaxLength: 64). Appears on spec
.
OpenSearch specific user configuration options.
Optional
additional_backup_regions
(array of strings, MaxItems: 1). Additional Cloud Regions for Backup Replication.custom_domain
(string, MaxLength: 255). Serve the web frontend using a custom CNAME pointing to the Aiven DNS name.disable_replication_factor_adjustment
(boolean). DEPRECATED: Disable automatic replication factor adjustment for multi-node services. By default, Aiven ensures all indexes are replicated at least to two nodes. Note: Due to potential data loss in case of losing a service node, this setting can no longer be activated.index_patterns
(array of objects, MaxItems: 512). Index patterns. See below for nested schema.index_template
(object). Template settings for all new indexes. See below for nested schema.ip_filter
(array of objects, MaxItems: 1024). Allow incoming connections from CIDR address block, e.g. 10.20.0.0/16
. See below for nested schema.keep_index_refresh_interval
(boolean). Aiven automation resets index.refresh_interval to default value for every index to be sure that indices are always visible to search. If this doesn't fit your case, you can disable it by setting this flag to true.max_index_count
(integer, Minimum: 0). DEPRECATED: use index_patterns instead.openid
(object). OpenSearch OpenID Connect Configuration. See below for nested schema.opensearch
(object). OpenSearch settings. See below for nested schema.opensearch_dashboards
(object). OpenSearch Dashboards settings. See below for nested schema.opensearch_version
(string, Enum: 1
, 2
). OpenSearch major version.private_access
(object). Allow access to selected service ports from private networks. See below for nested schema.privatelink_access
(object). Allow access to selected service components through Privatelink. See below for nested schema.project_to_fork_from
(string, Immutable, MaxLength: 63). Name of another project to fork a service from. This has effect only when a new service is being created.public_access
(object). Allow access to selected service ports from the public Internet. See below for nested schema.recovery_basebackup_name
(string, Pattern: ^[a-zA-Z0-9-_:.]+$
, MaxLength: 128). Name of the basebackup to restore in forked service.saml
(object). OpenSearch SAML configuration. See below for nested schema.service_log
(boolean). Store logs for the service so that they are available in the HTTP API and console.service_to_fork_from
(string, Immutable, MaxLength: 64). Name of another service to fork from. This has effect only when a new service is being created.static_ips
(boolean). Use static public IP addresses.Appears on spec.userConfig
.
Index patterns.
Required
max_index_count
(integer, Minimum: 0). Maximum number of indexes to keep.pattern
(string, Pattern: ^[A-Za-z0-9-_.*?]+$
, MaxLength: 1024). fnmatch pattern.Optional
sorting_algorithm
(string, Enum: alphabetical
, creation_date
). Deletion sorting algorithm.
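For example, index patterns might be declared under spec.userConfig as in this sketch; the pattern and count are placeholders:

```yaml
userConfig:
  index_patterns:
    - pattern: "logs-*"
      max_index_count: 7
      sorting_algorithm: creation_date
```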
Appears on spec.userConfig.

Template settings for all new indexes.
Optional
mapping_nested_objects_limit
(integer, Minimum: 0, Maximum: 100000). The maximum number of nested JSON objects that a single document can contain across all nested types. This limit helps to prevent out of memory errors when a document contains too many nested objects. Default is 10000.number_of_replicas
(integer, Minimum: 0, Maximum: 29). The number of replicas each primary shard has.number_of_shards
(integer, Minimum: 1, Maximum: 1024). The number of primary shards that an index should have.Appears on spec.userConfig
.
Allow incoming connections from CIDR address block, e.g. 10.20.0.0/16
.
Required
network
(string, MaxLength: 43). CIDR address block.Optional
description
(string, MaxLength: 1024). Description for IP filter list entry.Appears on spec.userConfig
.
OpenSearch OpenID Connect Configuration.
Required
client_id
(string, MinLength: 1, MaxLength: 1024). The ID of the OpenID Connect client configured in your IdP. Required.client_secret
(string, MinLength: 1, MaxLength: 1024). The client secret of the OpenID Connect client configured in your IdP. Required.connect_url
(string, MaxLength: 2048). The URL of your IdP where the Security plugin can find the OpenID Connect metadata/configuration settings.Optional
enabled
(boolean). Enables or disables OpenID Connect authentication for OpenSearch. When enabled, users can authenticate using OpenID Connect with an Identity Provider.header
(string, MinLength: 1, MaxLength: 1024). HTTP header name of the JWT token. Optional. Default is Authorization.jwt_header
(string, MinLength: 1, MaxLength: 1024). The HTTP header that stores the token. Typically the Authorization header with the Bearer schema: Authorization: Bearer <token>. Optional. Default is Authorization.jwt_url_parameter
(string, MinLength: 1, MaxLength: 1024). If the token is not transmitted in the HTTP header, but as an URL parameter, define the name of the parameter here. Optional.refresh_rate_limit_count
(integer, Minimum: 10). The maximum number of unknown key IDs in the time frame. Default is 10. Optional.refresh_rate_limit_time_window_ms
(integer, Minimum: 10000). The time frame to use when checking the maximum number of unknown key IDs, in milliseconds. Optional. Default is 10000 (10 seconds).roles_key
(string, MinLength: 1, MaxLength: 1024). The key in the JSON payload that stores the user's roles. The value of this key must be a comma-separated list of roles. Required only if you want to use roles in the JWT.scope
(string, MinLength: 1, MaxLength: 1024). The scope of the identity token issued by the IdP. Optional. Default is openid profile email address phone.subject_key
(string, MinLength: 1, MaxLength: 1024). The key in the JSON payload that stores the user's name. If not defined, the subject registered claim is used. Most IdP providers use the preferred_username claim. Optional.
.
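As a sketch, these settings nest under spec.userConfig; the openid key name is assumed from the Aiven user config schema, and every value below is a placeholder, not verified against any IdP:

# fragment of an OpenSearch spec
userConfig:
  openid:
    enabled: true
    client_id: my-opensearch-client     # placeholder from your IdP
    client_secret: change-me            # placeholder; store securely
    connect_url: https://idp.example.com/.well-known/openid-configuration
    scope: openid profile email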
OpenSearch settings.
Optional
action_auto_create_index_enabled (boolean). Explicitly allow or block automatic creation of indices. Defaults to true.
action_destructive_requires_name (boolean). Require explicit index names when deleting.
auth_failure_listeners (object). Opensearch Security Plugin Settings. See below for nested schema.
cluster_max_shards_per_node (integer, Minimum: 100, Maximum: 10000). Controls the number of shards allowed in the cluster per data node.
cluster_routing_allocation_node_concurrent_recoveries (integer, Minimum: 2, Maximum: 16). How many concurrent incoming/outgoing shard recoveries (normally replicas) are allowed to happen on a node. Defaults to 2.
email_sender_name (string, Pattern: ^[a-zA-Z0-9-_]+$, MaxLength: 40). Sender name placeholder to be used in Opensearch Dashboards and Opensearch keystore.
email_sender_password (string, Pattern: ^[^\x00-\x1F]+$, MaxLength: 1024). Sender password for Opensearch alerts to authenticate with SMTP server.
email_sender_username (string, Pattern: ^[^\x00-\x1F]+$, MaxLength: 320). Sender username for Opensearch alerts.
http_max_content_length (integer, Minimum: 1, Maximum: 2147483647). Maximum content length for HTTP requests to the OpenSearch HTTP API, in bytes.
http_max_header_size (integer, Minimum: 1024, Maximum: 262144). The max size of allowed headers, in bytes.
http_max_initial_line_length (integer, Minimum: 1024, Maximum: 65536). The max length of an HTTP URL, in bytes.
indices_fielddata_cache_size (integer, Minimum: 3, Maximum: 100). Relative amount. Maximum amount of heap memory used for field data cache. This is an expert setting; decreasing the value too much will increase overhead of loading field data; too much memory used for field data cache will decrease amount of heap available for other operations.
indices_memory_index_buffer_size (integer, Minimum: 3, Maximum: 40). Percentage value. Default is 10%. Total amount of heap used for indexing buffer, before writing segments to disk. This is an expert setting. Too low a value will slow down indexing; too high a value will increase indexing performance but can cause performance issues for queries.
indices_memory_max_index_buffer_size (integer, Minimum: 3, Maximum: 2048). Absolute value. Default is unbound. Doesn't work without indices.memory.index_buffer_size. Maximum amount of heap used for the indexing buffer; an absolute hard upper limit for indices.memory.index_buffer_size.
indices_memory_min_index_buffer_size (integer, Minimum: 3, Maximum: 2048). Absolute value. Default is 48mb. Doesn't work without indices.memory.index_buffer_size. Minimum amount of heap used for the indexing buffer; an absolute hard lower limit for indices.memory.index_buffer_size.
indices_queries_cache_size (integer, Minimum: 3, Maximum: 40). Percentage value. Default is 10%. Maximum amount of heap used for query cache. This is an expert setting. Too low a value will decrease query performance and increase performance for other operations; too high a value will cause issues with other OpenSearch functionality.
indices_query_bool_max_clause_count (integer, Minimum: 64, Maximum: 4096). Maximum number of clauses Lucene BooleanQuery can have. The default value (1024) is relatively high, and increasing it may cause performance issues. Investigate other approaches first before increasing this value.
indices_recovery_max_bytes_per_sec (integer, Minimum: 40, Maximum: 400). Limits total inbound and outbound recovery traffic for each node. Applies to both peer recoveries as well as snapshot recoveries (i.e., restores from a snapshot). Defaults to 40mb.
indices_recovery_max_concurrent_file_chunks (integer, Minimum: 2, Maximum: 5). Number of file chunks sent in parallel for each recovery. Defaults to 2.
ism_enabled (boolean). Specifies whether ISM is enabled or not.
ism_history_enabled (boolean). Specifies whether audit history is enabled or not. The logs from ISM are automatically indexed to a logs document.
ism_history_max_age (integer, Minimum: 1, Maximum: 2147483647). The maximum age before rolling over the audit history index, in hours.
ism_history_max_docs (integer, Minimum: 1). The maximum number of documents before rolling over the audit history index.
ism_history_rollover_check_period (integer, Minimum: 1, Maximum: 2147483647). The time between rollover checks for the audit history index, in hours.
ism_history_rollover_retention_period (integer, Minimum: 1, Maximum: 2147483647). How long audit history indices are kept, in days.
override_main_response_version (boolean). Compatibility mode sets OpenSearch to report its version as 7.10 so clients continue to work. Default is false.
reindex_remote_whitelist (array of strings, MaxItems: 32). Whitelisted addresses for reindexing. Changing this value will cause all OpenSearch instances to restart.
script_max_compilations_rate (string, MaxLength: 1024). Script compilation circuit breaker limits the number of inline script compilations within a period of time. Default is use-context.
search_max_buckets (integer, Minimum: 1, Maximum: 1000000). Maximum number of aggregation buckets allowed in a single response. OpenSearch default value is used when this is not defined.
thread_pool_analyze_queue_size (integer, Minimum: 10, Maximum: 2000). Size for the thread pool queue. See documentation for exact details.
thread_pool_analyze_size (integer, Minimum: 1, Maximum: 128). Size for the thread pool. See documentation for exact details. Note that this may have a maximum value depending on CPU count; the value is automatically lowered if set higher than the maximum.
thread_pool_force_merge_size (integer, Minimum: 1, Maximum: 128). Size for the thread pool. See documentation for exact details. Note that this may have a maximum value depending on CPU count; the value is automatically lowered if set higher than the maximum.
thread_pool_get_queue_size (integer, Minimum: 10, Maximum: 2000). Size for the thread pool queue. See documentation for exact details.
thread_pool_get_size (integer, Minimum: 1, Maximum: 128). Size for the thread pool. See documentation for exact details. Note that this may have a maximum value depending on CPU count; the value is automatically lowered if set higher than the maximum.
thread_pool_search_queue_size (integer, Minimum: 10, Maximum: 2000). Size for the thread pool queue. See documentation for exact details.
thread_pool_search_size (integer, Minimum: 1, Maximum: 128). Size for the thread pool. See documentation for exact details. Note that this may have a maximum value depending on CPU count; the value is automatically lowered if set higher than the maximum.
thread_pool_search_throttled_queue_size (integer, Minimum: 10, Maximum: 2000). Size for the thread pool queue. See documentation for exact details.
thread_pool_search_throttled_size (integer, Minimum: 1, Maximum: 128). Size for the thread pool. See documentation for exact details. Note that this may have a maximum value depending on CPU count; the value is automatically lowered if set higher than the maximum.
thread_pool_write_queue_size (integer, Minimum: 10, Maximum: 2000). Size for the thread pool queue. See documentation for exact details.
thread_pool_write_size (integer, Minimum: 1, Maximum: 128). Size for the thread pool. See documentation for exact details. Note that this may have a maximum value depending on CPU count; the value is automatically lowered if set higher than the maximum.
Appears on spec.userConfig.opensearch.
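For example, a few of these settings combine under spec.userConfig.opensearch like this; the values are illustrative, chosen within the documented ranges:

# fragment of an OpenSearch spec
userConfig:
  opensearch:
    action_auto_create_index_enabled: false   # block automatic index creation
    http_max_content_length: 104857600        # bytes; within 1..2147483647
    search_max_buckets: 20000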
Opensearch Security Plugin Settings.
Optional
internal_authentication_backend_limiting (object). See below for nested schema.
ip_rate_limiting (object). IP address rate limiting settings. See below for nested schema.
Appears on spec.userConfig.opensearch.auth_failure_listeners.
Optional
allowed_tries (integer, Minimum: 0, Maximum: 2147483647). The number of login attempts allowed before login is blocked.
authentication_backend (string, Enum: internal, MaxLength: 1024). internal_authentication_backend_limiting.authentication_backend.
block_expiry_seconds (integer, Minimum: 0, Maximum: 2147483647). The duration of time that login remains blocked after a failed login.
max_blocked_clients (integer, Minimum: 0, Maximum: 2147483647). internal_authentication_backend_limiting.max_blocked_clients.
max_tracked_clients (integer, Minimum: 0, Maximum: 2147483647). The maximum number of tracked IP addresses that have failed login.
time_window_seconds (integer, Minimum: 0, Maximum: 2147483647). The window of time in which the value for allowed_tries is enforced.
type (string, Enum: username, MaxLength: 1024). internal_authentication_backend_limiting.type.
Appears on spec.userConfig.opensearch.auth_failure_listeners.
IP address rate limiting settings.
Optional
allowed_tries (integer, Minimum: 1, Maximum: 2147483647). The number of login attempts allowed before login is blocked.
block_expiry_seconds (integer, Minimum: 1, Maximum: 36000). The duration of time that login remains blocked after a failed login.
max_blocked_clients (integer, Minimum: 0, Maximum: 2147483647). The maximum number of blocked IP addresses.
max_tracked_clients (integer, Minimum: 0, Maximum: 2147483647). The maximum number of tracked IP addresses that have failed login.
time_window_seconds (integer, Minimum: 1, Maximum: 36000). The window of time in which the value for allowed_tries is enforced.
type (string, Enum: ip, MaxLength: 1024). The type of rate limiting.
Appears on spec.userConfig.
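Putting the two listener types together, a sketch of spec.userConfig.opensearch.auth_failure_listeners; all numbers are illustrative and sit within the documented bounds:

# fragment of an OpenSearch spec
userConfig:
  opensearch:
    auth_failure_listeners:
      ip_rate_limiting:
        type: ip
        allowed_tries: 5              # block an IP after 5 failed logins
        time_window_seconds: 3600     # counted within one hour
        block_expiry_seconds: 600     # unblock after 10 minutes
        max_blocked_clients: 500
        max_tracked_clients: 1000
      internal_authentication_backend_limiting:
        type: username
        authentication_backend: internal
        allowed_tries: 10
        time_window_seconds: 3600
        block_expiry_seconds: 600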
OpenSearch Dashboards settings.
Optional
enabled (boolean). Enable or disable OpenSearch Dashboards.
max_old_space_size (integer, Minimum: 64, Maximum: 2048). Limits the maximum amount of memory (in MiB) the OpenSearch Dashboards process can use. This sets the max_old_space_size option of the nodejs running the OpenSearch Dashboards. Note: the memory reserved by OpenSearch Dashboards is not available for OpenSearch.
opensearch_request_timeout (integer, Minimum: 5000, Maximum: 120000). Timeout in milliseconds for requests made by OpenSearch Dashboards towards OpenSearch.
Appears on spec.userConfig.
Allow access to selected service ports from private networks.
Optional
opensearch (boolean). Allow clients to connect to opensearch with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.
opensearch_dashboards (boolean). Allow clients to connect to opensearch_dashboards with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.
prometheus (boolean). Allow clients to connect to prometheus with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.
Appears on spec.userConfig.
Allow access to selected service components through Privatelink.
Optional
opensearch (boolean). Enable opensearch.
opensearch_dashboards (boolean). Enable opensearch_dashboards.
prometheus (boolean). Enable prometheus.
Appears on spec.userConfig.
Allow access to selected service ports from the public Internet.
Optional
opensearch (boolean). Allow clients to connect to opensearch from the public internet for service nodes that are in a project VPC or another type of private network.
opensearch_dashboards (boolean). Allow clients to connect to opensearch_dashboards from the public internet for service nodes that are in a project VPC or another type of private network.
prometheus (boolean). Allow clients to connect to prometheus from the public internet for service nodes that are in a project VPC or another type of private network.
Appears on spec.userConfig.
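The three access blocks share the same shape; a sketch combining them for an OpenSearch service, with illustrative boolean choices:

# fragment of an OpenSearch spec
userConfig:
  private_access:
    opensearch: true                  # private DNS name, selected networks only
    prometheus: true
  privatelink_access:
    opensearch_dashboards: true
  public_access:
    opensearch: false                 # keep the public endpoint closed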
OpenSearch SAML configuration.
Required
enabled (boolean). Enables or disables SAML-based authentication for OpenSearch. When enabled, users can authenticate using SAML with an Identity Provider.
idp_entity_id (string, MinLength: 1, MaxLength: 1024). The unique identifier for the Identity Provider (IdP) entity that is used for SAML authentication. This value is typically provided by the IdP.
idp_metadata_url (string, MinLength: 1, MaxLength: 2048). The URL of the SAML metadata for the Identity Provider (IdP). This is used to configure SAML-based authentication with the IdP.
sp_entity_id (string, MinLength: 1, MaxLength: 1024). The unique identifier for the Service Provider (SP) entity that is used for SAML authentication. This value is typically provided by the SP.
Optional
idp_pemtrustedcas_content (string, MaxLength: 16384). This parameter specifies the PEM-encoded root certificate authority (CA) content for the SAML identity provider (IdP) server verification. The root CA content is used to verify the SSL/TLS certificate presented by the server.
roles_key (string, MinLength: 1, MaxLength: 256). Optional. Specifies the attribute in the SAML response where role information is stored, if available. Role attributes are not required for SAML authentication, but can be included in SAML assertions by most Identity Providers (IdPs) to determine user access levels or permissions.
subject_key (string, MinLength: 1, MaxLength: 256). Optional. Specifies the attribute in the SAML response where the subject identifier is stored. If not configured, the NameID attribute is used by default.

apiVersion: aiven.io/v1alpha1
kind: PostgreSQL
metadata:
  name: my-postgresql
spec:
  authSecretRef:
    name: aiven-token
    key: token

  connInfoSecretTarget:
    name: postgresql-secret
    prefix: MY_SECRET_PREFIX_
    annotations:
      foo: bar
    labels:
      baz: egg

  project: aiven-project-name
  cloudName: google-europe-west1
  plan: startup-4

  maintenanceWindowDow: sunday
  maintenanceWindowTime: 11:00:00

  userConfig:
    pg_version: "15"

PostgreSQL is the Schema for the postgresql API.
Required
apiVersion (string). Value aiven.io/v1alpha1.
kind (string). Value PostgreSQL.
metadata (object). Data that identifies the object, including a name string and optional namespace.
spec (object). PostgreSQLSpec defines the desired state of postgres instance. See below for nested schema.
Appears on PostgreSQL.
PostgreSQLSpec defines the desired state of postgres instance.
Required
plan (string, MaxLength: 128). Subscription plan.
project (string, Immutable, MaxLength: 63, Format: ^[a-zA-Z0-9_-]*$). Target project.
Optional
authSecretRef (object). Authentication reference to Aiven token in a secret. See below for nested schema.
cloudName (string, MaxLength: 256). Cloud the service runs in.
connInfoSecretTarget (object). Information regarding secret creation. Exposed keys: POSTGRESQL_HOST, POSTGRESQL_PORT, POSTGRESQL_DATABASE, POSTGRESQL_USER, POSTGRESQL_PASSWORD, POSTGRESQL_SSLMODE, POSTGRESQL_DATABASE_URI. See below for nested schema.
disk_space (string, Format: ^[1-9][0-9]*(GiB|G)*). The disk space of the service; possible values depend on the service type, the cloud provider and the project. Reducing will result in the service re-balancing.
maintenanceWindowDow (string, Enum: monday, tuesday, wednesday, thursday, friday, saturday, sunday). Day of week when maintenance operations should be performed. One of: monday, tuesday, wednesday, etc.
maintenanceWindowTime (string, MaxLength: 8). Time of day when maintenance operations should be performed. UTC time in HH:mm:ss format.
projectVPCRef (object). ProjectVPCRef reference to ProjectVPC resource to use its ID as ProjectVPCID automatically. See below for nested schema.
projectVpcId (string, MaxLength: 36). Identifier of the VPC the service should be in, if any.
serviceIntegrations (array of objects, Immutable, MaxItems: 1). Service integrations to specify when creating a service. Not applied after initial service creation. See below for nested schema.
tags (object, AdditionalProperties: string). Tags are key-value pairs that allow you to categorize services.
terminationProtection (boolean). Prevent service from being deleted. It is recommended to have this enabled for all services.
userConfig (object). PostgreSQL specific user configuration options. See below for nested schema.
Appears on spec.
Authentication reference to Aiven token in a secret.
Required
key (string, MinLength: 1).
name (string, MinLength: 1).
Appears on spec.
Information regarding secret creation. Exposed keys: POSTGRESQL_HOST, POSTGRESQL_PORT, POSTGRESQL_DATABASE, POSTGRESQL_USER, POSTGRESQL_PASSWORD, POSTGRESQL_SSLMODE, POSTGRESQL_DATABASE_URI.
Required
name (string). Name of the secret resource to be created. By default, is equal to the resource name.
Optional
annotations (object, AdditionalProperties: string). Annotations added to the secret.
labels (object, AdditionalProperties: string). Labels added to the secret.
prefix (string). Prefix for the secret's keys. Added "as is" without any transformations. By default, is equal to the kind name in uppercase + underscore, e.g. KAFKA_, REDIS_, etc.
Appears on spec.
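Since the exposed keys land in a regular Kubernetes Secret, a workload can consume them directly. A sketch, assuming the postgresql-secret name and MY_SECRET_PREFIX_ prefix from the usage example above; the Pod name and image are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: psql-client                   # placeholder
spec:
  containers:
    - name: app
      image: postgres:15              # placeholder image with psql available
      envFrom:
        - secretRef:
            name: postgresql-secret   # created via connInfoSecretTarget
      # with the prefix applied, keys arrive as MY_SECRET_PREFIX_POSTGRESQL_*
      command: ["sh", "-c", "psql $MY_SECRET_PREFIX_POSTGRESQL_DATABASE_URI -c 'select 1'"]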
ProjectVPCRef reference to ProjectVPC resource to use its ID as ProjectVPCID automatically.
Required
name (string, MinLength: 1).
Optional
namespace (string, MinLength: 1).
Appears on spec.
Service integrations to specify when creating a service. Not applied after initial service creation.
Required
integrationType (string, Enum: read_replica).
sourceServiceName (string, MinLength: 1, MaxLength: 64).
Appears on spec.
PostgreSQL specific user configuration options.
Optional
additional_backup_regions (array of strings, MaxItems: 1). Additional Cloud Regions for Backup Replication.
admin_password (string, Immutable, Pattern: ^[a-zA-Z0-9-_]+$, MinLength: 8, MaxLength: 256). Custom password for admin user. Defaults to random string. This must be set only when a new service is being created.
admin_username (string, Immutable, Pattern: ^[_A-Za-z0-9][-._A-Za-z0-9]{0,63}$, MaxLength: 64). Custom username for admin user. This must be set only when a new service is being created.
backup_hour (integer, Minimum: 0, Maximum: 23). The hour of day (in UTC) when backup for the service is started. New backup is only started if previous backup has already completed.
backup_minute (integer, Minimum: 0, Maximum: 59). The minute of an hour when backup for the service is started. New backup is only started if previous backup has already completed.
enable_ipv6 (boolean). Register AAAA DNS records for the service, and allow IPv6 packets to service ports.
ip_filter (array of objects, MaxItems: 1024). Allow incoming connections from CIDR address block, e.g. 10.20.0.0/16. See below for nested schema.
migration (object). Migrate data from existing server. See below for nested schema.
pg (object). postgresql.conf configuration values. See below for nested schema.
pg_qualstats (object). System-wide settings for the pg_qualstats extension. See below for nested schema.
pg_read_replica (boolean). Should the service which is being forked be a read replica (deprecated, use read_replica service integration instead).
pg_service_to_fork_from (string, Immutable, MaxLength: 64). Name of the PG Service from which to fork (deprecated, use service_to_fork_from). This has effect only when a new service is being created.
pg_stat_monitor_enable (boolean). Enable the pg_stat_monitor extension. Enabling this extension will cause the cluster to be restarted. When this extension is enabled, pg_stat_statements results for utility commands are unreliable.
pg_version (string, Enum: 11, 12, 13, 14, 15). PostgreSQL major version.
pgbouncer (object). PGBouncer connection pooling settings. See below for nested schema.
pglookout (object). System-wide settings for pglookout. See below for nested schema.
private_access (object). Allow access to selected service ports from private networks. See below for nested schema.
privatelink_access (object). Allow access to selected service components through Privatelink. See below for nested schema.
project_to_fork_from (string, Immutable, MaxLength: 63). Name of another project to fork a service from. This has effect only when a new service is being created.
public_access (object). Allow access to selected service ports from the public Internet. See below for nested schema.
recovery_target_time (string, Immutable, MaxLength: 32). Recovery target time when forking a service. This has effect only when a new service is being created.
service_log (boolean). Store logs for the service so that they are available in the HTTP API and console.
service_to_fork_from (string, Immutable, MaxLength: 64). Name of another service to fork from. This has effect only when a new service is being created.
shared_buffers_percentage (number, Minimum: 20, Maximum: 60). Percentage of total RAM that the database server uses for shared memory buffers. Valid range is 20-60 (float), which corresponds to 20% - 60%. This setting adjusts the shared_buffers configuration value.
static_ips (boolean). Use static public IP addresses.
synchronous_replication (string, Enum: quorum, off). Synchronous replication type. Note that the service plan also needs to support synchronous replication.
timescaledb (object). System-wide settings for the timescaledb extension. See below for nested schema.
variant (string, Enum: aiven, timescale). Variant of the PostgreSQL service, may affect the features that are exposed by default.
work_mem (integer, Minimum: 1, Maximum: 1024). Sets the maximum amount of memory to be used by a query operation (such as a sort or hash table) before writing to temporary disk files, in MB. Default is 1MB + 0.075% of total RAM (up to 32MB).
Appears on spec.userConfig.
Allow incoming connections from CIDR address block, e.g. 10.20.0.0/16.
Required
network (string, MaxLength: 43). CIDR address block.
Optional
description (string, MaxLength: 1024). Description for IP filter list entry.
Appears on spec.userConfig.
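For instance, a single ip_filter entry under spec.userConfig might look like this; the CIDR block and description are illustrative:

# fragment of a PostgreSQL spec
userConfig:
  ip_filter:
    - network: 10.20.0.0/16           # office network, illustrative
      description: internal traffic only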
Migrate data from existing server.
Required
host (string, MaxLength: 255). Hostname or IP address of the server to migrate data from.
port (integer, Minimum: 1, Maximum: 65535). Port number of the server to migrate data from.
Optional
dbname (string, MaxLength: 63). Database name for bootstrapping the initial connection.
ignore_dbs (string, MaxLength: 2048). Comma-separated list of databases to ignore during migration (supported by MySQL and PostgreSQL only at the moment).
method (string, Enum: dump, replication). The migration method to be used (currently supported only by Redis, Dragonfly, MySQL and PostgreSQL service types).
password (string, MaxLength: 256). Password for authentication with the server to migrate data from.
ssl (boolean). The server to migrate data from is secured with SSL.
username (string, MaxLength: 256). User name for authentication with the server to migrate data from.
Appears on spec.userConfig.
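A sketch of a migration block under spec.userConfig; the source host, credentials, and method are placeholders:

# fragment of a PostgreSQL spec
userConfig:
  migration:
    host: old-db.example.com          # placeholder source server
    port: 5432
    dbname: defaultdb
    username: migration_user          # placeholder
    password: change-me               # placeholder; handle with care
    ssl: true
    method: dump                      # or replication
    ignore_dbs: template0,template1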
postgresql.conf configuration values.
Optional
autovacuum_analyze_scale_factor (number, Minimum: 0, Maximum: 1). Specifies a fraction of the table size to add to autovacuum_analyze_threshold when deciding whether to trigger an ANALYZE. The default is 0.2 (20% of table size).
autovacuum_analyze_threshold (integer, Minimum: 0, Maximum: 2147483647). Specifies the minimum number of inserted, updated or deleted tuples needed to trigger an ANALYZE in any one table. The default is 50 tuples.
autovacuum_freeze_max_age (integer, Minimum: 200000000, Maximum: 1500000000). Specifies the maximum age (in transactions) that a table's pg_class.relfrozenxid field can attain before a VACUUM operation is forced to prevent transaction ID wraparound within the table. Note that the system will launch autovacuum processes to prevent wraparound even when autovacuum is otherwise disabled. This parameter will cause the server to be restarted.
autovacuum_max_workers (integer, Minimum: 1, Maximum: 20). Specifies the maximum number of autovacuum processes (other than the autovacuum launcher) that may be running at any one time. The default is three. This parameter can only be set at server start.
autovacuum_naptime (integer, Minimum: 1, Maximum: 86400). Specifies the minimum delay between autovacuum runs on any given database. The delay is measured in seconds, and the default is one minute.
autovacuum_vacuum_cost_delay (integer, Minimum: -1, Maximum: 100). Specifies the cost delay value that will be used in automatic VACUUM operations. If -1 is specified, the regular vacuum_cost_delay value will be used. The default value is 20 milliseconds.
autovacuum_vacuum_cost_limit (integer, Minimum: -1, Maximum: 10000). Specifies the cost limit value that will be used in automatic VACUUM operations. If -1 is specified (which is the default), the regular vacuum_cost_limit value will be used.
autovacuum_vacuum_scale_factor (number, Minimum: 0, Maximum: 1). Specifies a fraction of the table size to add to autovacuum_vacuum_threshold when deciding whether to trigger a VACUUM. The default is 0.2 (20% of table size).
autovacuum_vacuum_threshold (integer, Minimum: 0, Maximum: 2147483647). Specifies the minimum number of updated or deleted tuples needed to trigger a VACUUM in any one table. The default is 50 tuples.
bgwriter_delay (integer, Minimum: 10, Maximum: 10000). Specifies the delay between activity rounds for the background writer in milliseconds. Default is 200.
bgwriter_flush_after (integer, Minimum: 0, Maximum: 2048). Whenever more than bgwriter_flush_after bytes have been written by the background writer, attempt to force the OS to issue these writes to the underlying storage. Specified in kilobytes, default is 512. Setting of 0 disables forced writeback.
bgwriter_lru_maxpages (integer, Minimum: 0, Maximum: 1073741823). In each round, no more than this many buffers will be written by the background writer. Setting this to zero disables background writing. Default is 100.
bgwriter_lru_multiplier (number, Minimum: 0, Maximum: 10). The average recent need for new buffers is multiplied by bgwriter_lru_multiplier to arrive at an estimate of the number that will be needed during the next round (up to bgwriter_lru_maxpages). 1.0 represents a "just in time" policy of writing exactly the number of buffers predicted to be needed. Larger values provide some cushion against spikes in demand, while smaller values intentionally leave writes to be done by server processes. The default is 2.0.
deadlock_timeout (integer, Minimum: 500, Maximum: 1800000). This is the amount of time, in milliseconds, to wait on a lock before checking to see if there is a deadlock condition.
default_toast_compression (string, Enum: lz4, pglz). Specifies the default TOAST compression method for values of compressible columns (the default is lz4).
idle_in_transaction_session_timeout (integer, Minimum: 0, Maximum: 604800000). Time out sessions with open transactions after this number of milliseconds.
jit (boolean). Controls system-wide use of Just-in-Time Compilation (JIT).
log_autovacuum_min_duration (integer, Minimum: -1, Maximum: 2147483647). Causes each action executed by autovacuum to be logged if it ran for at least the specified number of milliseconds. Setting this to zero logs all autovacuum actions. Minus-one (the default) disables logging autovacuum actions.
log_error_verbosity (string, Enum: TERSE, DEFAULT, VERBOSE). Controls the amount of detail written in the server log for each message that is logged.
log_line_prefix (string, Enum: 'pid=%p,user=%u,db=%d,app=%a,client=%h ', '%t [%p]: [%l-1] user=%u,db=%d,app=%a,client=%h ', '%m [%p] %q[user=%u,db=%d,app=%a] '). Choose from one of the available log formats. These can support popular log analyzers like pgbadger, pganalyze, etc.
log_min_duration_statement (integer, Minimum: -1, Maximum: 86400000). Log statements that take more than this number of milliseconds to run; -1 disables.
log_temp_files (integer, Minimum: -1, Maximum: 2147483647). Log statements for each temporary file created larger than this number of kilobytes; -1 disables.
max_files_per_process (integer, Minimum: 1000, Maximum: 4096). PostgreSQL maximum number of files that can be open per process.
max_locks_per_transaction (integer, Minimum: 64, Maximum: 6400). PostgreSQL maximum locks per transaction.
max_logical_replication_workers (integer, Minimum: 4, Maximum: 64). PostgreSQL maximum logical replication workers (taken from the pool of max_parallel_workers).
max_parallel_workers (integer, Minimum: 0, Maximum: 96). Sets the maximum number of workers that the system can support for parallel queries.
max_parallel_workers_per_gather (integer, Minimum: 0, Maximum: 96). Sets the maximum number of workers that can be started by a single Gather or Gather Merge node.
max_pred_locks_per_transaction (integer, Minimum: 64, Maximum: 5120). PostgreSQL maximum predicate locks per transaction.
max_prepared_transactions (integer, Minimum: 0, Maximum: 10000). PostgreSQL maximum prepared transactions.
max_replication_slots (integer, Minimum: 8, Maximum: 64). PostgreSQL maximum replication slots.
max_slot_wal_keep_size (integer, Minimum: -1, Maximum: 2147483647). PostgreSQL maximum WAL size (MB) reserved for replication slots. Default is -1 (unlimited). wal_keep_size minimum WAL size setting takes precedence over this.
max_stack_depth (integer, Minimum: 2097152, Maximum: 6291456). Maximum depth of the stack in bytes.
max_standby_archive_delay (integer, Minimum: 1, Maximum: 43200000). Max standby archive delay in milliseconds.
max_standby_streaming_delay (integer, Minimum: 1, Maximum: 43200000). Max standby streaming delay in milliseconds.
max_wal_senders (integer, Minimum: 20, Maximum: 64). PostgreSQL maximum WAL senders.
max_worker_processes (integer, Minimum: 8, Maximum: 96). Sets the maximum number of background processes that the system can support.
pg_partman_bgw.interval (integer, Minimum: 3600, Maximum: 604800). Sets the time interval to run pg_partman's scheduled tasks.
pg_partman_bgw.role (string, Pattern: ^[_A-Za-z0-9][-._A-Za-z0-9]{0,63}$, MaxLength: 64). Controls which role to use for pg_partman's scheduled background tasks.
pg_stat_monitor.pgsm_enable_query_plan (boolean). Enables or disables query plan monitoring.
pg_stat_monitor.pgsm_max_buckets (integer, Minimum: 1, Maximum: 10). Sets the maximum number of buckets.
pg_stat_statements.track (string, Enum: all, top, none). Controls which statements are counted. Specify top to track top-level statements (those issued directly by clients), all to also track nested statements (such as statements invoked within functions), or none to disable statement statistics collection. The default value is top.
temp_file_limit (integer, Minimum: -1, Maximum: 2147483647). PostgreSQL temporary file limit in KiB; -1 for unlimited.
timezone (string, MaxLength: 64). PostgreSQL service timezone.
track_activity_query_size (integer, Minimum: 1024, Maximum: 10240). Specifies the number of bytes reserved to track the currently executing command for each active session.
track_commit_timestamp (string, Enum: off, on). Record commit time of transactions.
track_functions (string, Enum: all, pl, none). Enables tracking of function call counts and time used.
track_io_timing (string, Enum: off, on). Enables timing of database I/O calls. This parameter is off by default, because it will repeatedly query the operating system for the current time, which may cause significant overhead on some platforms.
wal_sender_timeout (integer). Terminate replication connections that are inactive for longer than this amount of time, in milliseconds. Setting this value to zero disables the timeout.
wal_writer_delay (integer, Minimum: 10, Maximum: 200). WAL flush interval in milliseconds. Note that setting this value to lower than the default 200ms may negatively impact performance.
Appears on spec.userConfig.
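As an illustration, a few postgresql.conf values tuned under spec.userConfig.pg; the numbers are illustrative and within the documented ranges:

# fragment of a PostgreSQL spec
userConfig:
  pg:
    log_min_duration_statement: 1000              # log statements slower than 1s
    idle_in_transaction_session_timeout: 600000   # 10 minutes, in ms
    jit: true
    pg_stat_statements.track: top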
System-wide settings for the pg_qualstats extension.
Optional
enabled (boolean). Enable / Disable pg_qualstats.
min_err_estimate_num (integer, Minimum: 0). Error estimation num threshold to save quals.
min_err_estimate_ratio (integer, Minimum: 0). Error estimation ratio threshold to save quals.
track_constants (boolean). Enable / Disable pg_qualstats constants tracking.
track_pg_catalog (boolean). Track quals on system catalogs too.
Appears on spec.userConfig.
PGBouncer connection pooling settings.
Optional
autodb_idle_timeout (integer, Minimum: 0, Maximum: 86400). If the automatically created database pools have been unused this many seconds, they are freed. If 0 then timeout is disabled. [seconds].
autodb_max_db_connections (integer, Minimum: 0, Maximum: 2147483647). Do not allow more than this many server connections per database (regardless of user). Setting it to 0 means unlimited.
autodb_pool_mode (string, Enum: session, transaction, statement). PGBouncer pool mode.
autodb_pool_size (integer, Minimum: 0, Maximum: 10000). If non-zero, automatically create a pool of that size per user when a pool doesn't exist.
ignore_startup_parameters (array of strings, MaxItems: 32). List of parameters to ignore when given in startup packet.
min_pool_size (integer, Minimum: 0, Maximum: 10000). Add more server connections to pool if below this number. Improves behavior when usual load comes suddenly back after period of total inactivity. The value is effectively capped at the pool size.
server_idle_timeout (integer, Minimum: 0, Maximum: 86400). If a server connection has been idle more than this many seconds it will be dropped. If 0 then timeout is disabled. [seconds].
server_lifetime (integer, Minimum: 60, Maximum: 86400). The pooler will close an unused server connection that has been connected longer than this. [seconds].
server_reset_query_always (boolean). Run server_reset_query (DISCARD ALL) in all pooling modes.
Appears on spec.userConfig.
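A sketch of PGBouncer pooling tuned under spec.userConfig.pgbouncer; the values are illustrative:

# fragment of a PostgreSQL spec
userConfig:
  pgbouncer:
    autodb_pool_mode: transaction
    autodb_pool_size: 50              # per-user pool created on demand
    autodb_idle_timeout: 3600         # free unused pools after an hour
    server_reset_query_always: false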
System-wide settings for pglookout.
Required
max_failover_replication_time_lag (integer, Minimum: 10). Number of seconds of master unavailability before triggering database failover to standby.
Appears on spec.userConfig.
Allow access to selected service ports from private networks.
Optional
pg (boolean). Allow clients to connect to pg with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.
pgbouncer (boolean). Allow clients to connect to pgbouncer with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.
prometheus (boolean). Allow clients to connect to prometheus with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.
Appears on spec.userConfig.
Allow access to selected service components through Privatelink.
Optional
pg (boolean). Enable pg.
pgbouncer (boolean). Enable pgbouncer.
prometheus (boolean). Enable prometheus.
Appears on spec.userConfig.
Allow access to selected service ports from the public Internet.
Optional
pg (boolean). Allow clients to connect to pg from the public internet for service nodes that are in a project VPC or another type of private network.
pgbouncer (boolean). Allow clients to connect to pgbouncer from the public internet for service nodes that are in a project VPC or another type of private network.
prometheus (boolean). Allow clients to connect to prometheus from the public internet for service nodes that are in a project VPC or another type of private network.
Appears on spec.userConfig.
System-wide settings for the timescaledb extension.
Required
max_background_workers (integer, Minimum: 1, Maximum: 4096). The number of background workers for timescaledb operations. You should configure this setting to the sum of your number of databases and the total number of concurrent background workers you want running at any given point in time.

apiVersion: aiven.io/v1alpha1
kind: Project
metadata:
  name: my-project
spec:
  authSecretRef:
    name: aiven-token
    key: token

  connInfoSecretTarget:
    name: project-secret
    prefix: MY_SECRET_PREFIX_
    annotations:
      foo: bar
    labels:
      baz: egg

  tags:
    env: prod

  billingAddress: NYC
  cloud: aws-eu-west-1

Project is the Schema for the projects API.
Required
apiVersion (string). Value aiven.io/v1alpha1.
kind (string). Value Project.
metadata (object). Data that identifies the object, including a name string and optional namespace.
spec (object). ProjectSpec defines the desired state of Project. See below for nested schema.
Appears on Project.
ProjectSpec defines the desired state of Project.
Optional
accountId (string, MaxLength: 32). Account ID.
authSecretRef (object). Authentication reference to Aiven token in a secret. See below for nested schema.
billingAddress (string, MaxLength: 1000). Billing name and address of the project.
billingCurrency (string, Enum: AUD, CAD, CHF, DKK, EUR, GBP, NOK, SEK, USD). Billing currency.
billingEmails (array of strings, MaxItems: 10). Billing contact emails of the project.
billingExtraText (string, MaxLength: 1000). Extra text to be included in all project invoices, e.g. purchase order or cost center number.
billingGroupId (string, MinLength: 36, MaxLength: 36). BillingGroup ID.
cardId (string, MaxLength: 64). Credit card ID; the ID may be either the last 4 digits of the card or the actual ID.
cloud (string, MaxLength: 256). Target cloud, example: aws-eu-central-1.
connInfoSecretTarget (object). Information regarding secret creation. Exposed keys: PROJECT_CA_CERT. See below for nested schema.
copyFromProject (string, MaxLength: 63). Project name from which to copy settings to the new project.
countryCode (string, MinLength: 2, MaxLength: 2). Billing country code of the project.
tags (object, AdditionalProperties: string). Tags are key-value pairs that allow you to categorize projects.
technicalEmails (array of strings, MaxItems: 10). Technical contact emails of the project.
Appears on spec.
Authentication reference to Aiven token in a secret.
Required
key (string, MinLength: 1).
name (string, MinLength: 1).
Appears on spec.
Information regarding secret creation. Exposed keys: PROJECT_CA_CERT.
Required
name (string). Name of the secret resource to be created. By default, is equal to the resource name.
Optional
annotations (object, AdditionalProperties: string). Annotations added to the secret.
labels (object, AdditionalProperties: string). Labels added to the secret.
prefix (string). Prefix for the secret's keys. Added "as is" without any transformations. By default, is equal to the kind name in uppercase + underscore, e.g. KAFKA_, REDIS_, etc.

apiVersion: aiven.io/v1alpha1
kind: ProjectVPC
metadata:
  name: my-project-vpc
spec:
  authSecretRef:
    name: aiven-token
    key: token

  project: aiven-project-name
  cloudName: google-europe-west1
  networkCidr: 10.0.0.0/24

ProjectVPC is the Schema for the projectvpcs API.
Required
apiVersion (string). Value aiven.io/v1alpha1.
kind (string). Value ProjectVPC.
metadata (object). Data that identifies the object, including a name string and optional namespace.
spec (object). ProjectVPCSpec defines the desired state of ProjectVPC. See below for nested schema.
Appears on ProjectVPC.
ProjectVPCSpec defines the desired state of ProjectVPC.
Required
cloudName (string, Immutable, MaxLength: 256). Cloud the VPC is in.
networkCidr (string, Immutable, MaxLength: 36). Network address range used by the VPC, like 192.168.0.0/24.
project (string, Immutable, MaxLength: 63, Format: ^[a-zA-Z0-9_-]*$). The project the VPC belongs to.
Optional
authSecretRef (object). Authentication reference to Aiven token in a secret. See below for nested schema.
Appears on spec.
Authentication reference to Aiven token in a secret.
Required
key (string, MinLength: 1).
name (string, MinLength: 1).

apiVersion: aiven.io/v1alpha1
kind: Redis
metadata:
  name: k8s-redis
spec:
  authSecretRef:
    name: aiven-token
    key: token

  connInfoSecretTarget:
    name: redis-token
    prefix: MY_SECRET_PREFIX_
    annotations:
      foo: bar
    labels:
      baz: egg

  project: my-aiven-project
  cloudName: google-europe-west1
  plan: startup-4

  maintenanceWindowDow: friday
  maintenanceWindowTime: 23:00:00

  userConfig:
    redis_maxmemory_policy: "allkeys-random"

Redis is the Schema for the redis API.
Required
apiVersion (string). Value aiven.io/v1alpha1.
kind (string). Value Redis.
metadata (object). Data that identifies the object, including a name string and optional namespace.
spec (object). RedisSpec defines the desired state of Redis. See below for nested schema.
Appears on Redis.
RedisSpec defines the desired state of Redis.
Required
plan (string, MaxLength: 128). Subscription plan.
project (string, Immutable, MaxLength: 63, Format: ^[a-zA-Z0-9_-]*$). Target project.
Optional
authSecretRef (object). Authentication reference to Aiven token in a secret. See below for nested schema.
cloudName (string, MaxLength: 256). Cloud the service runs in.
connInfoSecretTarget (object). Information regarding secret creation. Exposed keys: REDIS_HOST, REDIS_PORT, REDIS_USER, REDIS_PASSWORD. See below for nested schema.
disk_space (string, Format: ^[1-9][0-9]*(GiB|G)*). The disk space of the service; possible values depend on the service type, the cloud provider and the project. Reducing will result in the service re-balancing.
maintenanceWindowDow (string, Enum: monday, tuesday, wednesday, thursday, friday, saturday, sunday). Day of week when maintenance operations should be performed. One of: monday, tuesday, wednesday, etc.
maintenanceWindowTime (string, MaxLength: 8). Time of day when maintenance operations should be performed. UTC time in HH:mm:ss format.
projectVPCRef (object). ProjectVPCRef reference to ProjectVPC resource to use its ID as ProjectVPCID automatically. See below for nested schema.
projectVpcId (string, MaxLength: 36). Identifier of the VPC the service should be in, if any.
serviceIntegrations (array of objects, Immutable, MaxItems: 1). Service integrations to specify when creating a service. Not applied after initial service creation. See below for nested schema.
tags (object, AdditionalProperties: string). Tags are key-value pairs that allow you to categorize services.
terminationProtection (boolean). Prevent service from being deleted. It is recommended to have this enabled for all services.
userConfig (object). Redis specific user configuration options. See below for nested schema.
Appears on spec.
Authentication reference to Aiven token in a secret.
Required
key (string, MinLength: 1).
name (string, MinLength: 1).
Appears on spec.
Information regarding secret creation. Exposed keys: REDIS_HOST, REDIS_PORT, REDIS_USER, REDIS_PASSWORD.
Required
name (string). Name of the secret resource to be created. By default, is equal to the resource name.
Optional
annotations (object, AdditionalProperties: string). Annotations added to the secret.
labels (object, AdditionalProperties: string). Labels added to the secret.
prefix (string). Prefix for the secret's keys. Added "as is" without any transformations. By default, is equal to the kind name in uppercase + underscore, e.g. KAFKA_, REDIS_, etc.
Appears on spec.
ProjectVPCRef reference to ProjectVPC resource to use its ID as ProjectVPCID automatically.
Required
name (string, MinLength: 1).
Optional
namespace (string, MinLength: 1).
Appears on spec.
Service integrations to specify when creating a service. Not applied after initial service creation.
Required
integrationType (string, Enum: read_replica).
sourceServiceName (string, MinLength: 1, MaxLength: 64).
Appears on spec.
Redis specific user configuration options.
Optional
additional_backup_regions (array of strings, MaxItems: 1). Additional Cloud Regions for Backup Replication.
ip_filter (array of objects, MaxItems: 1024). Allow incoming connections from CIDR address block, e.g. 10.20.0.0/16. See below for nested schema.
migration (object). Migrate data from existing server. See below for nested schema.
private_access (object). Allow access to selected service ports from private networks. See below for nested schema.
privatelink_access (object). Allow access to selected service components through Privatelink. See below for nested schema.
project_to_fork_from (string, Immutable, MaxLength: 63). Name of another project to fork a service from. This has effect only when a new service is being created.
public_access (object). Allow access to selected service ports from the public Internet. See below for nested schema.
recovery_basebackup_name (string, Pattern: ^[a-zA-Z0-9-_:.]+$, MaxLength: 128). Name of the basebackup to restore in forked service.
redis_acl_channels_default (string, Enum: allchannels, resetchannels). Determines default pub/sub channels' ACL for new users if ACL is not supplied. When this option is not defined, all_channels is assumed to keep backward compatibility. This option doesn't affect Redis configuration acl-pubsub-default.
redis_io_threads (integer, Minimum: 1, Maximum: 32). Set Redis IO thread count. Changing this will cause a restart of the Redis service.
redis_lfu_decay_time (integer, Minimum: 1, Maximum: 120). LFU maxmemory-policy counter decay time in minutes.
redis_lfu_log_factor (integer, Minimum: 0, Maximum: 100). Counter logarithm factor for volatile-lfu and allkeys-lfu maxmemory-policies.
redis_maxmemory_policy (string, Enum: noeviction, allkeys-lru, volatile-lru, allkeys-random, volatile-random, volatile-ttl, volatile-lfu, allkeys-lfu). Redis maxmemory-policy.
redis_notify_keyspace_events (string, Pattern: ^[KEg\$lshzxeA]*$, MaxLength: 32). Set notify-keyspace-events option.
redis_number_of_databases (integer, Minimum: 1, Maximum: 128). Set number of Redis databases. Changing this will cause a restart of the Redis service.
redis_persistence (string, Enum: off, rdb). When persistence is rdb, Redis does RDB dumps every 10 minutes if any key is changed. RDB dumps are also done according to the backup schedule for backup purposes. When persistence is off, no RDB dumps or backups are done, so data can be lost at any moment if the service is restarted for any reason or powered off. The service also can't be forked.
redis_pubsub_client_output_buffer_limit (integer, Minimum: 32, Maximum: 512). Set output buffer limit for pub / sub clients in MB. The value is the hard limit, the soft limit is 1/4 of the hard limit. When setting the limit, be mindful of the available memory in the selected service plan.
redis_ssl (boolean). Require SSL to access Redis.
redis_timeout (integer, Minimum: 0, Maximum: 31536000). Redis idle connection timeout in seconds.
service_log (boolean). Store logs for the service so that they are available in the HTTP API and console.
service_to_fork_from (string, Immutable, MaxLength: 64). Name of another service to fork from. This has effect only when a new service is being created.
static_ips (boolean). Use static public IP addresses.
Appears on spec.userConfig.
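For example, persistence and eviction behavior combine under spec.userConfig of a Redis resource; the values are illustrative:

# fragment of a Redis spec
userConfig:
  redis_persistence: rdb              # RDB dumps every 10 minutes plus scheduled backups
  redis_maxmemory_policy: allkeys-lru
  redis_timeout: 300                  # drop idle connections after 5 minutes
  redis_ssl: true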
Allow incoming connections from CIDR address block, e.g. 10.20.0.0/16.
Required
network (string, MaxLength: 43). CIDR address block.
Optional
description (string, MaxLength: 1024). Description for IP filter list entry.
Appears on spec.userConfig.
Migrate data from existing server.
Required
host (string, MaxLength: 255). Hostname or IP address of the server to migrate data from.
port (integer, Minimum: 1, Maximum: 65535). Port number of the server to migrate data from.
Optional
dbname (string, MaxLength: 63). Database name for bootstrapping the initial connection.
ignore_dbs (string, MaxLength: 2048). Comma-separated list of databases to ignore during migration (supported by MySQL and PostgreSQL only at the moment).
method (string, Enum: dump, replication). The migration method to be used (currently supported only by Redis, Dragonfly, MySQL and PostgreSQL service types).
password (string, MaxLength: 256). Password for authentication with the server to migrate data from.
ssl (boolean). The server to migrate data from is secured with SSL.
username (string, MaxLength: 256). User name for authentication with the server to migrate data from.
Appears on spec.userConfig.
Allow access to selected service ports from private networks.
Optional
prometheus (boolean). Allow clients to connect to prometheus with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.
redis (boolean). Allow clients to connect to redis with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.
Appears on spec.userConfig.
Allow access to selected service components through Privatelink.
Optional
prometheus (boolean). Enable prometheus.
redis (boolean). Enable redis.
Appears on spec.userConfig.
Allow access to selected service ports from the public Internet.
Optional
prometheus (boolean). Allow clients to connect to prometheus from the public internet for service nodes that are in a project VPC or another type of private network.
redis (boolean). Allow clients to connect to redis from the public internet for service nodes that are in a project VPC or another type of private network.

apiVersion: aiven.io/v1alpha1
kind: ServiceIntegration
metadata:
  name: my-service-integration
spec:
  authSecretRef:
    name: aiven-token
    key: token

  project: aiven-project-name

  integrationType: kafka_logs
  sourceServiceName: my-source-service-name
  destinationServiceName: my-destination-service-name

  kafkaLogs:
    kafka_topic: my-kafka-topic

ServiceIntegration is the Schema for the serviceintegrations API.
Required
apiVersion (string). Value aiven.io/v1alpha1.
kind (string). Value ServiceIntegration.
metadata (object). Data that identifies the object, including a name string and optional namespace.
spec (object). ServiceIntegrationSpec defines the desired state of ServiceIntegration. See below for nested schema.
Appears on ServiceIntegration.
ServiceIntegrationSpec defines the desired state of ServiceIntegration.
Required
integrationType (string, Enum: alertmanager, autoscaler, caching, cassandra_cross_service_cluster, clickhouse_kafka, clickhouse_postgresql, dashboard, datadog, datasource, external_aws_cloudwatch_logs, external_aws_cloudwatch_metrics, external_elasticsearch_logs, external_google_cloud_logging, external_opensearch_logs, flink, flink_external_kafka, internal_connectivity, jolokia, kafka_connect, kafka_logs, kafka_mirrormaker, logs, m3aggregator, m3coordinator, metrics, opensearch_cross_cluster_replication, opensearch_cross_cluster_search, prometheus, read_replica, rsyslog, schema_registry_proxy, stresstester, thanosquery, thanosstore, vmalert, Immutable). Type of the service integration accepted by Aiven API. Some values may not be supported by the operator.
project (string, Immutable, MaxLength: 63, Format: ^[a-zA-Z0-9_-]*$). Project the integration belongs to.
Optional
authSecretRef (object). Authentication reference to Aiven token in a secret. See below for nested schema.
clickhouseKafka (object). Clickhouse Kafka configuration values. See below for nested schema.
clickhousePostgresql (object). Clickhouse PostgreSQL configuration values. See below for nested schema.
datadog (object). Datadog specific user configuration options. See below for nested schema.
destinationEndpointId (string, Immutable, MaxLength: 36). Destination endpoint for the integration (if any).
destinationProjectName (string, Immutable, MaxLength: 63). Destination project for the integration (if any).
destinationServiceName (string, Immutable, MaxLength: 64). Destination service for the integration (if any).
externalAWSCloudwatchMetrics (object). External AWS CloudWatch Metrics integration Logs configuration values. See below for nested schema.
kafkaConnect (object). Kafka Connect service configuration values. See below for nested schema.
kafkaLogs (object). Kafka logs configuration values. See below for nested schema.
kafkaMirrormaker (object). Kafka MirrorMaker configuration values. See below for nested schema.
logs (object). Logs configuration values. See below for nested schema.
metrics (object). Metrics configuration values. See below for nested schema.
sourceEndpointID (string, Immutable, MaxLength: 36). Source endpoint for the integration (if any).
sourceProjectName (string, Immutable, MaxLength: 63). Source project for the integration (if any).
sourceServiceName (string, Immutable, MaxLength: 64). Source service for the integration (if any).
Appears on spec.
Authentication reference to Aiven token in a secret.
Required
key (string, MinLength: 1).
name (string, MinLength: 1).
Appears on spec.
Clickhouse Kafka configuration values.
Required
tables (array of objects, MaxItems: 100). Tables to create. See below for nested schema.
Appears on spec.clickhouseKafka.
Tables to create.
Required
columns (array of objects, MaxItems: 100). Table columns. See below for nested schema.
data_format (string, Enum: Avro, CSV, JSONAsString, JSONCompactEachRow, JSONCompactStringsEachRow, JSONEachRow, JSONStringsEachRow, MsgPack, TSKV, TSV, TabSeparated, RawBLOB, AvroConfluent). Message data format.
group_name (string, MinLength: 1, MaxLength: 249). Kafka consumers group.
name (string, MinLength: 1, MaxLength: 40). Name of the table.
topics (array of objects, MaxItems: 100). Kafka topics. See below for nested schema.
Optional
auto_offset_reset (string, Enum: smallest, earliest, beginning, largest, latest, end). Action to take when there is no initial offset in offset store or the desired offset is out of range.
date_time_input_format (string, Enum: basic, best_effort, best_effort_us). Method to read DateTime from text input formats.
handle_error_mode (string, Enum: default, stream). How to handle errors for Kafka engine.
max_block_size (integer, Minimum: 0, Maximum: 1000000000). Number of rows collected by poll(s) for flushing data from Kafka.
max_rows_per_message (integer, Minimum: 1, Maximum: 1000000000). The maximum number of rows produced in one kafka message for row-based formats.
num_consumers (integer, Minimum: 1, Maximum: 10). The number of consumers per table per replica.
poll_max_batch_size (integer, Minimum: 0, Maximum: 1000000000). Maximum amount of messages to be polled in a single Kafka poll.
skip_broken_messages (integer, Minimum: 0, Maximum: 1000000000). Skip at least this number of broken messages from Kafka topic per block.
Appears on spec.clickhouseKafka.tables.
Table columns.
Required
name (string, MinLength: 1, MaxLength: 40). Column name.
type (string, MinLength: 1, MaxLength: 1000). Column type.
Appears on spec.clickhouseKafka.tables.
Kafka topics.
Required
name (string, MinLength: 1, MaxLength: 249). Name of the topic.
Appears on spec.
Clickhouse PostgreSQL configuration values.
Required
databases (array of objects, MaxItems: 10). Databases to expose. See below for nested schema.
Appears on spec.clickhousePostgresql.
Databases to expose.
Optional
database (string, MinLength: 1, MaxLength: 63). PostgreSQL database to expose.
schema (string, MinLength: 1, MaxLength: 63). PostgreSQL schema to expose.
Appears on spec.
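A sketch of a clickhouse_postgresql integration exposing one database; the service names and the database/schema pair are placeholders:

apiVersion: aiven.io/v1alpha1
kind: ServiceIntegration
metadata:
  name: clickhouse-pg                 # placeholder
spec:
  authSecretRef:
    name: aiven-token
    key: token
  project: aiven-project-name
  integrationType: clickhouse_postgresql
  sourceServiceName: my-postgresql        # placeholder PostgreSQL service
  destinationServiceName: my-clickhouse   # placeholder ClickHouse service
  clickhousePostgresql:
    databases:
      - database: defaultdb
        schema: public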
Datadog specific user configuration options.
Optional
datadog_dbm_enabled (boolean). Enable Datadog Database Monitoring.
datadog_tags (array of objects, MaxItems: 32). Custom tags provided by user. See below for nested schema.
exclude_consumer_groups (array of strings, MaxItems: 1024). List of custom metrics.
exclude_topics (array of strings, MaxItems: 1024). List of topics to exclude.
include_consumer_groups (array of strings, MaxItems: 1024). List of custom metrics.
include_topics (array of strings, MaxItems: 1024). List of topics to include.
kafka_custom_metrics (array of strings, MaxItems: 1024). List of custom metrics.
max_jmx_metrics (integer, Minimum: 10, Maximum: 100000). Maximum number of JMX metrics to send.
opensearch (object). Datadog Opensearch Options. See below for nested schema.
redis (object). Datadog Redis Options. See below for nested schema.
Appears on spec.datadog.
Custom tags provided by user.
Required
tag (string, MinLength: 1, MaxLength: 200). Tag format and usage are described here: https://docs.datadoghq.com/getting_started/tagging. Tags with prefix aiven- are reserved for Aiven.
Optional
comment (string, MaxLength: 1024). Optional tag explanation.
Appears on spec.datadog.
Datadog Opensearch Options.
Optional
index_stats_enabled (boolean). Enable Datadog Opensearch Index Monitoring.
pending_task_stats_enabled (boolean). Enable Datadog Opensearch Pending Task Monitoring.
pshard_stats_enabled (boolean). Enable Datadog Opensearch Primary Shard Monitoring.
Appears on spec.datadog.
Datadog Redis Options.
Required
command_stats_enabled
(boolean). Enable command_stats option in the agent's configuration.
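As a sketch, a Datadog integration using these options could be configured like this (the integration type, source service, endpoint ID field, and tag values are assumptions):
apiVersion: aiven.io/v1alpha1
kind: ServiceIntegration
metadata:
  name: datadog-integration
spec:
  authSecretRef:
    name: aiven-token
    key: token

  project: <your-project-name>

  # hypothetical integration type, service name, and endpoint ID
  integrationType: datadog
  sourceServiceName: kafka-sample
  destinationEndpointId: <your-datadog-endpoint-id>

  datadog:
    datadog_dbm_enabled: false
    max_jmx_metrics: 2000
    datadog_tags:
      - tag: env:staging
        comment: example tag for a staging environment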
Appears on spec.
External AWS CloudWatch Metrics integration Logs configuration values.
Optional
dropped_metrics
(array of objects, MaxItems: 1024). Metrics to not send to AWS CloudWatch (takes precedence over extra_metrics). See below for nested schema.extra_metrics
(array of objects, MaxItems: 1024). Metrics to allow through to AWS CloudWatch (in addition to default metrics). See below for nested schema.Appears on spec.externalAWSCloudwatchMetrics
.
Metrics to not send to AWS CloudWatch (takes precedence over extra_metrics).
Required
field
(string, MaxLength: 1000). Identifier of a value in the metric.metric
(string, MaxLength: 1000). Identifier of the metric.Appears on spec.externalAWSCloudwatchMetrics
.
Metrics to allow through to AWS CloudWatch (in addition to default metrics).
Required
field
(string, MaxLength: 1000). Identifier of a value in the metric.metric
(string, MaxLength: 1000). Identifier of the metric.
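A hedged sketch combining dropped_metrics and extra_metrics might look like this (the integration type, service name, endpoint ID, and metric identifiers are assumptions):
apiVersion: aiven.io/v1alpha1
kind: ServiceIntegration
metadata:
  name: cloudwatch-metrics-integration
spec:
  authSecretRef:
    name: aiven-token
    key: token

  project: <your-project-name>

  # hypothetical integration type, service name, and endpoint ID
  integrationType: external_aws_cloudwatch_metrics
  sourceServiceName: kafka-sample
  destinationEndpointId: <your-cloudwatch-endpoint-id>

  externalAWSCloudwatchMetrics:
    dropped_metrics:
      - field: kafka.consumer_lag   # identifier of a value in the metric
        metric: kafka.consumer_lag  # identifier of the metric
    extra_metrics:
      - field: kafka.log_size
        metric: kafka.log_size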
Appears on spec.
Kafka Connect service configuration values.
Required
kafka_connect
(object). Kafka Connect service configuration values. See below for nested schema.Appears on spec.kafkaConnect
.
Kafka Connect service configuration values.
Optional
config_storage_topic
(string, MaxLength: 249). The name of the topic where connector and task configuration data are stored. This must be the same for all workers with the same group_id.group_id
(string, MaxLength: 249). A unique string that identifies the Connect cluster group this worker belongs to.offset_storage_topic
(string, MaxLength: 249). The name of the topic where connector and task configuration offsets are stored. This must be the same for all workers with the same group_id.status_storage_topic
(string, MaxLength: 249). The name of the topic where connector and task configuration status updates are stored. This must be the same for all workers with the same group_id.
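For illustration, these worker settings could be set on a Kafka Connect integration as follows (the integration type, service names, and topic names are assumptions):
apiVersion: aiven.io/v1alpha1
kind: ServiceIntegration
metadata:
  name: kafka-connect-integration
spec:
  authSecretRef:
    name: aiven-token
    key: token

  project: <your-project-name>

  # hypothetical integration type and service names
  integrationType: kafka_connect
  sourceServiceName: kafka-sample
  destinationServiceName: kafka-connect-sample

  kafkaConnect:
    kafka_connect:
      group_id: connect
      config_storage_topic: __connect_configs
      offset_storage_topic: __connect_offsets
      status_storage_topic: __connect_status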
Appears on spec.
Kafka logs configuration values.
Required
kafka_topic
(string, MinLength: 1, MaxLength: 249). Topic name.Optional
selected_log_fields
(array of strings, MaxItems: 5). The list of logging fields that will be sent to the integration logging service. The MESSAGE and timestamp fields are always sent.Appears on spec
.
Kafka MirrorMaker configuration values.
Optional
cluster_alias
(string, Pattern: ^[a-zA-Z0-9_.-]+$
, MaxLength: 128). The alias under which the Kafka cluster is known to MirrorMaker. Can contain the following symbols: ASCII alphanumerics, .
, _
, and -
.kafka_mirrormaker
(object). Kafka MirrorMaker configuration values. See below for nested schema.Appears on spec.kafkaMirrormaker
.
Kafka MirrorMaker configuration values.
Optional
consumer_fetch_min_bytes
(integer, Minimum: 1, Maximum: 5242880). The minimum amount of data the server should return for a fetch request.producer_batch_size
(integer, Minimum: 0, Maximum: 5242880). The batch size in bytes that the producer will attempt to collect before publishing to the broker.producer_buffer_memory
(integer, Minimum: 5242880, Maximum: 134217728). The amount of bytes the producer can use for buffering data before publishing to the broker.producer_compression_type
(string, Enum: gzip
, snappy
, lz4
, zstd
, none
). Specify the default compression type for producers. This configuration accepts the standard compression codecs (gzip
, snappy
, lz4
, zstd
). It additionally accepts none
which is the default and equivalent to no compression.producer_linger_ms
(integer, Minimum: 0, Maximum: 5000). The linger time (ms) to wait for new data to arrive for publishing.producer_max_request_size
(integer, Minimum: 0, Maximum: 268435456). The maximum request size in bytes.
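A hedged sketch of a MirrorMaker integration using these options (the integration type and service names are assumptions; the numeric values are arbitrary but within the documented bounds):
apiVersion: aiven.io/v1alpha1
kind: ServiceIntegration
metadata:
  name: kafka-mirrormaker-integration
spec:
  authSecretRef:
    name: aiven-token
    key: token

  project: <your-project-name>

  # hypothetical integration type and service names
  integrationType: kafka_mirrormaker
  sourceServiceName: kafka-source
  destinationServiceName: kafka-target

  kafkaMirrormaker:
    cluster_alias: source-cluster
    kafka_mirrormaker:
      consumer_fetch_min_bytes: 1024
      producer_batch_size: 65536
      producer_buffer_memory: 33554432
      producer_compression_type: snappy
      producer_linger_ms: 100
      producer_max_request_size: 1048576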
Appears on spec.
Logs configuration values.
Optional
elasticsearch_index_days_max
(integer, Minimum: 1, Maximum: 10000). Elasticsearch index retention limit.elasticsearch_index_prefix
(string, MinLength: 1, MaxLength: 1024). Elasticsearch index prefix.selected_log_fields
(array of strings, MaxItems: 5). The list of logging fields that will be sent to the integration logging service. The MESSAGE and timestamp fields are always sent.
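As an illustration, a logs integration sketch shipping service logs to an OpenSearch destination (the integration type and service names are assumptions):
apiVersion: aiven.io/v1alpha1
kind: ServiceIntegration
metadata:
  name: logs-integration
spec:
  authSecretRef:
    name: aiven-token
    key: token

  project: <your-project-name>

  # hypothetical integration type and service names
  integrationType: logs
  sourceServiceName: kafka-sample
  destinationServiceName: os-sample

  logs:
    elasticsearch_index_days_max: 3
    elasticsearch_index_prefix: logs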
Appears on spec.
Metrics configuration values.
Optional
database
(string, Pattern: ^[_A-Za-z0-9][-_A-Za-z0-9]{0,39}$
, MaxLength: 40). Name of the database where to store metric datapoints. Only affects PostgreSQL destinations. Defaults to metrics
. Note that this must be the same for all metrics integrations that write data to the same PostgreSQL service.retention_days
(integer, Minimum: 0, Maximum: 10000). Number of days to keep old metrics. Only affects PostgreSQL destinations. Set to 0 for no automatic cleanup. Defaults to 30 days.ro_username
(string, Pattern: ^[_A-Za-z0-9][-._A-Za-z0-9]{0,39}$
, MaxLength: 40). Name of a user that can be used to read metrics. This will be used for Grafana integration (if enabled) to prevent Grafana users from making undesired changes. Only affects PostgreSQL destinations. Defaults to metrics_reader
. Note that this must be the same for all metrics integrations that write data to the same PostgreSQL service.source_mysql
(object). Configuration options for metrics where source service is MySQL. See below for nested schema.username
(string, Pattern: ^[_A-Za-z0-9][-._A-Za-z0-9]{0,39}$
, MaxLength: 40). Name of the user used to write metrics. Only affects PostgreSQL destinations. Defaults to metrics_writer
. Note that this must be the same for all metrics integrations that write data to the same PostgreSQL service.
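A hedged sketch of a metrics integration writing MySQL metrics to a PostgreSQL destination (the integration type and service names are assumptions; the values mirror the documented defaults):
apiVersion: aiven.io/v1alpha1
kind: ServiceIntegration
metadata:
  name: metrics-integration
spec:
  authSecretRef:
    name: aiven-token
    key: token

  project: <your-project-name>

  # hypothetical integration type and service names
  integrationType: metrics
  sourceServiceName: mysql-sample
  destinationServiceName: pg-metrics-store

  metrics:
    database: metrics
    retention_days: 30
    username: metrics_writer
    ro_username: metrics_reader
    source_mysql:
      telegraf:
        gather_process_list: true
        gather_slave_status: true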
Appears on spec.metrics.
Configuration options for metrics where source service is MySQL.
Required
telegraf
(object). Configuration options for Telegraf MySQL input plugin. See below for nested schema.Appears on spec.metrics.source_mysql
.
Configuration options for Telegraf MySQL input plugin.
Optional
gather_event_waits
(boolean). Gather metrics from PERFORMANCE_SCHEMA.EVENT_WAITS.gather_file_events_stats
(boolean). Gather metrics from PERFORMANCE_SCHEMA.FILE_SUMMARY_BY_EVENT_NAME.gather_index_io_waits
(boolean). Gather metrics from PERFORMANCE_SCHEMA.TABLE_IO_WAITS_SUMMARY_BY_INDEX_USAGE.gather_info_schema_auto_inc
(boolean). Gather auto_increment columns and max values from information schema.gather_innodb_metrics
(boolean). Gather metrics from INFORMATION_SCHEMA.INNODB_METRICS.gather_perf_events_statements
(boolean). Gather metrics from PERFORMANCE_SCHEMA.EVENTS_STATEMENTS_SUMMARY_BY_DIGEST.gather_process_list
(boolean). Gather thread state counts from INFORMATION_SCHEMA.PROCESSLIST.gather_slave_status
(boolean). Gather metrics from SHOW SLAVE STATUS command output.gather_table_io_waits
(boolean). Gather metrics from PERFORMANCE_SCHEMA.TABLE_IO_WAITS_SUMMARY_BY_TABLE.gather_table_lock_waits
(boolean). Gather metrics from PERFORMANCE_SCHEMA.TABLE_LOCK_WAITS.gather_table_schema
(boolean). Gather metrics from INFORMATION_SCHEMA.TABLES.perf_events_statements_digest_text_limit
(integer, Minimum: 1, Maximum: 2048). Truncates digest text from perf_events_statements into this many characters.perf_events_statements_limit
(integer, Minimum: 1, Maximum: 4000). Limits metrics from perf_events_statements.perf_events_statements_time_limit
(integer, Minimum: 1, Maximum: 2592000). Only include perf_events_statements whose last seen is less than this many seconds.
apiVersion: aiven.io/v1alpha1
kind: ServiceUser
metadata:
  name: my-service-user
spec:
  authSecretRef:
    name: aiven-token
    key: token

  connInfoSecretTarget:
    name: service-user-secret
    prefix: MY_SECRET_PREFIX_
    annotations:
      foo: bar
    labels:
      baz: egg

  project: aiven-project-name
  serviceName: my-service-name
"},{"location":"api-reference/serviceuser.html#ServiceUser","title":"ServiceUser","text":"ServiceUser is the Schema for the serviceusers API.
Required
apiVersion
(string). Value aiven.io/v1alpha1
.kind
(string). Value ServiceUser
.metadata
(object). Data that identifies the object, including a name
string and optional namespace
.spec
(object). ServiceUserSpec defines the desired state of ServiceUser. See below for nested schema.Appears on ServiceUser
.
ServiceUserSpec defines the desired state of ServiceUser.
Required
project
(string, MaxLength: 63, Format: ^[a-zA-Z0-9_-]*$
). Project to link the user to.serviceName
(string, MaxLength: 63). Service to link the user to.Optional
authSecretRef
(object). Authentication reference to Aiven token in a secret. See below for nested schema.authentication
(string, Enum: caching_sha2_password
, mysql_native_password
). Authentication details.connInfoSecretTarget
(object). Information regarding secret creation. Exposed keys: SERVICEUSER_HOST
, SERVICEUSER_PORT
, SERVICEUSER_USERNAME
, SERVICEUSER_PASSWORD
, SERVICEUSER_CA_CERT
, SERVICEUSER_ACCESS_CERT
, SERVICEUSER_ACCESS_KEY
. See below for nested schema.
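For example, a minimal sketch setting the optional authentication field (the service name is hypothetical; the enum values above suggest a MySQL service):
apiVersion: aiven.io/v1alpha1
kind: ServiceUser
metadata:
  name: mysql-native-user
spec:
  authSecretRef:
    name: aiven-token
    key: token

  connInfoSecretTarget:
    name: mysql-native-user-secret

  project: <your-project-name>
  serviceName: mysql-sample   # assumed MySQL service
  authentication: mysql_native_password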
Appears on spec.
Authentication reference to Aiven token in a secret.
Required
key
(string, MinLength: 1). name
(string, MinLength: 1). Appears on spec
.
Information regarding secret creation. Exposed keys: SERVICEUSER_HOST
, SERVICEUSER_PORT
, SERVICEUSER_USERNAME
, SERVICEUSER_PASSWORD
, SERVICEUSER_CA_CERT
, SERVICEUSER_ACCESS_CERT
, SERVICEUSER_ACCESS_KEY
.
Required
name
(string). Name of the secret resource to be created. By default, it is equal to the resource name.Optional
annotations
(object, AdditionalProperties: string). Annotations added to the secret.labels
(object, AdditionalProperties: string). Labels added to the secret.prefix
(string). Prefix for the secret's keys. Added "as is" without any transformations. By default, it is equal to the kind name in uppercase + underscore, e.g. KAFKA_
, REDIS_
, etc.
The Aiven Operator for Kubernetes project accepts contributions via GitHub pull requests. This document outlines the process to help get your contribution accepted.
Please see also the Aiven Operator for Kubernetes Developer Guide.
"},{"location":"contributing/index.html#support-channels","title":"Support Channels","text":"This project offers support through GitHub issues and can be filed here. Moreover, GitHub issues are used as the primary method for tracking anything to do with the Aiven Operator for Kubernetes project.
"},{"location":"contributing/index.html#pull-request-process","title":"Pull Request Process","text":"In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation.
"},{"location":"contributing/index.html#our-standards","title":"Our Standards","text":"Examples of behavior that contributes to creating a positive environment include:
Examples of unacceptable behavior by participants include:
This project adheres to the Conventional Commits specification. Please, make sure that your commit messages follow that specification.
"},{"location":"contributing/index.html#our-responsibilities","title":"Our Responsibilities","text":"Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.
"},{"location":"contributing/index.html#scope","title":"Scope","text":"This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.
"},{"location":"contributing/index.html#enforcement","title":"Enforcement","text":"Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at opensource@aiven.io. All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.
"},{"location":"contributing/developer-guide.html","title":"Developer guide","text":""},{"location":"contributing/developer-guide.html#getting-started","title":"Getting Started","text":"You must have a working Go environment and then clone the repository:
git clone git@github.com:aiven/aiven-operator.git
cd aiven-operator
"},{"location":"contributing/developer-guide.html#resource-generation","title":"Resource generation","text":"Please see this page for more information.
"},{"location":"contributing/developer-guide.html#building","title":"Building","text":"The project uses the make
build system.
Building the operator binary:
make build\n
"},{"location":"contributing/developer-guide.html#testing","title":"Testing","text":"As of now, we only support integration tests who interact directly with Aiven. To run the tests, you'll need an Aiven account and an Aiven authentication code.
"},{"location":"contributing/developer-guide.html#prerequisites","title":"Prerequisites","text":"Please have installed first:
-w0
flag, some tests may not work properly kind create cluster --image kindest/node:v1.24.0 --wait 5m\n
The following commands must be executed with these environment variables (keep them in secret!):
AIVEN_TOKEN
— your authentication token
AIVEN_PROJECT_NAME
— your Aiven project name to run services in
Setup everything:
make e2e-setup-kind
Note
Additionally, webhooks can be disabled if there are any problems with them.
WEBHOOKS_ENABLED=false make e2e-setup-kind
Run e2e tests (creates real services in AIVEN_PROJECT_NAME
):
make test-e2e-preinstalled
When you're done, just drop the cluster:
kind delete cluster
"},{"location":"contributing/developer-guide.html#documentation","title":"Documentation","text":"The documentation is written in markdown and generated by mkdocs and mkdocs-material.
To run the documentation live preview:
make serve-docs\n
And open the http://localhost:8000/aiven-operator/
page in your web browser.
The documentation API Reference section is generated automatically from the source code during the documentation deployment. To generate it locally, run the following command:
make docs
"},{"location":"contributing/resource-generation.html","title":"Resource generation","text":"Aiven Kubernetes Operator generates service configs code (also known as user configs) and documentation from public service types schema.
"},{"location":"contributing/resource-generation.html#flow-overview","title":"Flow overview","text":"When a new schema is issued on the API, a cron job fetches it, parses, patches, and saves in a shared library \u2014 go-api-schemas.
When the library is updated, the GitHub dependabot creates PRs to the dependent repositories, like Aiven Kubernetes Operator and Aiven Terraform Provider.
Then the make generate
command is called by a GitHub action, and the PR is ready for review.
flowchart TB
    API(Aiven API) <-.->|polls schema updates| Schema([go-api-schemas])
    Bot(dependabot) <-.->|polls updates| Schema
    Bot-->|pull request|UpdateOP[/"✨ $ make generate ✨"/]
    UpdateOP-->|review| OP([operator repository])
"},{"location":"contributing/resource-generation.html#make-generate","title":"make generate","text":"The command runs several generators in a certain sequence. First, the user config generator is called. Then controller-gen cli. Then API reference docs generator and charts generator.
Here is how it works in detail:
The docs generator looks for an example file in ./<api-reference-docs>/example/
, and if it finds one, it validates it against the CRD. Each CRD has an OpenAPI v3 schema as a part of it. This is also used by Kubernetes itself to validate user input.
, if it finds one, it validates that with the CRD. Each CRD has an OpenAPI v3 schema as a part of it. This is also used by Kubernetes itself to validate user input.flowchart TB\n Make[/$ make generate/]-->Generator(userconfig generator<br> creates/updates structs using updated spec)\n Generator-->|go: KafkaUserConfig struct| K8S(controller-gen<br> adds k8s methods to structs)\n K8S-->|go files| CRD(controller-gen<br> creates CRDs out of structs)\n CRD-->|CRD: aiven.io_kafkas.yaml| Docs(docs generator)\n subgraph API reference generation\n Docs-->|aiven.io_kafkas.yaml|Reference(creates reference<br> out of CRD)\n Docs-->|examples/kafka.yaml,<br> aiven.io_kafkas.yaml|Examples(validates example<br> using CRD)\n Examples--> Markdown(creates docs out of CRDs, adds examples)\n Reference-->Markdown(kafka.md)\n end\n CRD-->|yaml files|Charts(charts generator<br> updates helm charts<br> and the changelog)\n Charts-->ToRelease(\"Ready to release \ud83c\udf89\")\n Markdown-->ToRelease
"},{"location":"contributing/resource-generation.html#charts-version-bump","title":"Charts version bump","text":"By default, charts generator keeps the current helm chart's version, because it doesn't know semver. You need it to do manually.
To do so run the following command with the version of your choice:
make version=v1.0.0 charts\n
"},{"location":"installation/helm.html","title":"Installing with Helm (recommended)","text":""},{"location":"installation/helm.html#installing","title":"Installing","text":"The Aiven Operator for Kubernetes can be installed via Helm.
Before you start, make sure you have the prerequisites.
First add the Aiven Helm repository:
helm repo add aiven https://aiven.github.io/aiven-charts && helm repo update
Installing Custom Resource Definitions
helm install aiven-operator-crds aiven/aiven-operator-crds
Verify the installation:
kubectl api-resources --api-group=aiven.io
The output is similar to the following:
NAME              SHORTNAMES   APIVERSION          NAMESPACED   KIND
connectionpools                aiven.io/v1alpha1   true         ConnectionPool
databases                      aiven.io/v1alpha1   true         Database
... < several omitted lines >
Installing the Operator
helm install aiven-operator aiven/aiven-operator
Note
Installation will fail if webhooks are enabled and the CRDs for the cert-manager are not installed.
Verify the installation:
helm status aiven-operator
The output is similar to the following:
NAME: aiven-operator
LAST DEPLOYED: Fri Sep 10 15:23:26 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
It is also possible to install the operator without webhooks enabled:
helm install aiven-operator aiven/aiven-operator --set webhooks.enabled=false
"},{"location":"installation/helm.html#configuration-options","title":"Configuration Options","text":"Please refer to the values.yaml of the chart.
"},{"location":"installation/helm.html#installing-without-full-cluster-administrator-access","title":"Installing without full cluster administrator access","text":"There can be some scenarios where the individual installing the Helm chart does not have the ability to provision cluster-wide resources (e.g. ClusterRoles/ClusterRoleBindings). In this scenario, you can have a cluster administrator manually install the ClusterRole and ClusterRoleBinding the operator requires prior to installing the Helm chart specifying false
for the clusterRole.create
attribute.
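For example, a minimal override file for this scenario could look like the following sketch (the file name is hypothetical; verify the exact key in the chart's values.yaml):
# values-no-cluster-role.yaml (hypothetical file name)
clusterRole:
  create: false
You could then pass it to Helm with helm install aiven-operator aiven/aiven-operator -f values-no-cluster-role.yaml, or use --set clusterRole.create=false directly.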
Important
Please see this page for more information.
Find out the name of your deployment:
helm list
The output has the name of each deployment similar to the following:
NAME                 NAMESPACE  REVISION  UPDATED                                   STATUS    CHART                       APP VERSION
aiven-operator       default    1         2021-09-09 10:56:14.623700249 +0200 CEST  deployed  aiven-operator-v0.1.0       v0.1.0
aiven-operator-crds  default    1         2021-09-09 10:56:05.736411868 +0200 CEST  deployed  aiven-operator-crds-v0.1.0  v0.1.0
Remove the CRDs:
helm uninstall aiven-operator-crds
The confirmation message is similar to the following:
release \"aiven-operator-crds\" uninstalled\n
Remove the operator:
helm uninstall aiven-operator
The confirmation message is similar to the following:
release \"aiven-operator\" uninstalled\n
"},{"location":"installation/kubectl.html","title":"Installing with kubectl","text":""},{"location":"installation/kubectl.html#installing","title":"Installing","text":"Before you start, make sure you have the prerequisites.
All Aiven Operator for Kubernetes components can be installed from one YAML file that is uploaded for every release:
kubectl apply -f https://github.com/aiven/aiven-operator/releases/latest/download/deployment.yaml
By default the Deployment is installed into the aiven-operator-system
namespace.
Assuming you installed version vX.Y.Z
of the operator it can be uninstalled via
kubectl delete -f https://github.com/aiven/aiven-operator/releases/download/vX.Y.Z/deployment.yaml
"},{"location":"installation/prerequisites.html","title":"Prerequisites","text":"The Aiven Operator for Kubernetes supports all major Kubernetes distributions, both locally and in the cloud.
Make sure you have the following:
The Aiven Operator for Kubernetes uses cert-manager
to configure the service reference of our webhooks.
Please follow the installation instructions on their website.
Note
This is not required in the Helm installation if you choose to disable webhooks, but that is not recommended outside of playground use. The Aiven Operator for Kubernetes uses webhooks for setting defaults and enforcing invariants that are expected by the Aiven API and will lead to errors if ignored. In the future, webhooks will also be used for conversion and supporting multiple CRD versions.
"},{"location":"installation/uninstalling.html","title":"Uninstalling","text":"Danger
Uninstalling the Aiven Operator for Kubernetes can remove the resources created in Aiven, possibly resulting in data loss.
Depending on your installation, please follow one of:
Aiven resources need to have an accompanying secret that contains the token used to authorize the manipulation of that resource. If that token has expired, you will not be able to delete the custom resource, and deletion will hang until the situation is resolved. The recommended approach to deal with that situation is to patch a valid token into the secret again so that proper cleanup of Aiven resources can take place.
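As a sketch, one way to restore a valid token is to re-apply the Secret with a fresh token (the Secret name and key match the examples in this documentation; the token value is a placeholder):
apiVersion: v1
kind: Secret
metadata:
  name: aiven-token   # the secret name referenced by your resources
stringData:
  token: <your-new-aiven-token>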
"},{"location":"installation/uninstalling.html#hanging-deletions","title":"Hanging deletions","text":"To protect the secrets that the operator is using from deletion, it adds the finalizer finalizers.aiven.io/needed-to-delete-services
to the secret. This solves a race condition that happens when deleting a namespace, where there is a possibility of the secret getting deleted before the resource that uses it. When the controller is deleted it may not cleanup the finalizers from all secrets. If there is a secret with this finalizer blocking deletion of a namespace, for now please do
kubectl patch secret <offending-secret> -p '{"metadata":{"finalizers":null}}' --type=merge
to remove the finalizer.
"},{"location":"resources/cassandra.html","title":"Cassandra","text":"Aiven for Apache Cassandra\u00ae is a distributed database designed to handle large volumes of writes.
Note
Before going through this guide, make sure you have a Kubernetes cluster with the operator installed and a Kubernetes Secret with an Aiven authentication token.
"},{"location":"resources/cassandra.html#creating-a-cassandra-instance","title":"Creating a Cassandra instance","text":"1. Create a file named cassandra-sample.yaml
, and add the following content:
apiVersion: aiven.io/v1alpha1\nkind: Cassandra\nmetadata:\n name: cassandra-sample\nspec:\n # gets the authentication token from the `aiven-token` Secret\n authSecretRef:\n name: aiven-token\n key: token\n\n # outputs the Cassandra connection on the `cassandra-secret` Secret\n connInfoSecretTarget:\n name: cassandra-secret\n\n # add your Project name here\n project: <your-project-name>\n\n # cloud provider and plan of your choice\n # you can check all of the possibilities here https://aiven.io/pricing\n cloudName: google-europe-west1\n plan: startup-4\n\n # general Aiven configuration\n maintenanceWindowDow: friday\n maintenanceWindowTime: 23:00:00\n
2. Create the service by applying the configuration:
kubectl apply -f cassandra-sample.yaml \n
The output is:
cassandra.aiven.io/cassandra-sample created\n
3. Review the resource you created with this command:
kubectl describe cassandra.aiven.io cassandra-sample\n
The output is similar to the following:
...\nStatus:\n Conditions:\n Last Transition Time: 2023-01-31T10:17:25Z\n Message: Instance was created or update on Aiven side\n Reason: Created\n Status: True\n Type: Initialized\n Last Transition Time: 2023-01-31T10:24:00Z\n Message: Instance is running on Aiven side\n Reason: CheckRunning\n Status: True\n Type: Running\n State: RUNNING\n...\n
The resource can be in the REBUILDING
state for a few minutes. Once the state changes to RUNNING
, you can access the resource.
For your convenience, the operator automatically stores the Cassandra connection information in a Secret created with the name specified on the connInfoSecretTarget
field.
To view the details of the Secret, use the following command:
kubectl describe secret cassandra-secret \n
The output is similar to the following:
Name: cassandra-secret\nNamespace: default\nLabels: <none>\nAnnotations: <none>\n\nType: Opaque\n\nData\n====\nCASSANDRA_HOSTS: 59 bytes\nCASSANDRA_PASSWORD: 24 bytes\nCASSANDRA_PORT: 5 bytes\nCASSANDRA_URI: 66 bytes\nCASSANDRA_USER: 8 bytes\nCASSANDRA_HOST: 60 bytes\n
You can use jq to quickly decode the Secret:
kubectl get secret cassandra-secret -o json | jq '.data | map_values(@base64d)'\n
The output is similar to the following:
{\n \"CASSANDRA_HOST\": \"<secret>\",\n \"CASSANDRA_HOSTS\": \"<secret>\",\n \"CASSANDRA_PASSWORD\": \"<secret>\",\n \"CASSANDRA_PORT\": \"14609\",\n \"CASSANDRA_URI\": \"<secret>\",\n \"CASSANDRA_USER\": \"avnadmin\"\n}\n
"},{"location":"resources/cassandra.html#creating-a-cassandra-user","title":"Creating a Cassandra user","text":"You can create service users for your instance of Aiven for Apache Cassandra. Service users are unique to this instance and are not shared with any other services.
1. Create a file named cassandra-service-user.yaml:
apiVersion: aiven.io/v1alpha1\nkind: ServiceUser\nmetadata:\n name: cassandra-service-user\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n connInfoSecretTarget:\n name: cassandra-service-user-secret\n\n project: <your-project-name>\n serviceName: cassandra-sample\n
2. Create the user by applying the configuration:
kubectl apply -f cassandra-service-user.yaml\n
The ServiceUser
resource generates a Secret with connection information.
3. View the details of the Secret using the following command:
kubectl get secret cassandra-service-user-secret -o json | jq '.data | map_values(@base64d)'\n
The output is similar to the following:
{\n \"ACCESS_CERT\": \"<secret>\",\n \"ACCESS_KEY\": \"<secret>\",\n \"CA_CERT\": \"<secret>\",\n \"HOST\": \"<secret>\",\n \"PASSWORD\": \"<secret>\",\n \"PORT\": \"14609\",\n \"USERNAME\": \"cassandra-service-user\"\n}\n
You can connect to the Cassandra instance using these credentials and the host information from the cassandra-secret
Secret.
Aiven for MySQL is a fully managed relational database service, deployable in the cloud of your choice.
Before going through this guide, make sure you have a Kubernetes cluster with the operator installed and a Kubernetes Secret with an Aiven authentication token.
"},{"location":"resources/mysql.html#creating-a-mysql-instance","title":"Creating a MySQL instance","text":"1. Create a file named mysql-sample.yaml
, and add the following content:
apiVersion: aiven.io/v1alpha1\nkind: MySQL\nmetadata:\n name: mysql-sample\nspec:\n # gets the authentication token from the `aiven-token` Secret\n authSecretRef:\n name: aiven-token\n key: token\n\n # outputs the MySQL connection on the `mysql-secret` Secret\n connInfoSecretTarget:\n name: mysql-secret\n\n # add your Project name here\n project: <your-project-name>\n\n # cloud provider and plan of your choice\n # you can check all of the possibilities here https://aiven.io/pricing\n cloudName: google-europe-west1\n plan: startup-4\n\n # general Aiven configuration\n maintenanceWindowDow: friday\n maintenanceWindowTime: 23:00:00\n
2. Create the service by applying the configuration:
kubectl apply -f mysql-sample.yaml \n
3. Review the resource you created with this command:
kubectl describe mysql.aiven.io mysql-sample\n
The output is similar to the following:
...\nStatus:\n Conditions:\n Last Transition Time: 2023-02-22T15:43:44Z\n Message: Instance was created or update on Aiven side\n Reason: Created\n Status: True\n Type: Initialized\n Last Transition Time: 2023-02-22T15:43:44Z\n Message: Instance was created or update on Aiven side, status remains unknown\n Reason: Created\n Status: Unknown\n Type: Running\n State: REBUILDING\n...\n
The resource will be in the REBUILDING
state for a few minutes. Once the state changes to RUNNING
, you can access the resource.
For your convenience, the operator automatically stores the MySQL connection information in a Secret created with the name specified on the connInfoSecretTarget
field.
To view the details of the Secret, use the following command:
kubectl describe secret mysql-secret \n
The output is similar to the following:
Name: mysql-secret\nNamespace: default\nLabels: <none>\nAnnotations: <none>\n\nType: Opaque\n\nData\n====\nMYSQL_PORT: 5 bytes\nMYSQL_SSL_MODE: 8 bytes\nMYSQL_URI: 115 bytes\nMYSQL_USER: 8 bytes\nMYSQL_DATABASE: 9 bytes\nMYSQL_HOST: 39 bytes\nMYSQL_PASSWORD: 24 bytes\n
You can use jq to quickly decode the Secret:
kubectl get secret mysql-secret -o json | jq '.data | map_values(@base64d)'\n
The output is similar to the following:
{\n \"MYSQL_DATABASE\": \"defaultdb\",\n \"MYSQL_HOST\": \"<secret>\",\n \"MYSQL_PASSWORD\": \"<secret>\",\n \"MYSQL_PORT\": \"12691\",\n \"MYSQL_SSL_MODE\": \"REQUIRED\",\n \"MYSQL_URI\": \"<secret>\",\n \"MYSQL_USER\": \"avnadmin\"\n}\n
"},{"location":"resources/mysql.html#creating-a-mysql-user","title":"Creating a MySQL user","text":"You can create service users for your instance of Aiven for MySQL. Service users are unique to this instance and are not shared with any other services.
1. Create a file named mysql-service-user.yaml:
apiVersion: aiven.io/v1alpha1\nkind: ServiceUser\nmetadata:\n name: mysql-service-user\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n connInfoSecretTarget:\n name: mysql-service-user-secret\n\n project: <your-project-name>\n serviceName: mysql-sample\n
2. Create the user by applying the configuration:
kubectl apply -f mysql-service-user.yaml\n
The ServiceUser
resource generates a Secret with connection information.
3. View the details of the Secret using jq:
kubectl get secret mysql-service-user-secret -o json | jq '.data | map_values(@base64d)'\n
The output is similar to the following:
{\n \"ACCESS_CERT\": \"<secret>\",\n \"ACCESS_KEY\": \"<secret>\",\n \"CA_CERT\": \"<secret>\",\n \"HOST\": \"<secret>\",\n \"PASSWORD\": \"<secret>\",\n \"PORT\": \"14609\",\n \"USERNAME\": \"mysql-service-user\"\n}\n
You can connect to the MySQL instance using these credentials and the host information from the mysql-secret
Secret.
OpenSearch® is an open source search and analytics suite including search engine, NoSQL document database, and visualization interface. OpenSearch offers a distributed, full-text search engine based on Apache Lucene® with a RESTful API interface and support for JSON documents.
Note
Before going through this guide, make sure you have a Kubernetes cluster with the operator installed and a Kubernetes Secret with an Aiven authentication token.
"},{"location":"resources/opensearch.html#creating-an-opensearch-instance","title":"Creating an OpenSearch instance","text":"1. Create a file named os-sample.yaml
, and add the following content:
apiVersion: aiven.io/v1alpha1\nkind: OpenSearch\nmetadata:\n name: os-sample\nspec:\n # gets the authentication token from the `aiven-token` Secret\n authSecretRef:\n name: aiven-token\n key: token\n\n # outputs the OpenSearch connection on the `os-secret` Secret\n connInfoSecretTarget:\n name: os-secret\n\n # add your Project name here\n project: <your-project-name>\n\n # cloud provider and plan of your choice\n # you can check all of the possibilities here https://aiven.io/pricing\n cloudName: google-europe-west1\n plan: startup-4\n\n # general Aiven configuration\n maintenanceWindowDow: friday\n maintenanceWindowTime: 23:00:00\n
2. Create the service by applying the configuration:
kubectl apply -f os-sample.yaml \n
3. Review the resource you created with this command:
kubectl describe opensearch.aiven.io os-sample\n
The output is similar to the following:
...\nStatus:\n Conditions:\n Last Transition Time: 2023-01-19T14:41:43Z\n Message: Instance was created or update on Aiven side\n Reason: Created\n Status: True\n Type: Initialized\n Last Transition Time: 2023-01-19T14:41:43Z\n Message: Instance was created or update on Aiven side, status remains unknown\n Reason: Created\n Status: Unknown\n Type: Running\n State: REBUILDING\n...\n
The resource will be in the REBUILDING
state for a few minutes. Once the state changes to RUNNING
, you can access the resource.
For your convenience, the operator automatically stores the OpenSearch connection information in a Secret created with the name specified on the connInfoSecretTarget
field.
To view the details of the Secret, use the following command:
kubectl describe secret os-secret \n
The output is similar to the following:
Name: os-secret\nNamespace: default\nLabels: <none>\nAnnotations: <none>\n\nType: Opaque\n\nData\n====\nHOST: 61 bytes\nPASSWORD: 24 bytes\nPORT: 5 bytes\nUSER: 8 bytes\n
You can use jq to quickly decode the Secret:
kubectl get secret os-secret -o json | jq '.data | map_values(@base64d)'\n
The output is similar to the following:
{\n \"HOST\": \"os-sample-your-project.aivencloud.com\",\n \"PASSWORD\": \"<secret>\",\n \"PORT\": \"13041\",\n \"USER\": \"avnadmin\"\n}\n
"},{"location":"resources/opensearch.html#creating-an-opensearch-user","title":"Creating an OpenSearch user","text":"You can create service users for your instance of Aiven for OpenSearch. Service users are unique to this instance and are not shared with any other services.
1. Create a file named os-service-user.yaml:
apiVersion: aiven.io/v1alpha1\nkind: ServiceUser\nmetadata:\n name: os-service-user\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n connInfoSecretTarget:\n name: os-service-user-secret\n\n project: <your-project-name>\n serviceName: os-sample\n
2. Create the user by applying the configuration:
kubectl apply -f os-service-user.yaml\n
The ServiceUser
resource generates a Secret with connection information.
3. View the details of the Secret using the following command:
kubectl get secret os-service-user-secret -o json | jq '.data | map_values(@base64d)'\n
The output is similar to the following:
{\n \"ACCESS_CERT\": \"<secret>\",\n \"ACCESS_KEY\": \"<secret>\",\n \"CA_CERT\": \"<secret>\",\n \"HOST\": \"os-sample-your-project.aivencloud.com\",\n \"PASSWORD\": \"<secret>\",\n \"PORT\": \"14609\",\n \"USERNAME\": \"os-service-user\"\n}\n
You can connect to the OpenSearch instance using these credentials and the host information from the os-secret
Secret.
PostgreSQL is an open source, relational database. It's ideal for organisations that need a well organised tabular datastore. On top of the strict table and column formats, PostgreSQL also offers solutions for nested datasets with the native jsonb
format and an advanced set of extensions including PostGIS, a spatial database extender for location queries. Aiven for PostgreSQL is the perfect fit for your relational data.
With Aiven Kubernetes Operator, you can manage Aiven for PostgreSQL through the well-defined Kubernetes API.
Note
Before going through this guide, make sure you have a Kubernetes cluster with the operator installed, and a Kubernetes Secret with an Aiven authentication token.
"},{"location":"resources/postgresql.html#creating-a-postgresql-instance","title":"Creating a PostgreSQL instance","text":"1. Create a file named pg-sample.yaml
with the following content:
apiVersion: aiven.io/v1alpha1\nkind: PostgreSQL\nmetadata:\n name: pg-sample\nspec:\n\n # gets the authentication token from the `aiven-token` Secret\n authSecretRef:\n name: aiven-token\n key: token\n\n # outputs the PostgreSQL connection on the `pg-connection` Secret\n connInfoSecretTarget:\n name: pg-connection\n\n # add your Project name here\n project: <your-project-name>\n\n # cloud provider and plan of your choice\n # you can check all of the possibilities here https://aiven.io/pricing\n cloudName: google-europe-west1\n plan: startup-4\n\n # general Aiven configuration\n maintenanceWindowDow: friday\n maintenanceWindowTime: 23:00:00\n\n # specific PostgreSQL configuration\n userConfig:\n pg_version: '11'\n
2. Create the service by applying the configuration:
kubectl apply -f pg-sample.yaml\n
3. Review the resource you created with the following command:
kubectl get postgresqls.aiven.io pg-sample\n
The output is similar to the following:
NAME PROJECT REGION PLAN STATE\npg-sample your-project google-europe-west1 startup-4 RUNNING\n
The resource can stay in the BUILDING
state for a couple of minutes. Once the state changes to RUNNING
, you are ready to access it.
For your convenience, the operator automatically stores the PostgreSQL connection information in a Secret created with the name specified on the connInfoSecretTarget
field.
kubectl describe secret pg-connection\n
The output is similar to the following:
Name: pg-connection\nNamespace: default\nAnnotations: <none>\n\nType: Opaque\n\nData\n====\nDATABASE_URI: 107 bytes\nPGDATABASE: 9 bytes\nPGHOST: 38 bytes\nPGPASSWORD: 16 bytes\nPGPORT: 5 bytes\nPGSSLMODE: 7 bytes\nPGUSER: 8 bytes\n
You can use jq to quickly decode the Secret:
kubectl get secret pg-connection -o json | jq '.data | map_values(@base64d)'\n
The output is similar to the following:
{\n \"DATABASE_URI\": \"postgres://avnadmin:<secret-password>@pg-sample-your-project.aivencloud.com:13039/defaultdb?sslmode=require\",\n \"PGDATABASE\": \"defaultdb\",\n \"PGHOST\": \"pg-sample-your-project.aivencloud.com\",\n \"PGPASSWORD\": \"<secret-password>\",\n \"PGPORT\": \"13039\",\n \"PGSSLMODE\": \"require\",\n \"PGUSER\": \"avnadmin\"\n}\n
"},{"location":"resources/postgresql.html#testing-the-connection","title":"Testing the connection","text":"You can verify your PostgreSQL connection from a Kubernetes workload by deploying a Pod that runs the psql
command.
1. Create a file named pod-psql.yaml
apiVersion: v1\nkind: Pod\nmetadata:\n name: psql-test-connection\nspec:\n restartPolicy: Never\n containers:\n - image: postgres:11-alpine\n name: postgres\n command: [ 'psql', '$(DATABASE_URI)', '-c', 'SELECT version();' ]\n\n # the pg-connection Secret becomes environment variables \n envFrom:\n - secretRef:\n name: pg-connection\n
It runs once and stops, due to the restartPolicy: Never
flag.
2. Inspect the log:
kubectl logs psql-test-connection\n
The output is similar to the following:
version \n---------------------------------------------------------------------------------------------\n PostgreSQL 11.12 on x86_64-pc-linux-gnu, compiled by gcc, a 68c5366192 p 6b9244f01a, 64-bit\n(1 row)\n
You have now connected to PostgreSQL and executed the SELECT version();
query.
The Database
Kubernetes resource allows you to create a logical database within the PostgreSQL instance.
Create the pg-database-sample.yaml
file with the following content:
apiVersion: aiven.io/v1alpha1\nkind: Database\nmetadata:\n name: pg-database-sample\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n # the name of the previously created PostgreSQL instance\n serviceName: pg-sample\n\n project: <your-project-name>\n lcCollate: en_US.UTF-8\n lcCtype: en_US.UTF-8\n
You can now connect to the pg-database-sample
using the credentials stored in the pg-connection
Secret.
Aiven uses the concept of a service user, which allows you to create users for different services. You can create one for the PostgreSQL instance.
1. Create a file named pg-service-user.yaml
.
apiVersion: aiven.io/v1alpha1\nkind: ServiceUser\nmetadata:\n name: pg-service-user\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n connInfoSecretTarget:\n name: pg-service-user-connection\n\n project: <your-project-name>\n serviceName: pg-sample\n
2. Apply the configuration with the following command.
kubectl apply -f pg-service-user.yaml\n
The ServiceUser
resource generates a Secret with connection information, in this case named pg-service-user-connection
:
kubectl get secret pg-service-user-connection -o json | jq '.data | map_values(@base64d)'\n
The output has the password and username:
{\n \"PASSWORD\": \"<secret-password>\",\n \"USERNAME\": \"pg-service-user\"\n}\n
You can now connect to the PostgreSQL instance using the credentials generated above, and the host information from the pg-connection
Secret.
Connection pooling allows you to maintain very large numbers of connections to a database while minimizing the consumption of server resources. For more information, refer to the connection pooling article in Aiven Docs. Aiven for PostgreSQL uses PGBouncer for connection pooling.
You can create a connection pool with the ConnectionPool
resource using the previously created Database
and ServiceUser
:
Create a new file named pg-connection-pool.yaml
with the following content:
apiVersion: aiven.io/v1alpha1\nkind: ConnectionPool\nmetadata:\n name: pg-connection-pool\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n connInfoSecretTarget:\n name: pg-connection-pool-connection\n\n project: <your-project-name>\n serviceName: pg-sample\n databaseName: pg-database-sample\n username: pg-service-user\n poolSize: 10\n poolMode: transaction\n
The ConnectionPool
generates a Secret with the connection info using the name from the connInfoSecretTarget.Name
field:
kubectl get secret pg-connection-pool-connection -o json | jq '.data | map_values(@base64d)' \n
The output is similar to the following: {\n \"DATABASE_URI\": \"postgres://pg-service-user:<secret-password>@pg-sample-you-project.aivencloud.com:13040/pg-connection-pool?sslmode=require\",\n \"PGDATABASE\": \"pg-database-sample\",\n \"PGHOST\": \"pg-sample-your-project.aivencloud.com\",\n \"PGPASSWORD\": \"<secret-password>\",\n \"PGPORT\": \"13040\",\n \"PGSSLMODE\": \"require\",\n \"PGUSER\": \"pg-service-user\"\n}\n
"},{"location":"resources/postgresql.html#creating-a-postgresql-read-only-replica","title":"Creating a PostgreSQL read-only replica","text":"Read-only replicas can be used to reduce the load on the primary service by making read-only queries against the replica service.
To create a read-only replica for a PostgreSQL service, you create a second PostgreSQL service and use serviceIntegrations to replicate data from your primary service.
The example that follows creates a primary service and a read-only replica.
1. Create a new file named pg-read-replica.yaml
with the following:
apiVersion: aiven.io/v1alpha1\nkind: PostgreSQL\nmetadata:\n name: primary-pg-service\nspec:\n # gets the authentication token from the `aiven-token` Secret\n authSecretRef:\n name: aiven-token\n key: token\n\n # add your project's name here\n project: <your-project-name>\n\n # add the cloud provider and plan of your choice\n # you can see all of the options at https://aiven.io/pricing\n cloudName: google-europe-west1\n plan: startup-4\n\n # general Aiven configuration\n maintenanceWindowDow: friday\n maintenanceWindowTime: 23:00:00\n userConfig:\n pg_version: '15'\n\n---\n\napiVersion: aiven.io/v1alpha1\nkind: PostgreSQL\nmetadata:\n name: read-replica-pg\nspec:\n # gets the authentication token from the `aiven-token` Secret\n authSecretRef:\n name: aiven-token\n key: token\n\n # add your project's name here\n project: <your-project-name>\n\n # add the cloud provider and plan of your choice\n # you can see all of the options at https://aiven.io/pricing\n cloudName: google-europe-west1\n plan: startup-4\n\n # general Aiven configuration\n maintenanceWindowDow: saturday\n maintenanceWindowTime: 23:00:00\n userConfig:\n pg_version: '15'\n\n # use the read_replica integration and point it to your primary service\n serviceIntegrations:\n - integrationType: read_replica\n sourceServiceName: primary-pg-service\n
Note
You can create the replica service in a different region or on a different cloud provider.
2. Apply the configuration with the following command:
kubectl apply -f pg-read-replica.yaml\n
The output is similar to the following:
postgresql.aiven.io/primary-pg-service created\npostgresql.aiven.io/read-replica-pg created\n
3. Check the status of the primary service with the following command:
kubectl get postgresqls.aiven.io primary-pg-service\n
The output is similar to the following:
NAME PROJECT REGION PLAN STATE\nprimary-pg-service <your-project-name> google-europe-west1 startup-4 RUNNING\n
The resource can be in the BUILDING
state for a few minutes. After the state of the primary service changes to RUNNING
, the read-only replica is created. You can check the status of the replica using the same command with the name of the replica:
kubectl get postgresqls.aiven.io read-replica-pg\n
"},{"location":"resources/project-vpc.html","title":"Aiven Project VPC","text":"Virtual Private Cloud (VPC) peering is a method of connecting separate AWS, Google Cloud or Microsoft Azure private networks to each other. It makes it possible for the virtual machines in the different VPCs to talk to each other directly without going through the public internet.
Within the Aiven Kubernetes Operator, you can create a ProjectVPC
on Aiven's side to connect to your cloud provider.
Note
Before going through this guide, make sure you have a Kubernetes cluster with the operator installed, and a Kubernetes Secret with an Aiven authentication token.
"},{"location":"resources/project-vpc.html#creating-an-aiven-vpc","title":"Creating an Aiven VPC","text":"1. Create a file named vpc-sample.yaml
with the following content:
apiVersion: aiven.io/v1alpha1\nkind: ProjectVPC\nmetadata:\n name: vpc-sample\nspec:\n # gets the authentication token from the `aiven-token` Secret\n authSecretRef:\n name: aiven-token\n key: token\n\n project: <your-project-name>\n\n # creates a VPC to link an AWS account on the South Africa region\n cloudName: aws-af-south-1\n\n # the network range used by the VPC\n networkCidr: 192.168.0.0/24\n
2. Create the Project by applying the configuration:
kubectl apply -f vpc-sample.yaml\n
3. Review the resource you created with the following command:
kubectl get projects.aiven.io vpc-sample\n
The output is similar to the following:
NAME PROJECT CLOUD NETWORK CIDR\nvpc-sample <your-project> aws-af-south-1 192.168.0.0/24\n
"},{"location":"resources/project-vpc.html#using-the-aiven-vpc","title":"Using the Aiven VPC","text":"Follow the official VPC documentation to complete the VPC peering on your cloud of choice.
"},{"location":"resources/project.html","title":"Aiven Project","text":"Note
Before going through this guide, make sure you have a Kubernetes cluster with the operator installed and a Kubernetes Secret with an Aiven authentication token.
The Project
CRD allows you to create Aiven Projects, where your resources can be located.
To create a fully working Aiven Project with the Aiven Operator you need a source Aiven Project already created with a working billing configuration, like a credit card.
Create a file named project-sample.yaml
with the following content:
apiVersion: aiven.io/v1alpha1\nkind: Project\nmetadata:\n name: project-sample\nspec:\n # the source Project to copy the billing information from\n copyFromProject: <your-source-project>\n\n authSecretRef:\n name: aiven-token\n key: token\n\n connInfoSecretTarget:\n name: project-sample\n
Apply the resource with:
kubectl apply -f project-sample.yaml\n
Verify the newly created Project:
kubectl get projects.aiven.io project-sample\n
The output is similar to the following:
NAME AGE\nproject-sample 22s\n
"},{"location":"resources/redis.html","title":"Redis","text":"Aiven for Redis\u00ae* is a fully managed in-memory NoSQL database that you can deploy in the cloud of your choice to store and access data quickly and efficiently.
Note
Before going through this guide, make sure you have a Kubernetes cluster with the operator installed and a Kubernetes Secret with an Aiven authentication token.
"},{"location":"resources/redis.html#creating-a-redis-instance","title":"Creating a Redis instance","text":"1. Create a file named redis-sample.yaml
, and add the following content:
apiVersion: aiven.io/v1alpha1\nkind: Redis\nmetadata:\n name: redis-sample\nspec:\n # gets the authentication token from the `aiven-token` Secret\n authSecretRef:\n name: aiven-token\n key: token\n\n # outputs the Redis connection on the `redis-secret` Secret\n connInfoSecretTarget:\n name: redis-secret\n\n # add your Project name here\n project: <your-project-name>\n\n # cloud provider and plan of your choice\n # you can check all of the possibilities here https://aiven.io/pricing\n cloudName: google-europe-west1\n plan: startup-4\n\n # general Aiven configuration\n maintenanceWindowDow: friday\n maintenanceWindowTime: 23:00:00\n\n # specific Redis configuration\n userConfig:\n redis_maxmemory_policy: \"allkeys-random\"\n
2. Create the service by applying the configuration:
kubectl apply -f redis-sample.yaml \n
3. Review the resource you created with this command:
kubectl describe redis.aiven.io redis-sample\n
The output is similar to the following:
...\nStatus:\n Conditions:\n Last Transition Time: 2023-01-19T14:48:59Z\n Message: Instance was created or update on Aiven side\n Reason: Created\n Status: True\n Type: Initialized\n Last Transition Time: 2023-01-19T14:48:59Z\n Message: Instance was created or update on Aiven side, status remains unknown\n Reason: Created\n Status: Unknown\n Type: Running\n State: REBUILDING\n...\n
The resource will be in the REBUILDING
state for a few minutes. Once the state changes to RUNNING
, you can access the resource.
For your convenience, the operator automatically stores the Redis connection information in a Secret created with the name specified on the connInfoSecretTarget
field.
To view the details of the Secret, use the following command:
kubectl describe secret redis-secret \n
The output is similar to the following:
Name: redis-secret\nNamespace: default\nLabels: <none>\nAnnotations: <none>\n\nType: Opaque\n\nData\n====\nSSL: 8 bytes\nUSER: 7 bytes\nHOST: 60 bytes\nPASSWORD: 24 bytes\nPORT: 5 bytes\n
You can use jq to quickly decode the Secret:
kubectl get secret redis-secret -o json | jq '.data | map_values(@base64d)'\n
The output is similar to the following:
{\n \"HOST\": \"redis-sample-your-project.aivencloud.com\",\n \"PASSWORD\": \"<secret-password>\",\n \"PORT\": \"14610\",\n \"SSL\": \"required\",\n \"USER\": \"default\"\n}\n
"},{"location":"resources/service-integrations.html","title":"Service Integrations","text":"Service Integrations provide additional functionality and features by connecting different Aiven services together.
See our Getting Started with Service Integrations guide for more information.
Note
Before going through this guide, make sure you have a Kubernetes cluster with the operator installed, and a Kubernetes Secret with an Aiven authentication token.
"},{"location":"resources/service-integrations.html#send-kafka-logs-to-a-kafka-topic","title":"Send Kafka logs to a Kafka Topic","text":"This integration allows you to send Kafka service logs to a specific Kafka Topic.
First, let's create a Kafka service and a topic.
1. Create a new file named kafka-sample-topic.yaml
with the following content:
apiVersion: aiven.io/v1alpha1\nkind: Kafka\nmetadata:\n name: kafka-sample\nspec:\n # gets the authentication token from the `aiven-token` Secret\n authSecretRef:\n name: aiven-token\n key: token\n\n # outputs the Kafka connection on the `kafka-connection` Secret\n connInfoSecretTarget:\n name: kafka-auth\n\n # add your Project name here\n project: <your-project-name>\n\n # cloud provider and plan of your choice\n # you can check all of the possibilities here https://aiven.io/pricing\n cloudName: google-europe-west1\n plan: startup-2\n\n # general Aiven configuration\n maintenanceWindowDow: friday\n maintenanceWindowTime: 23:00:00\n\n # specific Kafka configuration\n userConfig:\n kafka_version: '2.7'\n\n---\n\napiVersion: aiven.io/v1alpha1\nkind: KafkaTopic\nmetadata:\n name: logs\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n project: <your-project-name>\n serviceName: kafka-sample\n\n # here we can specify how many partitions the topic should have\n partitions: 3\n # and the topic replication factor\n replication: 2\n\n # we also support various topic-specific configurations\n config:\n flush_ms: 100\n
2. Create the resource on Kubernetes:
kubectl apply -f kafka-sample-topic.yaml \n
3. Now, create a ServiceIntegration
resource to send the Kafka logs to the created topic. In the same file, add the following YAML:
apiVersion: aiven.io/v1alpha1\nkind: ServiceIntegration\nmetadata:\n name: service-integration-kafka-logs\nspec:\n\n # gets the authentication token from the `aiven-token` Secret\n authSecretRef:\n name: aiven-token\n key: token\n\n project: <your-project-name>\n\n # indicates the type of the integration\n integrationType: kafka_logs\n\n # we will send the logs to the same kafka-sample instance\n # the source and destination are the same\n sourceServiceName: kafka-sample\n destinationServiceName: kafka-sample\n\n # the topic name we will send to\n kafkaLogs:\n kafka_topic: logs\n
4. Reapply the resource on Kubernetes:
kubectl apply -f kafka-sample-topic.yaml \n
5. Let's check the created service integration:
kubectl get serviceintegrations.aiven.io service-integration-kafka-logs\n
The output is similar to the following:
NAME PROJECT TYPE SOURCE SERVICE NAME DESTINATION SERVICE NAME SOURCE ENDPOINT ID DESTINATION ENDPOINT ID\nservice-integration-kafka-logs your-project kafka_logs kafka-sample kafka-sample \n
Your Kafka service logs are now being streamed to the logs
Kafka topic.
Aiven for Apache Kafka is an excellent option if you need to run Apache Kafka at scale. With Aiven Kubernetes Operator you can get up and running with a suitably sized Apache Kafka service in a few minutes.
Note
Before going through this guide, make sure you have a Kubernetes cluster with the operator installed and a Kubernetes Secret with an Aiven authentication token.
"},{"location":"resources/kafka/index.html#creating-a-kafka-instance","title":"Creating a Kafka instance","text":"1. Create a file named kafka-sample.yaml
, and add the following content:
apiVersion: aiven.io/v1alpha1\nkind: Kafka\nmetadata:\n name: kafka-sample\nspec:\n # gets the authentication token from the `aiven-token` Secret\n authSecretRef:\n name: aiven-token\n key: token\n\n # outputs the Kafka connection on the `kafka-connection` Secret\n connInfoSecretTarget:\n name: kafka-auth\n\n # add your Project name here\n project: <your-project-name>\n\n # cloud provider and plan of your choice\n # you can check all of the possibilities here https://aiven.io/pricing\n cloudName: google-europe-west1\n plan: startup-2\n\n # general Aiven configuration\n maintenanceWindowDow: friday\n maintenanceWindowTime: 23:00:00\n\n # specific Kafka configuration\n userConfig:\n kafka_version: '2.7'\n
2. Create the following resource on Kubernetes:
kubectl apply -f kafka-sample.yaml \n
3. Inspect the service created using the command below.
kubectl get kafka.aiven.io kafka-sample\n
The output has the project name and state, similar to the following:
NAME PROJECT REGION PLAN STATE\nkafka-sample <your-project> google-europe-west1 startup-2 RUNNING\n
After a couple of minutes, the STATE
field is changed to RUNNING
, and is ready to be used.
For your convenience, the operator automatically stores the Kafka connection information in a Secret created with the name specified on the connInfoSecretTarget
field.
kubectl describe secret kafka-auth \n
The output is similar to the following:
Name: kafka-auth\nNamespace: default\nAnnotations: <none>\n\nType: Opaque\n\nData\n====\nCA_CERT: 1537 bytes\nHOST: 41 bytes\nPASSWORD: 16 bytes\nPORT: 5 bytes\nUSERNAME: 8 bytes\nACCESS_CERT: 1533 bytes\nACCESS_KEY: 2484 bytes\n
You can use jq to quickly decode the Secret:
kubectl get secret kafka-auth -o json | jq '.data | map_values(@base64d)'\n
The output is similar to the following:
{\n \"CA_CERT\": \"<secret-ca-cert>\",\n \"ACCESS_CERT\": \"<secret-cert>\",\n \"ACCESS_KEY\": \"<secret-access-key>\",\n \"HOST\": \"kafka-sample-your-project.aivencloud.com\",\n \"PASSWORD\": \"<secret-password>\",\n \"PORT\": \"13041\",\n \"USERNAME\": \"avnadmin\"\n}\n
"},{"location":"resources/kafka/index.html#testing-the-connection","title":"Testing the connection","text":"You can verify your access to the Kafka cluster from a Pod using the authentication data from the kafka-auth
Secret. kcat is used for our examples below.
1. Create a file named kafka-test-connection.yaml
, and add the following content:
apiVersion: v1\nkind: Pod\nmetadata:\n name: kafka-test-connection\nspec:\n restartPolicy: Never\n containers:\n - image: edenhill/kcat:1.7.0\n name: kcat\n\n # the command below will connect to the Kafka cluster\n # and output its metadata\n command: [\n 'kcat', '-b', '$(HOST):$(PORT)',\n '-X', 'security.protocol=SSL',\n '-X', 'ssl.key.location=/kafka-auth/ACCESS_KEY',\n '-X', 'ssl.key.password=$(PASSWORD)',\n '-X', 'ssl.certificate.location=/kafka-auth/ACCESS_CERT',\n '-X', 'ssl.ca.location=/kafka-auth/CA_CERT',\n '-L'\n ]\n\n # loading the data from the Secret as environment variables\n # useful to access the Kafka information, like hostname and port\n envFrom:\n - secretRef:\n name: kafka-auth\n\n volumeMounts:\n - name: kafka-auth\n mountPath: \"/kafka-auth\"\n\n # loading the data from the Secret as files in a volume\n # useful to access the Kafka certificates \n volumes:\n - name: kafka-auth\n secret:\n secretName: kafka-auth\n
2. Apply the file.
kubectl apply -f kafka-test-connection.yaml\n
Once the Pod has run successfully, its log contains the metadata of the Kafka cluster.
kubectl logs kafka-test-connection \n
The output is similar to the following:
Metadata for all topics (from broker -1: ssl://kafka-sample-your-project.aivencloud.com:13041/bootstrap):\n 3 brokers:\n broker 2 at 35.205.234.70:13041\n broker 3 at 34.77.127.70:13041 (controller)\n broker 1 at 34.78.146.156:13041\n 0 topics:\n
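The test Pod has done its job at this point and can be deleted:
kubectl delete pod kafka-test-connection\n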
"},{"location":"resources/kafka/index.html#creating-a-kafkatopic-and-kafkaacl","title":"Creating a KafkaTopic
and KafkaACL
","text":"To properly produce and consume content on Kafka, you need topics and ACLs. The operator supports both with the KafkaTopic
and KafkaACL
resources.
Here is how to create a Kafka topic named random-strings
where random string messages will be sent.
1. Create a file named kafka-topic-random-strings.yaml
with the content below:
apiVersion: aiven.io/v1alpha1\nkind: KafkaTopic\nmetadata:\n name: random-strings\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n project: <your-project-name>\n serviceName: kafka-sample\n\n # here we can specify how many partitions the topic should have\n partitions: 3\n # and the topic replication factor\n replication: 2\n\n # we also support various topic-specific configurations\n config:\n flush_ms: 100\n
2. Create the resource on Kubernetes:
kubectl apply -f kafka-topic-random-strings.yaml\n
3. Create a user and an ACL. To use the Kafka topic, create a new user with the ServiceUser
resource (in order to avoid using the avnadmin
superuser), and the KafkaACL
to allow the user access to the topic.
In a file named kafka-acl-user-crab.yaml
, add the following two resources:
apiVersion: aiven.io/v1alpha1\nkind: ServiceUser\nmetadata:\n # the name of our user \ud83e\udd80\n name: crab\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n # the Secret name where we will store the user's connection information\n # looks exactly the same as the Secret generated when creating the Kafka cluster\n # we will use this Secret to produce and consume events later!\n connInfoSecretTarget:\n name: kafka-crab-connection\n\n # the Aiven project the user is related to\n project: <your-project-name>\n\n # the name of our Kafka Service\n serviceName: kafka-sample\n\n---\n\napiVersion: aiven.io/v1alpha1\nkind: KafkaACL\nmetadata:\n name: crab\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n project: <your-project-name>\n serviceName: kafka-sample\n\n # the username from the ServiceUser above\n username: crab\n\n # the ACL allows producing and consuming on the topic\n permission: readwrite\n\n # specify the topic we created before\n topic: random-strings\n
To create the crab
user and its permissions, execute the following command:
kubectl apply -f kafka-acl-user-crab.yaml\n
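You can confirm that both resources were created. The resource type names below assume the operator's usual lower-case plural naming, in line with the other kubectl get examples in this guide:
kubectl get serviceusers.aiven.io crab\nkubectl get kafkaacls.aiven.io crab\n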
"},{"location":"resources/kafka/index.html#producing-and-consuming-events","title":"Producing and consuming events","text":"Using the previously created KafkaTopic
, ServiceUser
, and KafkaACL
, you can produce and consume events.
You can use kcat to produce a message into Kafka: the -t random-strings
argument selects the destination topic, and the content of the /etc/issue
file is used as the message's body.
1. Create a kafka-crab-produce.yaml
file with the content below:
apiVersion: v1\nkind: Pod\nmetadata:\n name: kafka-crab-produce\nspec:\n restartPolicy: Never\n containers:\n - image: edenhill/kcat:1.7.0\n name: kcat\n\n # the command below will produce a message with the /etc/issue file content\n command: [\n 'kcat', '-b', '$(HOST):$(PORT)',\n '-X', 'security.protocol=SSL',\n '-X', 'ssl.key.location=/crab-auth/ACCESS_KEY',\n '-X', 'ssl.key.password=$(PASSWORD)',\n '-X', 'ssl.certificate.location=/crab-auth/ACCESS_CERT',\n '-X', 'ssl.ca.location=/crab-auth/CA_CERT',\n '-P', '-t', 'random-strings', '/etc/issue',\n ]\n\n # loading the crab user data from the Secret as environment variables\n # useful to access the Kafka information, like hostname and port\n envFrom:\n - secretRef:\n name: kafka-crab-connection\n\n volumeMounts:\n - name: crab-auth\n mountPath: \"/crab-auth\"\n\n # loading the crab user information from the Secret as files in a volume\n # useful to access the Kafka certificates \n volumes:\n - name: crab-auth\n secret:\n secretName: kafka-crab-connection\n
2. Create the Pod with the following command:
kubectl apply -f kafka-crab-produce.yaml\n
Now your event is stored in Kafka.
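The producer Pod runs to completion, so you can confirm that it finished successfully:
kubectl get pod kafka-crab-produce\n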
To consume a message, you can use a graphical interface called Kowl. It allows you to explore information about your Kafka cluster, such as brokers, topics, or consumer groups.
1. Create a Kubernetes Pod and service to deploy and access Kowl. Create a file named kafka-crab-consume.yaml
with the content below:
apiVersion: v1\nkind: Pod\nmetadata:\n name: kafka-crab-consume\n labels:\n app: kafka-crab-consume\nspec:\n containers:\n - image: quay.io/cloudhut/kowl:v1.4.0\n name: kowl\n\n # kowl configuration values\n env:\n - name: KAFKA_TLS_ENABLED\n value: 'true'\n\n - name: KAFKA_BROKERS\n value: $(HOST):$(PORT)\n - name: KAFKA_TLS_PASSPHRASE\n value: $(PASSWORD)\n\n - name: KAFKA_TLS_CAFILEPATH\n value: /crab-auth/CA_CERT\n - name: KAFKA_TLS_CERTFILEPATH\n value: /crab-auth/ACCESS_CERT\n - name: KAFKA_TLS_KEYFILEPATH\n value: /crab-auth/ACCESS_KEY\n\n # inject all connection information as environment variables\n envFrom:\n - secretRef:\n name: kafka-crab-connection\n\n volumeMounts:\n - name: crab-auth\n mountPath: /crab-auth\n\n # loading the crab user information from the Secret as files in a volume\n # useful to access the Kafka certificates \n volumes:\n - name: crab-auth\n secret:\n secretName: kafka-crab-connection\n\n---\n\n# we will be using a simple service to access Kowl on port 8080\napiVersion: v1\nkind: Service\nmetadata:\n name: kafka-crab-consume\nspec:\n selector:\n app: kafka-crab-consume\n ports:\n - port: 8080\n targetPort: 8080\n
2. Create the resources with:
kubectl apply -f kafka-crab-consume.yaml\n
3. In another terminal, create a port-forward tunnel to your Pod:
kubectl port-forward kafka-crab-consume 8080:8080\n
4. In the browser of your choice, access the http://localhost:8080 address. You now see a page with the random-strings
topic listed.
5. Click the topic name to see the message.
You have now consumed the message.
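When you are done, you can clean up everything created in this guide by deleting the resources with the same manifests used to create them:
kubectl delete \\\n -f kafka-sample.yaml \\\n -f kafka-topic-random-strings.yaml \\\n -f kafka-acl-user-crab.yaml \\\n -f kafka-test-connection.yaml \\\n -f kafka-crab-produce.yaml \\\n -f kafka-crab-consume.yaml\n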
"},{"location":"resources/kafka/connect.html","title":"Kafka Connect","text":"Aiven for Apache Kafka Connect is a framework and a runtime for integrating Kafka with other systems. Kafka connectors can either be a source (for pulling data from other systems into Kafka) or sink (for pushing data into other systems from Kafka).
This section involves a few different Kubernetes CRDs:
1. A Kafka service with a KafkaTopic
2. A KafkaConnect service
3. A ServiceIntegration to integrate the Kafka and KafkaConnect services
4. A PostgreSQL service used as a sink to receive messages from Kafka
5. A KafkaConnector to finally connect the Kafka with the PostgreSQL
Create a file named kafka-sample-connect.yaml
with the following content:
apiVersion: aiven.io/v1alpha1\nkind: Kafka\nmetadata:\n name: kafka-sample-connect\nspec:\n # gets the authentication token from the `aiven-token` Secret\n authSecretRef:\n name: aiven-token\n key: token\n\n # outputs the Kafka connection on the `kafka-auth` Secret\n connInfoSecretTarget:\n name: kafka-auth\n\n # add your Project name here\n project: <your-project-name>\n\n # cloud provider and plan of your choice\n # you can check all of the possibilities here https://aiven.io/pricing\n cloudName: google-europe-west1\n plan: business-4\n\n # general Aiven configuration\n maintenanceWindowDow: friday\n maintenanceWindowTime: 23:00:00\n\n # specific Kafka configuration\n userConfig:\n kafka_version: '2.7'\n kafka_connect: true\n\n---\n\napiVersion: aiven.io/v1alpha1\nkind: KafkaTopic\nmetadata:\n name: kafka-topic-connect\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n project: <your-project-name>\n serviceName: kafka-sample-connect\n\n replication: 2\n partitions: 1\n
Next, create a file named kafka-connect.yaml
and add the following KafkaConnect
resource:
apiVersion: aiven.io/v1alpha1\nkind: KafkaConnect\nmetadata:\n name: kafka-connect\nspec:\n # gets the authentication token from the `aiven-token` Secret\n authSecretRef:\n name: aiven-token\n key: token\n\n # add your Project name here\n project: <your-project-name>\n\n # cloud provider and plan of your choice\n # you can check all of the possibilities here https://aiven.io/pricing\n cloudName: google-europe-west1\n plan: startup-4\n\n # general Aiven configuration\n maintenanceWindowDow: friday\n maintenanceWindowTime: 23:00:00\n
Now let's create a ServiceIntegration
. It will use the fields sourceServiceName
and destinationServiceName
to integrate the previously created kafka-sample-connect
and kafka-connect
. Open a new file named service-integration-connect.yaml
and add the content below:
apiVersion: aiven.io/v1alpha1\nkind: ServiceIntegration\nmetadata:\n name: service-integration-kafka-connect\nspec:\n\n # gets the authentication token from the `aiven-token` Secret\n authSecretRef:\n name: aiven-token\n key: token\n\n project: <your-project-name>\n\n # indicates the type of the integration\n integrationType: kafka_connect\n\n # we will send messages from the `kafka-sample-connect` to `kafka-connect`\n sourceServiceName: kafka-sample-connect\n destinationServiceName: kafka-connect\n
Let's add an Aiven for PostgreSQL service. It will be the service used as a sink, receiving messages from the kafka-sample-connect
cluster. Create a file named pg-sample-connect.yaml
with the content below:
apiVersion: aiven.io/v1alpha1\nkind: PostgreSQL\nmetadata:\n name: pg-connect\nspec:\n\n # gets the authentication token from the `aiven-token` Secret\n authSecretRef:\n name: aiven-token\n key: token\n\n # outputs the PostgreSQL connection on the `pg-connection` Secret\n connInfoSecretTarget:\n name: pg-connection\n\n # add your Project name here\n project: <your-project-name>\n\n # cloud provider and plan of your choice\n # you can check all of the possibilities here https://aiven.io/pricing\n cloudName: google-europe-west1\n plan: startup-4\n\n # general Aiven configuration\n maintenanceWindowDow: friday\n maintenanceWindowTime: 23:00:00\n
Finally, let's add the glue of everything: a KafkaConnector
. As described in the specification, it will receive messages from the kafka-sample-connect
and send them to the pg-connect
service. Check our official documentation for more connectors.
Create a file named kafka-connector-connect.yaml
with the content below:
apiVersion: aiven.io/v1alpha1\nkind: KafkaConnector\nmetadata:\n name: kafka-connector\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n project: <your-project-name>\n\n # the Kafka cluster name\n serviceName: kafka-sample-connect\n\n # the connector we will be using\n connectorClass: io.aiven.connect.jdbc.JdbcSinkConnector\n\n userConfig:\n auto.create: \"true\"\n\n # constructs the pg-connect connection information\n connection.url: 'jdbc:postgresql://{{ fromSecret \"pg-connection\" \"PGHOST\"}}:{{ fromSecret \"pg-connection\" \"PGPORT\" }}/{{ fromSecret \"pg-connection\" \"PGDATABASE\" }}'\n connection.user: '{{ fromSecret \"pg-connection\" \"PGUSER\" }}'\n connection.password: '{{ fromSecret \"pg-connection\" \"PGPASSWORD\" }}'\n\n # specify which topics it will watch\n topics: kafka-topic-connect\n\n key.converter: org.apache.kafka.connect.json.JsonConverter\n value.converter: org.apache.kafka.connect.json.JsonConverter\n value.converter.schemas.enable: \"true\"\n
With all the files created, apply the new Kubernetes resources:
kubectl apply \\\n -f kafka-sample-connect.yaml \\\n -f kafka-connect.yaml \\\n -f service-integration-connect.yaml \\\n -f pg-sample-connect.yaml \\\n -f kafka-connector-connect.yaml\n
It will take some time for all the services to be up and running. You can check their status with the following command:
kubectl get \\\n kafkas.aiven.io/kafka-sample-connect \\\n kafkaconnects.aiven.io/kafka-connect \\\n postgresqls.aiven.io/pg-connect \\\n kafkaconnectors.aiven.io/kafka-connector\n
The output is similar to the following:
NAME PROJECT REGION PLAN STATE\nkafka.aiven.io/kafka-sample-connect your-project google-europe-west1 business-4 RUNNING\n\nNAME STATE\nkafkaconnect.aiven.io/kafka-connect RUNNING\n\nNAME PROJECT REGION PLAN STATE\npostgresql.aiven.io/pg-connect your-project google-europe-west1 startup-4 RUNNING\n\nNAME SERVICE NAME PROJECT CONNECTOR CLASS STATE TASKS TOTAL TASKS RUNNING\nkafkaconnector.aiven.io/kafka-connector kafka-sample-connect your-project io.aiven.connect.jdbc.JdbcSinkConnector RUNNING 1 1\n
The deployment is finished when all services have the state RUNNING
."},{"location":"resources/kafka/connect.html#testing","title":"Testing","text":"To test the connection integration, let's produce a Kafka message using kcat from within the Kubernetes cluster. We will deploy a Pod responsible for crafting a message and sending to the Kafka cluster, using the kafka-auth
secret generate by the Kafka
CRD.
Create a new file named kcat-connect.yaml
and add the content below:
apiVersion: v1\nkind: Pod\nmetadata:\n name: kafka-message\nspec:\n restartPolicy: Never\n containers:\n - image: edenhill/kcat:1.7.0\n name: kcat\n\n command: ['/bin/sh']\n args: [\n '-c',\n 'echo {\\\"schema\\\":{\\\"type\\\":\\\"struct\\\",\\\"fields\\\":[{ \\\"field\\\": \\\"text\\\", \\\"type\\\": \\\"string\\\", \\\"optional\\\": false } ] }, \\\"payload\\\": { \\\"text\\\": \\\"Hello World\\\" } } > /tmp/msg;\n\n kcat\n -b $(HOST):$(PORT)\n -X security.protocol=SSL\n -X ssl.key.location=/kafka-auth/ACCESS_KEY\n -X ssl.key.password=$(PASSWORD)\n -X ssl.certificate.location=/kafka-auth/ACCESS_CERT\n -X ssl.ca.location=/kafka-auth/CA_CERT\n -P -t kafka-topic-connect /tmp/msg'\n ]\n\n envFrom:\n - secretRef:\n name: kafka-auth\n\n volumeMounts:\n - name: kafka-auth\n mountPath: \"/kafka-auth\"\n\n volumes:\n - name: kafka-auth\n secret:\n secretName: kafka-auth\n
Apply the file with:
kubectl apply -f kcat-connect.yaml\n
The Pod will execute the commands and finish. You can confirm its Completed
state with:
kubectl get pod kafka-message\n
The output is similar to the following:
NAME READY STATUS RESTARTS AGE\nkafka-message 0/1 Completed 0 5m35s\n
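If the Pod reports an Error status instead, its logs usually point to the cause:
kubectl logs kafka-message\n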
If everything went smoothly, we should have our produced message in the PostgreSQL service. Let's check that out.
Create a file named psql-connect.yaml
with the content below:
apiVersion: v1\nkind: Pod\nmetadata:\n name: psql-connect\nspec:\n restartPolicy: Never\n containers:\n - image: postgres:13\n name: postgres\n # \"kafka-topic-connect\" is the table automatically created by KafkaConnect\n command: ['psql', '$(DATABASE_URI)', '-c', 'SELECT * from \"kafka-topic-connect\";']\n\n envFrom:\n - secretRef:\n name: pg-connection\n
Apply the file with:
kubectl apply -f psql-connect.yaml\n
After a couple of seconds, inspect its log with this command:
kubectl logs psql-connect \n
The output is similar to the following:
text \n-------------\n Hello World\n(1 row)\n
"},{"location":"resources/kafka/connect.html#clean-up","title":"Clean up","text":"To clean up all the created resources, use the following command:
kubectl delete \\\n -f kafka-sample-connect.yaml \\\n -f kafka-connect.yaml \\\n -f service-integration-connect.yaml \\\n -f pg-sample-connect.yaml \\\n -f kafka-connector-connect.yaml \\\n -f kcat-connect.yaml \\\n -f psql-connect.yaml\n
"},{"location":"resources/kafka/schema.html","title":"Kafka Schema","text":""},{"location":"resources/kafka/schema.html#creating-a-kafkaschema","title":"Creating a KafkaSchema
","text":"Aiven develops and maintain Karapace, an open source implementation of Kafka REST and schema registry. Is available out of the box for our managed Kafka service.
The schema registry address and authentication are the same as for the Kafka broker; the only difference is that it uses port 13044.
First, let's create an Aiven for Apache Kafka service.
1. Create a file named kafka-sample-schema.yaml
and add the content below:
apiVersion: aiven.io/v1alpha1\nkind: Kafka\nmetadata:\n name: kafka-sample-schema\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n connInfoSecretTarget:\n name: kafka-auth\n\n project: <your-project-name>\n cloudName: google-europe-west1\n plan: startup-2\n maintenanceWindowDow: friday\n maintenanceWindowTime: 23:00:00\n\n userConfig:\n kafka_version: '2.7'\n\n # this flag enables the Schema registry\n schema_registry: true\n
2. Apply the changes with the following command:
kubectl apply -f kafka-sample-schema.yaml \n
Now, let's create the schema itself.
1. Create a new file named kafka-schema.yaml
and add the YAML content below:
apiVersion: aiven.io/v1alpha1\nkind: KafkaSchema\nmetadata:\n name: kafka-schema\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n project: <your-project-name>\n serviceName: kafka-sample-schema\n\n # the name of the Schema\n subjectName: MySchema\n\n # the schema itself, in JSON format\n schema: |\n {\n \"type\": \"record\",\n \"name\": \"MySchema\",\n \"fields\": [\n {\n \"name\": \"field\",\n \"type\": \"string\"\n }\n ]\n }\n\n # sets the schema compatibility level \n compatibilityLevel: BACKWARD\n
2. Create the schema with the command:
kubectl apply -f kafka-schema.yaml\n
3. Review the resource you created with the following command:
kubectl get kafkaschemas.aiven.io kafka-schema\n
The output is similar to the following:
NAME SERVICE NAME PROJECT SUBJECT COMPATIBILITY LEVEL VERSION\nkafka-schema kafka-sample-schema <your-project> MySchema BACKWARD 1\n
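You can also query the registry's REST API directly. A sketch, assuming the kafka-auth Secret created above, the avnadmin user shown in the decoded Secret earlier, and the Confluent-compatible /subjects endpoint that Karapace implements:
HOST=$(kubectl get secret kafka-auth -o jsonpath='{.data.HOST}' | base64 -d)\nPASSWORD=$(kubectl get secret kafka-auth -o jsonpath='{.data.PASSWORD}' | base64 -d)\ncurl -s -u \"avnadmin:$PASSWORD\" \"https://$HOST:13044/subjects\"\n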
Now you can follow the instructions on using a schema registry in Java to make use of the schema you created.
"}]} \ No newline at end of file diff --git a/search/search_index.json b/search/search_index.json index d8ba4a9c..8e444079 100644 --- a/search/search_index.json +++ b/search/search_index.json @@ -1 +1 @@ -{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"index.html","title":"Welcome to Aiven Operator for Kubernetes","text":"Provision and manage Aiven services from your Kubernetes cluster.
"},{"location":"index.html#what-is-aiven","title":"What is Aiven?","text":"Aiven offers managed services for the best open source data technologies, on a cloud of your choice.
We offer multiple cloud options because we believe that everyone should have access to great data platforms wherever they host their applications. Our customers tell us they love it because they know that they aren\u2019t locked in to one particular cloud platform for all their data needs.
"},{"location":"index.html#contributing","title":"Contributing","text":"The contribution guide covers everything you need to know about how you can contribute to Aiven Operator for Kubernetes. The developer guide will help you onboard as a developer.
"},{"location":"authentication.html","title":"Authenticating","text":"To get authenticated and authorized, set up the communication between the Aiven Operator for Kubernetes and Aiven by using a token stored in a Kubernetes secret. You can then refer to the secret name on every custom resource in the authSecretRef
field.
If you don't have an Aiven account yet, sign up here for a free trial. \ud83e\udd80
1. Generate an authentication token
Refer to our documentation article to generate your authentication token.
2. Create the Kubernetes Secret
The following command creates a secret named aiven-token
with a token
field containing the authentication token:
kubectl create secret generic aiven-token --from-literal=token=\"<your-token-here>\"\n
When managing your Aiven resources, we will be using the created Secret in the authSecretRef
field. It will look like the example below:
apiVersion: aiven.io/v1alpha1\nkind: PostgreSQL\nmetadata:\n name: pg-sample\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n [ ... ]\n
Also, note that within Aiven, all resources are conceptually inside a Project. By default, a random project name is generated when you signup, but you can also create new projects.
The Project name is required in most of the resources. It will look like the example below:
apiVersion: aiven.io/v1alpha1\nkind: PostgreSQL\nmetadata:\n name: pg-sample\nspec:\n project: <your-project-name-here>\n [ ... ]\n
"},{"location":"changelog.html","title":"Changelog","text":""},{"location":"changelog.html#v0150-2023-11-17","title":"v0.15.0 - 2023-11-17","text":"ServiceIntegration
: do not send empty user config to the API string
type fields to the documentationClickhouse
field userConfig.private_access.clickhouse_mysql
, type boolean
: Allow clients to connect to clickhouse_mysql with a DNS name that always resolves to the service's private IP addressesClickhouse
field userConfig.privatelink_access.clickhouse_mysql
, type boolean
: Enable clickhouse_mysqlClickhouse
field userConfig.public_access.clickhouse_mysql
, type boolean
: Allow clients to connect to clickhouse_mysql from the public internet for service nodes that are in a project VPC or another type of private networkGrafana
field userConfig.unified_alerting_enabled
, type boolean
: Enable or disable Grafana unified alerting functionalityKafka
field userConfig.aiven_kafka_topic_messages
, type boolean
: Allow access to read Kafka topic messages in the Aiven Console and REST APIKafka
field userConfig.kafka.sasl_oauthbearer_expected_audience
, type string
: The (optional) comma-delimited setting for the broker to use to verify that the JWT was issued for one of the expected audiencesKafka
field userConfig.kafka.sasl_oauthbearer_expected_issuer
, type string
: Optional setting for the broker to use to verify that the JWT was created by the expected issuerKafka
field userConfig.kafka.sasl_oauthbearer_jwks_endpoint_url
, type string
: OIDC JWKS endpoint URL. By setting this the SASL SSL OAuth2/OIDC authentication is enabledKafka
field userConfig.kafka.sasl_oauthbearer_sub_claim_name
, type string
: Name of the scope from which to extract the subject claim from the JWT. Defaults to subKafka
field userConfig.kafka_version
: enum [3.1, 3.3, 3.4, 3.5]
\u2192 [3.1, 3.3, 3.4, 3.5, 3.6]
Kafka
field userConfig.tiered_storage.local_cache.size
: deprecatedOpenSearch
field userConfig.opensearch.indices_memory_max_index_buffer_size
, type integer
: Absolute value. Default is unbound. Doesn't work without indices.memory.index_buffer_sizeOpenSearch
field userConfig.opensearch.indices_memory_min_index_buffer_size
, type integer
: Absolute value. Default is 48mb. Doesn't work without indices.memory.index_buffer_sizeOpenSearch
field userConfig.opensearch.auth_failure_listeners.internal_authentication_backend_limiting.authentication_backend
: enum [internal]
OpenSearch
field userConfig.opensearch.auth_failure_listeners.internal_authentication_backend_limiting.type
: enum [username]
OpenSearch
field userConfig.opensearch.auth_failure_listeners.ip_rate_limiting.type
: enum [ip]
OpenSearch
field userConfig.opensearch.search_max_buckets
: maximum 65536
\u2192 1000000
ServiceIntegration
field kafkaMirrormaker.kafka_mirrormaker.producer_max_request_size
: maximum 67108864
\u2192 268435456
projectVpcId
and projectVPCRef
mutablenil
user config conversionCassandra
kind option additional_backup_regions
Grafana
kind option auto_login
Kafka
kind properties log_local_retention_bytes
, log_local_retention_ms
Kafka
kind option remote_log_storage_system_enable
OpenSearch
kind option auth_failure_listeners
OpenSearch
kind Index State Management optionsKafka
Kafka
version 3.5
Kafka
spec property scheduled_rebalance_max_delay_ms
Kafka
spec property remote_log_storage_system_enable
KafkaConnect
spec property scheduled_rebalance_max_delay_ms
OpenSearch
spec property openid
KAFKA_SCHEMA_REGISTRY_HOST
and KAFKA_SCHEMA_REGISTRY_PORT
for Kafka
KAFKA_CONNECT_HOST
, KAFKA_CONNECT_PORT
, KAFKA_REST_HOST
and KAFKA_REST_PORT
for Kafka
. Thanks to @Dariuschunclean_leader_election_enable
from KafkaTopic
kind configKAFKA_SASL_PORT
for Kafka
kind if SASL
authentication method is enabledredis
options to datadog ServiceIntegration
Cassandra
version 3
Kafka
versions 3.1
and 3.4
kafka_rest_config.producer_max_request_size
optionkafka_mirrormaker.producer_compression_type
optionclusterRole.create
option to Helm chart. Thanks to @ryaneorthOpenSearch.spec.userConfig.idp_pemtrustedcas_content
option. Specifies the PEM-encoded root certificate authority (CA) content for the SAML identity provider (IdP) server verification.ServiceIntegration
kind SourceProjectName
and DestinationProjectName
fieldsServiceIntegration
fields MaxLength
validationServiceIntegration
validation: multiple user configs cannot be setServiceIntegration
, should not require destinationServiceName
or sourceEndpointID
fieldServiceIntegration
, add missing external_aws_cloudwatch_metrics
type config serializationServiceIntegration
integration type listannotations
and labels
fields to connInfoSecretTarget
OpenSearch.spec.userConfig.opensearch.search_max_buckets
maximum to 65536
plan
as a required fieldminumim
, maximum
validations for number
typeip_filter
backward compatabilityclickhouseKafka.tables.data_format-property
enum RawBLOB
valueuserConfig.opensearch.email_sender_username
validation patternlog_cleaner_min_cleanable_ratio
minimum and maximum validation rules3.2
, reached EOL10
, reached EOLProjectVPC
by ID
to avoid conflicts ProjectVPC
deletion by exiting on DELETING
statusClickhouseUser
controllerClickhouseUser.spec.project
and ClickhouseUser.spec.serviceName
as immutablesignalfx
AuthSecretRef
fields marked as requireddatadog
, kafka_connect
, kafka_logs
, metrics
clickhouse_postgresql
, clickhouse_kafka
, clickhouse_kafka
, logs
, external_aws_cloudwatch_metrics
KafkaTopic.Spec.topicName
field. Unlike the metadata.name
, supports additional characters and has a longer length. KafkaTopic.Spec.topicName
replaces metadata.name
in future releases and will be marked as required.false
value for termination_protection
propertymin_cleanable_dirty_ratio
. Thanks to @TV2rdImportant: This release brings breaking changes to the userConfig
property. After new charts are installed, update your existing instances manually using the kubectl edit
command according to the API reference.
Note: It is now recommended to disable webhooks for Kubernetes version 1.25 and higher, as native CRD validation rules are used.
ip_filter
field is now of object
typeserviceIntegrations
on service types. Only the read_replica
type is available.min_cleanable_dirty_ratio
config field supportspec.disk_space
propertylinux/amd64
build. Thanks to @christoffer-eidenever
from choices of maintenance dowdevelopment
flag to configure logger's behaviormake generate-user-configs
)genericServiceHandler
to generalize service management KafkaACL
deletionProjectVPCRef
property to Kafka
, OpenSearch
, Clickhouse
and Redis
kinds to get ProjectVPC
ID when resource is readyProjectVPC
deletion, deletes by ID first if possible, then tries by nameclient.Object
storage update data lossfeatures: * add Redis CRD
improvements: * watch CRDs to reconcile token secrets
fixes: * fix RBACs of KafkaACL CRD
"},{"location":"changelog.html#v011-2021-09-13","title":"v0.1.1 - 2021-09-13","text":"improvements: * update helm installation docs
fixes: * fix typo in a kafka-connector kuttl test
"},{"location":"changelog.html#v010-2021-09-10","title":"v0.1.0 - 2021-09-10","text":"features: * initial release
"},{"location":"troubleshooting.html","title":"Troubleshooting","text":""},{"location":"troubleshooting.html#verifying-operator-status","title":"Verifying operator status","text":"Use the following checks to help you troubleshoot the Aiven Operator for Kubernetes.
"},{"location":"troubleshooting.html#checking-the-pods","title":"Checking the Pods","text":"Verify that all the operator Pods are READY
, and the STATUS
is Running
.
kubectl get pod -n aiven-operator-system\n
The output is similar to the following:
NAME READY STATUS RESTARTS AGE\naiven-operator-controller-manager-576d944499-ggttj 1/1 Running 0 12m\n
Verify that the cert-manager
Pods are also running.
kubectl get pod -n cert-manager\n
The output has the status:
NAME READY STATUS RESTARTS AGE\ncert-manager-7dd5854bb4-85cpv 1/1 Running 0 76s\ncert-manager-cainjector-64c949654c-n2z8l 1/1 Running 0 77s\ncert-manager-webhook-6bdffc7c9d-47w6z 1/1 Running 0 76s\n
"},{"location":"troubleshooting.html#visualizing-the-operator-logs","title":"Visualizing the operator logs","text":"Use the following command to visualize all the logs from the operator.
kubectl logs -n aiven-operator-system -l control-plane=controller-manager\n
"},{"location":"troubleshooting.html#verifing-the-operator-version","title":"Verifing the operator version","text":"kubectl get pod -n aiven-operator-system -l control-plane=controller-manager -o jsonpath=\"{.items[0].spec.containers[0].image}\"\n
"},{"location":"troubleshooting.html#known-issues-and-limitations","title":"Known issues and limitations","text":"We're always working to resolve problems that pop up in Aiven products. If your problem is listed below, we know about it and are working to fix it. If your problem isn't listed below, report it as an issue.
"},{"location":"troubleshooting.html#cert-manager","title":"cert-manager","text":""},{"location":"troubleshooting.html#issue","title":"Issue","text":"The following event appears on the operator Pod:
MountVolume.SetUp failed for volume \"cert\" : secret \"webhook-server-cert\" not found\n
"},{"location":"troubleshooting.html#impact","title":"Impact","text":"You cannot run the operator.
"},{"location":"troubleshooting.html#solution","title":"Solution","text":"Make sure that cert-manager is up and running.
kubectl get pod -n cert-manager\n
The output shows the status of each cert-manager:
NAME READY STATUS RESTARTS AGE\ncert-manager-7dd5854bb4-85cpv 1/1 Running 0 76s\ncert-manager-cainjector-64c949654c-n2z8l 1/1 Running 0 77s\ncert-manager-webhook-6bdffc7c9d-47w6z 1/1 Running 0 76s\n
"},{"location":"api-reference/index.html","title":"aiven.io/v1alpha1","text":"Autogenerated from CRD files.
"},{"location":"api-reference/cassandra.html","title":"Cassandra","text":""},{"location":"api-reference/cassandra.html#usage-example","title":"Usage example","text":"apiVersion: aiven.io/v1alpha1\nkind: Cassandra\nmetadata:\n name: my-cassandra\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n connInfoSecretTarget:\n name: cassandra-secret\n prefix: MY_SECRET_PREFIX_\n annotations:\n foo: bar\n labels:\n baz: egg\n\n project: aiven-project-name\n cloudName: google-europe-west1\n plan: startup-4\n\n maintenanceWindowDow: sunday\n maintenanceWindowTime: 11:00:00\n\n userConfig:\n migrate_sstableloader: true\n public_access:\n prometheus: true\n ip_filter:\n - network: 0.0.0.0\n description: whatever\n - network: 10.20.0.0/16\n
"},{"location":"api-reference/cassandra.html#Cassandra","title":"Cassandra","text":"Cassandra is the Schema for the cassandras API.
Required
apiVersion
(string). Value aiven.io/v1alpha1
.kind
(string). Value Cassandra
.metadata
(object). Data that identifies the object, including a name
string and optional namespace
.spec
(object). CassandraSpec defines the desired state of Cassandra. See below for nested schema.Appears on Cassandra
.
CassandraSpec defines the desired state of Cassandra.
Required
plan
(string, MaxLength: 128). Subscription plan.project
(string, Immutable, MaxLength: 63, Format: ^[a-zA-Z0-9_-]*$
). Target project.Optional
authSecretRef
(object). Authentication reference to Aiven token in a secret. See below for nested schema.cloudName
(string, MaxLength: 256). Cloud the service runs in.connInfoSecretTarget
(object). Information regarding secret creation. Exposed keys: CASSANDRA_HOST
, CASSANDRA_PORT
, CASSANDRA_USER
, CASSANDRA_PASSWORD
, CASSANDRA_URI
, CASSANDRA_HOSTS
. See below for nested schema.disk_space
(string, Format: ^[1-9][0-9]*(GiB|G)*
). The disk space of the service, possible values depend on the service type, the cloud provider and the project. Reducing will result in the service re-balancing.maintenanceWindowDow
(string, Enum: monday
, tuesday
, wednesday
, thursday
, friday
, saturday
, sunday
). Day of week when maintenance operations should be performed. One monday, tuesday, wednesday, etc.maintenanceWindowTime
(string, MaxLength: 8). Time of day when maintenance operations should be performed. UTC time in HH:mm:ss format.projectVPCRef
(object). ProjectVPCRef reference to ProjectVPC resource to use its ID as ProjectVPCID automatically. See below for nested schema.projectVpcId
(string, MaxLength: 36). Identifier of the VPC the service should be in, if any.serviceIntegrations
(array of objects, Immutable, MaxItems: 1). Service integrations to specify when creating a service. Not applied after initial service creation. See below for nested schema.tags
(object, AdditionalProperties: string). Tags are key-value pairs that allow you to categorize services.terminationProtection
(boolean). Prevent service from being deleted. It is recommended to have this enabled for all services.userConfig
(object). Cassandra specific user configuration options. See below for nested schema.Appears on spec
.
Authentication reference to Aiven token in a secret.
Required
key
(string, MinLength: 1). name
(string, MinLength: 1). Appears on spec
.
Information regarding secret creation. Exposed keys: CASSANDRA_HOST
, CASSANDRA_PORT
, CASSANDRA_USER
, CASSANDRA_PASSWORD
, CASSANDRA_URI
, CASSANDRA_HOSTS
.
Required
name
(string). Name of the secret resource to be created. By default, is equal to the resource name.Optional
annotations
(object, AdditionalProperties: string). Annotations added to the secret.labels
(object, AdditionalProperties: string). Labels added to the secret.prefix
(string). Prefix for the secret's keys. Added \"as is\" without any transformations. By default, is equal to the kind name in uppercase + underscore, e.g. KAFKA_
, REDIS_
, etc.Appears on spec
.
ProjectVPCRef reference to ProjectVPC resource to use its ID as ProjectVPCID automatically.
Required
name
(string, MinLength: 1). Optional
namespace
(string, MinLength: 1). Appears on spec
.
Service integrations to specify when creating a service. Not applied after initial service creation.
Required
integrationType
(string, Enum: read_replica
). sourceServiceName
(string, MinLength: 1, MaxLength: 64). Appears on spec
.
Cassandra specific user configuration options.
Optional
additional_backup_regions
(array of strings, MaxItems: 1). Deprecated. Additional Cloud Regions for Backup Replication.backup_hour
(integer, Minimum: 0, Maximum: 23). The hour of day (in UTC) when backup for the service is started. New backup is only started if previous backup has already completed.backup_minute
(integer, Minimum: 0, Maximum: 59). The minute of an hour when backup for the service is started. New backup is only started if previous backup has already completed.cassandra
(object). cassandra configuration values. See below for nested schema.cassandra_version
(string, Enum: 4
, 3
). Cassandra major version.ip_filter
(array of objects, MaxItems: 1024). Allow incoming connections from CIDR address block, e.g. 10.20.0.0/16
. See below for nested schema.migrate_sstableloader
(boolean). Sets the service into migration mode enabling the sstableloader utility to be used to upload Cassandra data files. Available only on service create.private_access
(object). Allow access to selected service ports from private networks. See below for nested schema.project_to_fork_from
(string, Immutable, MaxLength: 63). Name of another project to fork a service from. This has effect only when a new service is being created.public_access
(object). Allow access to selected service ports from the public Internet. See below for nested schema.service_to_fork_from
(string, Immutable, MaxLength: 64). Name of another service to fork from. This has effect only when a new service is being created.service_to_join_with
(string, MaxLength: 64). When bootstrapping, instead of creating a new Cassandra cluster try to join an existing one from another service. Can only be set on service creation.static_ips
(boolean). Use static public IP addresses.Appears on spec.userConfig
.
cassandra configuration values.
Optional
batch_size_fail_threshold_in_kb
(integer, Minimum: 1, Maximum: 1000000). Fail any multiple-partition batch exceeding this value. 50kb (10x warn threshold) by default.batch_size_warn_threshold_in_kb
(integer, Minimum: 1, Maximum: 1000000). Log a warning message on any multiple-partition batch size exceeding this value.5kb per batch by default.Caution should be taken on increasing the size of this thresholdas it can lead to node instability.datacenter
(string, MaxLength: 128). Name of the datacenter to which nodes of this service belong. Can be set only when creating the service.Appears on spec.userConfig
.
Allow incoming connections from CIDR address block, e.g. 10.20.0.0/16
.
Required
network
(string, MaxLength: 43). CIDR address block.Optional
description
(string, MaxLength: 1024). Description for IP filter list entry.Appears on spec.userConfig
.
Allow access to selected service ports from private networks.
Required
prometheus
(boolean). Allow clients to connect to prometheus with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.Appears on spec.userConfig
.
Allow access to selected service ports from the public Internet.
Required
prometheus
(boolean). Allow clients to connect to prometheus from the public internet for service nodes that are in a project VPC or another type of private network.apiVersion: aiven.io/v1alpha1\nkind: Clickhouse\nmetadata:\n name: my-clickhouse\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n connInfoSecretTarget:\n name: clickhouse-secret\n prefix: MY_SECRET_PREFIX_\n annotations:\n foo: bar\n labels:\n baz: egg\n\n project: my-aiven-project\n cloudName: google-europe-west1\n plan: startup-16\n\n maintenanceWindowDow: friday\n maintenanceWindowTime: 23:00:00\n
"},{"location":"api-reference/clickhouse.html#Clickhouse","title":"Clickhouse","text":"Clickhouse is the Schema for the clickhouses API.
Required
apiVersion
(string). Value aiven.io/v1alpha1
.kind
(string). Value Clickhouse
.metadata
(object). Data that identifies the object, including a name
string and optional namespace
.spec
(object). ClickhouseSpec defines the desired state of Clickhouse. See below for nested schema.Appears on Clickhouse
.
ClickhouseSpec defines the desired state of Clickhouse.
Required
plan
(string, MaxLength: 128). Subscription plan.project
(string, Immutable, MaxLength: 63, Format: ^[a-zA-Z0-9_-]*$
). Target project.Optional
authSecretRef
(object). Authentication reference to Aiven token in a secret. See below for nested schema.cloudName
(string, MaxLength: 256). Cloud the service runs in.connInfoSecretTarget
(object). Information regarding secret creation. Exposed keys: CLICKHOUSE_HOST
, CLICKHOUSE_PORT
, CLICKHOUSE_USER
, CLICKHOUSE_PASSWORD
. See below for nested schema.disk_space
(string, Format: ^[1-9][0-9]*(GiB|G)*
). The disk space of the service, possible values depend on the service type, the cloud provider and the project. Reducing will result in the service re-balancing.maintenanceWindowDow
(string, Enum: monday
, tuesday
, wednesday
, thursday
, friday
, saturday
, sunday
). Day of week when maintenance operations should be performed. One monday, tuesday, wednesday, etc.maintenanceWindowTime
(string, MaxLength: 8). Time of day when maintenance operations should be performed. UTC time in HH:mm:ss format.projectVPCRef
(object). ProjectVPCRef reference to ProjectVPC resource to use its ID as ProjectVPCID automatically. See below for nested schema.projectVpcId
(string, MaxLength: 36). Identifier of the VPC the service should be in, if any.serviceIntegrations
(array of objects, Immutable, MaxItems: 1). Service integrations to specify when creating a service. Not applied after initial service creation. See below for nested schema.tags
(object, AdditionalProperties: string). Tags are key-value pairs that allow you to categorize services.terminationProtection
(boolean). Prevent service from being deleted. It is recommended to have this enabled for all services.userConfig
(object). OpenSearch specific user configuration options. See below for nested schema.Appears on spec
.
Authentication reference to Aiven token in a secret.
Required
key
(string, MinLength: 1). name
(string, MinLength: 1). Appears on spec
.
Information regarding secret creation. Exposed keys: CLICKHOUSE_HOST
, CLICKHOUSE_PORT
, CLICKHOUSE_USER
, CLICKHOUSE_PASSWORD
.
Required
name
(string). Name of the secret resource to be created. By default, is equal to the resource name.Optional
annotations
(object, AdditionalProperties: string). Annotations added to the secret.labels
(object, AdditionalProperties: string). Labels added to the secret.prefix
(string). Prefix for the secret's keys. Added \"as is\" without any transformations. By default, is equal to the kind name in uppercase + underscore, e.g. KAFKA_
, REDIS_
, etc.Appears on spec
.
ProjectVPCRef reference to ProjectVPC resource to use its ID as ProjectVPCID automatically.
Required
name
(string, MinLength: 1). Optional
namespace
(string, MinLength: 1). Appears on spec
.
Service integrations to specify when creating a service. Not applied after initial service creation.
Required
integrationType
(string, Enum: read_replica
). sourceServiceName
(string, MinLength: 1, MaxLength: 64). Appears on spec
.
OpenSearch specific user configuration options.
Optional
additional_backup_regions
(array of strings, MaxItems: 1). Additional Cloud Regions for Backup Replication.ip_filter
(array of objects, MaxItems: 1024). Allow incoming connections from CIDR address block, e.g. 10.20.0.0/16
. See below for nested schema.private_access
(object). Allow access to selected service ports from private networks. See below for nested schema.privatelink_access
(object). Allow access to selected service components through Privatelink. See below for nested schema.project_to_fork_from
(string, Immutable, MaxLength: 63). Name of another project to fork a service from. This has effect only when a new service is being created.public_access
(object). Allow access to selected service ports from the public Internet. See below for nested schema.service_to_fork_from
(string, Immutable, MaxLength: 64). Name of another service to fork from. This has effect only when a new service is being created.static_ips
(boolean). Use static public IP addresses.Appears on spec.userConfig
.
Allow incoming connections from CIDR address block, e.g. 10.20.0.0/16
.
Required
network
(string, MaxLength: 43). CIDR address block.Optional
description
(string, MaxLength: 1024). Description for IP filter list entry.Appears on spec.userConfig
.
Allow access to selected service ports from private networks.
Optional
clickhouse
(boolean). Allow clients to connect to clickhouse with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.clickhouse_https
(boolean). Allow clients to connect to clickhouse_https with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.clickhouse_mysql
(boolean). Allow clients to connect to clickhouse_mysql with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.prometheus
(boolean). Allow clients to connect to prometheus with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.Appears on spec.userConfig
.
Allow access to selected service components through Privatelink.
Optional
clickhouse
(boolean). Enable clickhouse.clickhouse_https
(boolean). Enable clickhouse_https.clickhouse_mysql
(boolean). Enable clickhouse_mysql.prometheus
(boolean). Enable prometheus.Appears on spec.userConfig
.
Allow access to selected service ports from the public Internet.
Optional
clickhouse
(boolean). Allow clients to connect to clickhouse from the public internet for service nodes that are in a project VPC or another type of private network.clickhouse_https
(boolean). Allow clients to connect to clickhouse_https from the public internet for service nodes that are in a project VPC or another type of private network.clickhouse_mysql
(boolean). Allow clients to connect to clickhouse_mysql from the public internet for service nodes that are in a project VPC or another type of private network.prometheus
(boolean). Allow clients to connect to prometheus from the public internet for service nodes that are in a project VPC or another type of private network.apiVersion: aiven.io/v1alpha1\nkind: ClickhouseUser\nmetadata:\n name: my-clickhouse-user\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n connInfoSecretTarget:\n name: clickhouse-user-secret\n prefix: MY_SECRET_PREFIX_\n annotations:\n foo: bar\n labels:\n baz: egg\n\n project: my-aiven-project\n serviceName: my-clickhouse\n
"},{"location":"api-reference/clickhouseuser.html#ClickhouseUser","title":"ClickhouseUser","text":"ClickhouseUser is the Schema for the clickhouseusers API.
Required
apiVersion
(string). Value aiven.io/v1alpha1
.kind
(string). Value ClickhouseUser
.metadata
(object). Data that identifies the object, including a name
string and optional namespace
.spec
(object). ClickhouseUserSpec defines the desired state of ClickhouseUser. See below for nested schema.Appears on ClickhouseUser
.
ClickhouseUserSpec defines the desired state of ClickhouseUser.
Required
project
(string, Immutable, MaxLength: 63, Format: ^[a-zA-Z0-9_-]*$
). Project to link the user to.serviceName
(string, Immutable, MaxLength: 63). Service to link the user to.Optional
authSecretRef
(object). Authentication reference to Aiven token in a secret. See below for nested schema.connInfoSecretTarget
(object). Information regarding secret creation. Exposed keys: CLICKHOUSEUSER_HOST
, CLICKHOUSEUSER_PORT
, CLICKHOUSEUSER_USER
, CLICKHOUSEUSER_PASSWORD
. See below for nested schema.Appears on spec
.
Authentication reference to Aiven token in a secret.
Required
key
(string, MinLength: 1). name
(string, MinLength: 1). Appears on spec
.
Information regarding secret creation. Exposed keys: CLICKHOUSEUSER_HOST
, CLICKHOUSEUSER_PORT
, CLICKHOUSEUSER_USER
, CLICKHOUSEUSER_PASSWORD
.
Required
name
(string). Name of the secret resource to be created. By default, is equal to the resource name.Optional
annotations
(object, AdditionalProperties: string). Annotations added to the secret.labels
(object, AdditionalProperties: string). Labels added to the secret.prefix
(string). Prefix for the secret's keys. Added \"as is\" without any transformations. By default, is equal to the kind name in uppercase + underscore, e.g. KAFKA_
, REDIS_
, etc.apiVersion: aiven.io/v1alpha1\nkind: ConnectionPool\nmetadata:\n name: my-connection-pool\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n connInfoSecretTarget:\n name: connection-pool-secret\n prefix: MY_SECRET_PREFIX_\n annotations:\n foo: bar\n labels:\n baz: egg\n\n project: aiven-project-name\n serviceName: google-europe-west1\n databaseName: my-db\n username: my-user\n poolMode: transaction\n poolSize: 25\n
"},{"location":"api-reference/connectionpool.html#ConnectionPool","title":"ConnectionPool","text":"ConnectionPool is the Schema for the connectionpools API.
Required
apiVersion
(string). Value aiven.io/v1alpha1
.kind
(string). Value ConnectionPool
.metadata
(object). Data that identifies the object, including a name
string and optional namespace
.spec
(object). ConnectionPoolSpec defines the desired state of ConnectionPool. See below for nested schema.Appears on ConnectionPool
.
ConnectionPoolSpec defines the desired state of ConnectionPool.
Required
databaseName
(string, MaxLength: 40). Name of the database the pool connects to.project
(string, MaxLength: 63, Format: ^[a-zA-Z0-9_-]*$
). Target project.serviceName
(string, MaxLength: 63). Service name.username
(string, MaxLength: 64). Name of the service user used to connect to the database.Optional
authSecretRef
(object). Authentication reference to Aiven token in a secret. See below for nested schema.connInfoSecretTarget
(object). Information regarding secret creation. Exposed keys: CONNECTIONPOOL_HOST
, CONNECTIONPOOL_PORT
, CONNECTIONPOOL_DATABASE
, CONNECTIONPOOL_USER
, CONNECTIONPOOL_PASSWORD
, CONNECTIONPOOL_SSLMODE
, CONNECTIONPOOL_DATABASE_URI
. See below for nested schema.poolMode
(string, Enum: session
, transaction
, statement
). Mode the pool operates in (session, transaction, statement).poolSize
(integer). Number of connections the pool may create towards the backend server.Appears on spec
.
Authentication reference to Aiven token in a secret.
Required
key
(string, MinLength: 1). name
(string, MinLength: 1). Appears on spec
.
Information regarding secret creation. Exposed keys: CONNECTIONPOOL_HOST
, CONNECTIONPOOL_PORT
, CONNECTIONPOOL_DATABASE
, CONNECTIONPOOL_USER
, CONNECTIONPOOL_PASSWORD
, CONNECTIONPOOL_SSLMODE
, CONNECTIONPOOL_DATABASE_URI
.
Required
name
(string). Name of the secret resource to be created. By default, is equal to the resource name.Optional
annotations
(object, AdditionalProperties: string). Annotations added to the secret.labels
(object, AdditionalProperties: string). Labels added to the secret.prefix
(string). Prefix for the secret's keys. Added \"as is\" without any transformations. By default, is equal to the kind name in uppercase + underscore, e.g. KAFKA_
, REDIS_
, etc.apiVersion: aiven.io/v1alpha1\nkind: Database\nmetadata:\n name: my-db\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n project: aiven-project-name\n serviceName: google-europe-west1\n\n lcCtype: en_US.UTF-8\n lcCollate: en_US.UTF-8\n
"},{"location":"api-reference/database.html#Database","title":"Database","text":"Database is the Schema for the databases API.
Required
apiVersion
(string). Value aiven.io/v1alpha1
.kind
(string). Value Database
.metadata
(object). Data that identifies the object, including a name
string and optional namespace
.spec
(object). DatabaseSpec defines the desired state of Database. See below for nested schema.Appears on Database
.
DatabaseSpec defines the desired state of Database.
Required
project
(string, MaxLength: 63, Format: ^[a-zA-Z0-9_-]*$
). Project to link the database to.serviceName
(string, MaxLength: 63). PostgreSQL service to link the database to.Optional
authSecretRef
(object). Authentication reference to Aiven token in a secret. See below for nested schema.lcCollate
(string, MaxLength: 128). Default string sort order (LC_COLLATE) of the database. Default value: en_US.UTF-8.lcCtype
(string, MaxLength: 128). Default character classification (LC_CTYPE) of the database. Default value: en_US.UTF-8.terminationProtection
(boolean). It is a Kubernetes side deletion protections, which prevents the database from being deleted by Kubernetes. It is recommended to enable this for any production databases containing critical data.Appears on spec
.
Authentication reference to Aiven token in a secret.
Required
key
(string, MinLength: 1). name
(string, MinLength: 1). apiVersion: aiven.io/v1alpha1\nkind: Grafana\nmetadata:\n name: my-grafana\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n connInfoSecretTarget:\n name: grafana-secret\n prefix: MY_SECRET_PREFIX_\n annotations:\n foo: bar\n labels:\n baz: egg\n\n project: my-aiven-project\n cloudName: google-europe-west1\n plan: startup-1\n\n maintenanceWindowDow: sunday\n maintenanceWindowTime: 11:00:00\n\n userConfig:\n public_access:\n grafana: true\n ip_filter:\n - network: 0.0.0.0\n description: whatever\n - network: 10.20.0.0/16\n
"},{"location":"api-reference/grafana.html#Grafana","title":"Grafana","text":"Grafana is the Schema for the grafanas API.
Required
apiVersion
(string). Value aiven.io/v1alpha1
.kind
(string). Value Grafana
.metadata
(object). Data that identifies the object, including a name
string and optional namespace
.spec
(object). GrafanaSpec defines the desired state of Grafana. See below for nested schema.Appears on Grafana
.
GrafanaSpec defines the desired state of Grafana.
Required
plan
(string, MaxLength: 128). Subscription plan.project
(string, Immutable, MaxLength: 63, Format: ^[a-zA-Z0-9_-]*$
). Target project.Optional
authSecretRef
(object). Authentication reference to Aiven token in a secret. See below for nested schema.cloudName
(string, MaxLength: 256). Cloud the service runs in.connInfoSecretTarget
(object). Information regarding secret creation. Exposed keys: GRAFANA_HOST
, GRAFANA_PORT
, GRAFANA_USER
, GRAFANA_PASSWORD
, GRAFANA_URI
, GRAFANA_HOSTS
. See below for nested schema.disk_space
(string, Format: ^[1-9][0-9]*(GiB|G)*
). The disk space of the service, possible values depend on the service type, the cloud provider and the project. Reducing will result in the service re-balancing.maintenanceWindowDow
(string, Enum: monday
, tuesday
, wednesday
, thursday
, friday
, saturday
, sunday
). Day of week when maintenance operations should be performed. One monday, tuesday, wednesday, etc.maintenanceWindowTime
(string, MaxLength: 8). Time of day when maintenance operations should be performed. UTC time in HH:mm:ss format.projectVPCRef
(object). ProjectVPCRef reference to ProjectVPC resource to use its ID as ProjectVPCID automatically. See below for nested schema.projectVpcId
(string, MaxLength: 36). Identifier of the VPC the service should be in, if any.serviceIntegrations
(array of objects, Immutable, MaxItems: 1). Service integrations to specify when creating a service. Not applied after initial service creation. See below for nested schema.tags
(object, AdditionalProperties: string). Tags are key-value pairs that allow you to categorize services.terminationProtection
(boolean). Prevent service from being deleted. It is recommended to have this enabled for all services.userConfig
(object). Cassandra specific user configuration options. See below for nested schema.Appears on spec
.
Authentication reference to Aiven token in a secret.
Required
key
(string, MinLength: 1). name
(string, MinLength: 1). Appears on spec
Information regarding secret creation. Exposed keys: `GRAFANA_HOST`, `GRAFANA_PORT`, `GRAFANA_USER`, `GRAFANA_PASSWORD`, `GRAFANA_URI`, `GRAFANA_HOSTS`. Appears on `spec`.

Required

- `name` (string). Name of the secret resource to be created. By default, it is equal to the resource name.

Optional

- `annotations` (object, AdditionalProperties: string). Annotations added to the secret.
- `labels` (object, AdditionalProperties: string). Labels added to the secret.
- `prefix` (string). Prefix for the secret's keys. Added "as is" without any transformations. By default, it is equal to the kind name in uppercase + underscore, e.g. `KAFKA_`, `REDIS_`, etc.
Reference to a ProjectVPC resource whose ID is used as `projectVpcId` automatically. Appears on `spec`.

Required

- `name` (string, MinLength: 1).

Optional

- `namespace` (string, MinLength: 1).
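For illustration, a minimal sketch of a service that resolves its VPC through `projectVPCRef` instead of a hard-coded `projectVpcId`; the ProjectVPC name `my-vpc` is an assumed example:

```yaml
apiVersion: aiven.io/v1alpha1
kind: Grafana
metadata:
  name: my-grafana
spec:
  authSecretRef:
    name: aiven-token
    key: token
  project: my-aiven-project
  cloudName: google-europe-west1
  plan: startup-1
  # The operator resolves the referenced ProjectVPC resource and fills in
  # projectVpcId automatically; no ID needs to be copied by hand.
  projectVPCRef:
    name: my-vpc
```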
Service integrations to specify when creating a service. Not applied after initial service creation. Appears on `spec`.

Required

- `integrationType` (string, Enum: `read_replica`).
- `sourceServiceName` (string, MinLength: 1, MaxLength: 64).
Grafana specific user configuration options. Appears on `spec`.

Optional

- `additional_backup_regions` (array of strings, MaxItems: 1). Additional Cloud Regions for Backup Replication.
- `alerting_enabled` (boolean). Enable or disable Grafana legacy alerting functionality. This should not be enabled together with unified_alerting_enabled.
- `alerting_error_or_timeout` (string, Enum: `alerting`, `keep_state`). Default error or timeout setting for new alerting rules.
- `alerting_max_annotations_to_keep` (integer, Minimum: 0, Maximum: 1000000). Max number of alert annotations that Grafana stores. 0 (default) keeps all alert annotations.
- `alerting_nodata_or_nullvalues` (string, Enum: `alerting`, `no_data`, `keep_state`, `ok`). Default value for "no data or null values" for new alerting rules.
- `allow_embedding` (boolean). Allow embedding Grafana dashboards with iframe/frame/object/embed tags. Disabled by default to limit the impact of clickjacking.
- `auth_azuread` (object). Azure AD OAuth integration. See below for nested schema.
- `auth_basic_enabled` (boolean). Enable or disable the basic authentication form, used by the Grafana built-in login.
- `auth_generic_oauth` (object). Generic OAuth integration. See below for nested schema.
- `auth_github` (object). GitHub Auth integration. See below for nested schema.
- `auth_gitlab` (object). GitLab Auth integration. See below for nested schema.
- `auth_google` (object). Google Auth integration. See below for nested schema.
- `cookie_samesite` (string, Enum: `lax`, `strict`, `none`). Cookie SameSite attribute: `strict` prevents sending the cookie for cross-site requests, effectively disabling direct linking from other sites to Grafana. `lax` is the default value.
- `custom_domain` (string, MaxLength: 255). Serve the web frontend using a custom CNAME pointing to the Aiven DNS name.
- `dashboard_previews_enabled` (boolean). This feature is new in Grafana 9 and is quite resource intensive. It may cause low-end plans to work more slowly while the dashboard previews are rendering.
- `dashboards_min_refresh_interval` (string, Pattern: `^[0-9]+(ms|s|m|h|d)$`, MaxLength: 16). Signed sequence of decimal numbers, followed by a unit suffix (ms, s, m, h, d), e.g. 30s, 1h.
- `dashboards_versions_to_keep` (integer, Minimum: 1, Maximum: 100). Dashboard versions to keep per dashboard.
- `dataproxy_send_user_header` (boolean). Send the `X-Grafana-User` header to the data source.
- `dataproxy_timeout` (integer, Minimum: 15, Maximum: 90). Timeout for data proxy requests in seconds.
- `date_formats` (object). Grafana date format specifications. See below for nested schema.
- `disable_gravatar` (boolean). Set to true to disable gravatar. Defaults to false (gravatar is enabled).
- `editors_can_admin` (boolean). Editors can manage folders, teams and dashboards created by them.
- `external_image_storage` (object). External image store settings. See below for nested schema.
- `google_analytics_ua_id` (string, Pattern: `^(G|UA|YT|MO)-[a-zA-Z0-9-]+$`, MaxLength: 64). Google Analytics ID.
- `ip_filter` (array of objects, MaxItems: 1024). Allow incoming connections from a CIDR address block, e.g. `10.20.0.0/16`. See below for nested schema.
- `metrics_enabled` (boolean). Enable the Grafana /metrics endpoint.
- `oauth_allow_insecure_email_lookup` (boolean). Enforce user lookup based on email instead of the unique ID provided by the IdP.
- `private_access` (object). Allow access to selected service ports from private networks. See below for nested schema.
- `privatelink_access` (object). Allow access to selected service components through Privatelink. See below for nested schema.
- `project_to_fork_from` (string, Immutable, MaxLength: 63). Name of another project to fork a service from. This has effect only when a new service is being created.
- `public_access` (object). Allow access to selected service ports from the public Internet. See below for nested schema.
- `recovery_basebackup_name` (string, Pattern: `^[a-zA-Z0-9-_:.]+$`, MaxLength: 128). Name of the basebackup to restore in a forked service.
- `service_to_fork_from` (string, Immutable, MaxLength: 64). Name of another service to fork from. This has effect only when a new service is being created.
- `smtp_server` (object). SMTP server settings. See below for nested schema.
- `static_ips` (boolean). Use static public IP addresses.
- `unified_alerting_enabled` (boolean). Enable or disable Grafana unified alerting functionality. By default this is enabled and any legacy alerts will be migrated on upgrade to Grafana 9+. To stay on legacy alerting, set unified_alerting_enabled to false and alerting_enabled to true. See https://grafana.com/docs/grafana/latest/alerting/set-up/migrating-alerts/ for more details.
- `user_auto_assign_org` (boolean). Auto-assign new users on signup to the main organization. Defaults to false.
- `user_auto_assign_org_role` (string, Enum: `Viewer`, `Admin`, `Editor`). Set the role for new signups. Defaults to Viewer.
- `viewers_can_edit` (boolean). Users with view-only permission can edit but not save dashboards.
Azure AD OAuth integration. Appears on `spec.userConfig`.

Required

- `auth_url` (string, MaxLength: 2048). Authorization URL.
- `client_id` (string, Pattern: `^[\040-\176]+$`, MaxLength: 1024). Client ID from provider.
- `client_secret` (string, Pattern: `^[\040-\176]+$`, MaxLength: 1024). Client secret from provider.
- `token_url` (string, MaxLength: 2048). Token URL.

Optional

- `allow_sign_up` (boolean). Automatically sign up users on successful sign-in.
- `allowed_domains` (array of strings, MaxItems: 50). Allowed domains.
- `allowed_groups` (array of strings, MaxItems: 50). Require users to belong to one of the given groups.
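As a sketch, an Azure AD integration could be declared as follows; the tenant endpoints and credentials below are placeholders, not working values:

```yaml
apiVersion: aiven.io/v1alpha1
kind: Grafana
metadata:
  name: my-grafana
spec:
  authSecretRef:
    name: aiven-token
    key: token
  project: my-aiven-project
  plan: startup-1
  userConfig:
    auth_azuread:
      # Placeholder tenant endpoints and credentials.
      auth_url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
      token_url: https://login.microsoftonline.com/common/oauth2/v2.0/token
      client_id: my-client-id
      client_secret: my-client-secret
      allow_sign_up: true
```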
Generic OAuth integration. Appears on `spec.userConfig`.

Required

- `api_url` (string, MaxLength: 2048). API URL.
- `auth_url` (string, MaxLength: 2048). Authorization URL.
- `client_id` (string, Pattern: `^[\040-\176]+$`, MaxLength: 1024). Client ID from provider.
- `client_secret` (string, Pattern: `^[\040-\176]+$`, MaxLength: 1024). Client secret from provider.
- `token_url` (string, MaxLength: 2048). Token URL.

Optional

- `allow_sign_up` (boolean). Automatically sign up users on successful sign-in.
- `allowed_domains` (array of strings, MaxItems: 50). Allowed domains.
- `allowed_organizations` (array of strings, MaxItems: 50). Require the user to be a member of one of the listed organizations.
- `auto_login` (boolean). Allow users to bypass the login screen and automatically log in.
- `name` (string, Pattern: `^[a-zA-Z0-9_\- ]+$`, MaxLength: 128). Name of the OAuth integration.
- `scopes` (array of strings, MaxItems: 50). OAuth scopes.
GitHub Auth integration. Appears on `spec.userConfig`.

Required

- `client_id` (string, Pattern: `^[\040-\176]+$`, MaxLength: 1024). Client ID from provider.
- `client_secret` (string, Pattern: `^[\040-\176]+$`, MaxLength: 1024). Client secret from provider.

Optional

- `allow_sign_up` (boolean). Automatically sign up users on successful sign-in.
- `allowed_organizations` (array of strings, MaxItems: 50). Require users to belong to one of the given organizations.
- `team_ids` (array of integers, MaxItems: 50). Require users to belong to one of the given team IDs.
GitLab Auth integration. Appears on `spec.userConfig`.

Required

- `allowed_groups` (array of strings, MaxItems: 50). Require users to belong to one of the given groups.
- `client_id` (string, Pattern: `^[\040-\176]+$`, MaxLength: 1024). Client ID from provider.
- `client_secret` (string, Pattern: `^[\040-\176]+$`, MaxLength: 1024). Client secret from provider.

Optional

- `allow_sign_up` (boolean). Automatically sign up users on successful sign-in.
- `api_url` (string, MaxLength: 2048). API URL. This only needs to be set when using self-hosted GitLab.
- `auth_url` (string, MaxLength: 2048). Authorization URL. This only needs to be set when using self-hosted GitLab.
- `token_url` (string, MaxLength: 2048). Token URL. This only needs to be set when using self-hosted GitLab.
Google Auth integration. Appears on `spec.userConfig`.

Required

- `allowed_domains` (array of strings, MaxItems: 64). Domains allowed to sign in to this Grafana.
- `client_id` (string, Pattern: `^[\040-\176]+$`, MaxLength: 1024). Client ID from provider.
- `client_secret` (string, Pattern: `^[\040-\176]+$`, MaxLength: 1024). Client secret from provider.

Optional

- `allow_sign_up` (boolean). Automatically sign up users on successful sign-in.
Grafana date format specifications. Appears on `spec.userConfig`.

Optional

- `default_timezone` (string, MaxLength: 64). Default time zone for user preferences. The value `browser` uses the browser's local time zone.
- `full_date` (string, MaxLength: 128). Moment.js style format string for cases where the full date is shown.
- `interval_day` (string, MaxLength: 128). Moment.js style format string used when a time requiring day accuracy is shown.
- `interval_hour` (string, MaxLength: 128). Moment.js style format string used when a time requiring hour accuracy is shown.
- `interval_minute` (string, MaxLength: 128). Moment.js style format string used when a time requiring minute accuracy is shown.
- `interval_month` (string, MaxLength: 128). Moment.js style format string used when a time requiring month accuracy is shown.
- `interval_second` (string, MaxLength: 128). Moment.js style format string used when a time requiring second accuracy is shown.
- `interval_year` (string, MaxLength: 128). Moment.js style format string used when a time requiring year accuracy is shown.
External image store settings. Appears on `spec.userConfig`.

Required

- `access_key` (string, Pattern: `^[A-Z0-9]+$`, MaxLength: 4096). S3 access key. Requires permissions on the S3 bucket for the s3:PutObject and s3:PutObjectAcl actions.
- `bucket_url` (string, MaxLength: 2048). Bucket URL for S3.
- `provider` (string, Enum: `s3`). Provider type.
- `secret_key` (string, Pattern: `^[A-Za-z0-9/+=]+$`, MaxLength: 4096). S3 secret key.
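For example, rendered alert images could be stored in S3; a minimal sketch where the bucket URL and keys are placeholders:

```yaml
apiVersion: aiven.io/v1alpha1
kind: Grafana
metadata:
  name: my-grafana
spec:
  authSecretRef:
    name: aiven-token
    key: token
  project: my-aiven-project
  plan: startup-1
  userConfig:
    external_image_storage:
      provider: s3
      bucket_url: https://my-bucket.s3.amazonaws.com/grafana/
      access_key: AKIAEXAMPLE      # needs s3:PutObject and s3:PutObjectAcl
      secret_key: examplesecretkey
```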
Allow incoming connections from a CIDR address block, e.g. `10.20.0.0/16`. Appears on `spec.userConfig`.

Required

- `network` (string, MaxLength: 43). CIDR address block.

Optional

- `description` (string, MaxLength: 1024). Description for the IP filter list entry.
Allow access to selected service ports from private networks. Appears on `spec.userConfig`.

Required

- `grafana` (boolean). Allow clients to connect to grafana with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.
Allow access to selected service components through Privatelink. Appears on `spec.userConfig`.

Required

- `grafana` (boolean). Enable grafana.
Allow access to selected service ports from the public Internet. Appears on `spec.userConfig`.

Required

- `grafana` (boolean). Allow clients to connect to grafana from the public internet for service nodes that are in a project VPC or another type of private network.
SMTP server settings. Appears on `spec.userConfig`.

Required

- `from_address` (string, MaxLength: 319). Address used for sending emails.
- `host` (string, MaxLength: 255). Server hostname or IP.
- `port` (integer, Minimum: 1, Maximum: 65535). SMTP server port.

Optional

- `from_name` (string, Pattern: `^[^\x00-\x1F]+$`, MaxLength: 128). Name used in outgoing emails; defaults to Grafana.
- `password` (string, Pattern: `^[^\x00-\x1F]+$`, MaxLength: 255). Password for SMTP authentication.
- `skip_verify` (boolean). Skip verifying the server certificate. Defaults to false.
- `starttls_policy` (string, Enum: `OpportunisticStartTLS`, `MandatoryStartTLS`, `NoStartTLS`). Either OpportunisticStartTLS, MandatoryStartTLS or NoStartTLS. Default is OpportunisticStartTLS.
- `username` (string, Pattern: `^[^\x00-\x1F]+$`, MaxLength: 255). Username for SMTP authentication.
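A sketch of a Grafana service sending alert emails through an external SMTP relay; host, credentials and addresses are assumed placeholders:

```yaml
apiVersion: aiven.io/v1alpha1
kind: Grafana
metadata:
  name: my-grafana
spec:
  authSecretRef:
    name: aiven-token
    key: token
  project: my-aiven-project
  plan: startup-1
  userConfig:
    smtp_server:
      host: smtp.example.com
      port: 587
      from_address: grafana@example.com
      username: smtp-user
      password: smtp-password
      starttls_policy: MandatoryStartTLS
```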
```yaml
apiVersion: aiven.io/v1alpha1
kind: Kafka
metadata:
  name: my-kafka
spec:
  authSecretRef:
    name: aiven-token
    key: token

  connInfoSecretTarget:
    name: kafka-secret
    prefix: MY_SECRET_PREFIX_
    annotations:
      foo: bar
    labels:
      baz: egg

  project: my-aiven-project
  cloudName: google-europe-west1
  plan: startup-2

  maintenanceWindowDow: friday
  maintenanceWindowTime: "23:00:00"
```

Kafka is the Schema for the kafkas API.
Required

- `apiVersion` (string). Value `aiven.io/v1alpha1`.
- `kind` (string). Value `Kafka`.
- `metadata` (object). Data that identifies the object, including a `name` string and an optional `namespace`.
- `spec` (object). KafkaSpec defines the desired state of Kafka. See below for nested schema.
KafkaSpec defines the desired state of Kafka. Appears on `Kafka`.
Required

- `plan` (string, MaxLength: 128). Subscription plan.
- `project` (string, Immutable, MaxLength: 63, Format: `^[a-zA-Z0-9_-]*$`). Target project.

Optional

- `authSecretRef` (object). Authentication reference to Aiven token in a secret. See below for nested schema.
- `cloudName` (string, MaxLength: 256). Cloud the service runs in.
- `connInfoSecretTarget` (object). Information regarding secret creation. Exposed keys: `KAFKA_HOST`, `KAFKA_PORT`, `KAFKA_USERNAME`, `KAFKA_PASSWORD`, `KAFKA_ACCESS_CERT`, `KAFKA_ACCESS_KEY`, `KAFKA_SASL_HOST`, `KAFKA_SASL_PORT`, `KAFKA_SCHEMA_REGISTRY_HOST`, `KAFKA_SCHEMA_REGISTRY_PORT`, `KAFKA_CONNECT_HOST`, `KAFKA_CONNECT_PORT`, `KAFKA_REST_HOST`, `KAFKA_REST_PORT`. See below for nested schema.
- `disk_space` (string, Format: `^[1-9][0-9]*(GiB|G)*`). The disk space of the service; possible values depend on the service type, the cloud provider and the project. Reducing it will result in the service re-balancing.
- `karapace` (boolean). Switch the service to use Karapace for schema registry and REST proxy.
- `maintenanceWindowDow` (string, Enum: `monday`, `tuesday`, `wednesday`, `thursday`, `friday`, `saturday`, `sunday`). Day of week when maintenance operations should be performed. One of monday, tuesday, wednesday, etc.
- `maintenanceWindowTime` (string, MaxLength: 8). Time of day when maintenance operations should be performed. UTC time in HH:mm:ss format.
- `projectVPCRef` (object). Reference to a ProjectVPC resource whose ID is used as `projectVpcId` automatically. See below for nested schema.
- `projectVpcId` (string, MaxLength: 36). Identifier of the VPC the service should be in, if any.
- `serviceIntegrations` (array of objects, Immutable, MaxItems: 1). Service integrations to specify when creating a service. Not applied after initial service creation. See below for nested schema.
- `tags` (object, AdditionalProperties: string). Tags are key-value pairs that allow you to categorize services.
- `terminationProtection` (boolean). Prevent the service from being deleted. It is recommended to keep this enabled for all services.
- `userConfig` (object). Kafka specific user configuration options. See below for nested schema.
Authentication reference to Aiven token in a secret. Appears on `spec`.

Required

- `key` (string, MinLength: 1).
- `name` (string, MinLength: 1).
Information regarding secret creation. Exposed keys: `KAFKA_HOST`, `KAFKA_PORT`, `KAFKA_USERNAME`, `KAFKA_PASSWORD`, `KAFKA_ACCESS_CERT`, `KAFKA_ACCESS_KEY`, `KAFKA_SASL_HOST`, `KAFKA_SASL_PORT`, `KAFKA_SCHEMA_REGISTRY_HOST`, `KAFKA_SCHEMA_REGISTRY_PORT`, `KAFKA_CONNECT_HOST`, `KAFKA_CONNECT_PORT`, `KAFKA_REST_HOST`, `KAFKA_REST_PORT`. Appears on `spec`.

Required

- `name` (string). Name of the secret resource to be created. By default, it is equal to the resource name.

Optional

- `annotations` (object, AdditionalProperties: string). Annotations added to the secret.
- `labels` (object, AdditionalProperties: string). Labels added to the secret.
- `prefix` (string). Prefix for the secret's keys. Added "as is" without any transformations. By default, it is equal to the kind name in uppercase + underscore, e.g. `KAFKA_`, `REDIS_`, etc.
Reference to a ProjectVPC resource whose ID is used as `projectVpcId` automatically. Appears on `spec`.

Required

- `name` (string, MinLength: 1).

Optional

- `namespace` (string, MinLength: 1).
Service integrations to specify when creating a service. Not applied after initial service creation. Appears on `spec`.

Required

- `integrationType` (string, Enum: `read_replica`).
- `sourceServiceName` (string, MinLength: 1, MaxLength: 64).
Kafka specific user configuration options. Appears on `spec`.

Optional

- `additional_backup_regions` (array of strings, MaxItems: 1). Additional Cloud Regions for Backup Replication.
- `aiven_kafka_topic_messages` (boolean). Allow access to read Kafka topic messages in the Aiven Console and REST API.
- `custom_domain` (string, MaxLength: 255). Serve the web frontend using a custom CNAME pointing to the Aiven DNS name.
- `ip_filter` (array of objects, MaxItems: 1024). Allow incoming connections from a CIDR address block, e.g. `10.20.0.0/16`. See below for nested schema.
- `kafka` (object). Kafka broker configuration values. See below for nested schema.
- `kafka_authentication_methods` (object). Kafka authentication methods. See below for nested schema.
- `kafka_connect` (boolean). Enable the Kafka Connect service.
- `kafka_connect_config` (object). Kafka Connect configuration values. See below for nested schema.
- `kafka_rest` (boolean). Enable the Kafka-REST service.
- `kafka_rest_authorization` (boolean). Enable authorization in the Kafka-REST service.
- `kafka_rest_config` (object). Kafka REST configuration. See below for nested schema.
- `kafka_version` (string, Enum: `3.3`, `3.1`, `3.4`, `3.5`, `3.6`). Kafka major version.
- `private_access` (object). Allow access to selected service ports from private networks. See below for nested schema.
- `privatelink_access` (object). Allow access to selected service components through Privatelink. See below for nested schema.
- `public_access` (object). Allow access to selected service ports from the public Internet. See below for nested schema.
- `schema_registry` (boolean). Enable the Schema-Registry service.
- `schema_registry_config` (object). Schema Registry configuration. See below for nested schema.
- `static_ips` (boolean). Use static public IP addresses.
- `tiered_storage` (object). Tiered storage configuration. See below for nested schema.
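For illustration, the Kafka example above could be extended with user configuration that pins the Kafka version and enables the bundled REST proxy and Schema Registry; values here are illustrative, not recommendations:

```yaml
apiVersion: aiven.io/v1alpha1
kind: Kafka
metadata:
  name: my-kafka
spec:
  authSecretRef:
    name: aiven-token
    key: token
  project: my-aiven-project
  cloudName: google-europe-west1
  plan: business-4
  userConfig:
    kafka_version: "3.6"
    kafka_rest: true
    schema_registry: true
    kafka_authentication_methods:
      certificate: true
      sasl: true
```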
Allow incoming connections from a CIDR address block, e.g. `10.20.0.0/16`. Appears on `spec.userConfig`.

Required

- `network` (string, MaxLength: 43). CIDR address block.

Optional

- `description` (string, MaxLength: 1024). Description for the IP filter list entry.
Kafka broker configuration values. Appears on `spec.userConfig`.

Optional

- `auto_create_topics_enable` (boolean). Enable auto-creation of topics.
- `compression_type` (string, Enum: `gzip`, `snappy`, `lz4`, `zstd`, `uncompressed`, `producer`). Specify the final compression type for a given topic. This configuration accepts the standard compression codecs (`gzip`, `snappy`, `lz4`, `zstd`). It additionally accepts `uncompressed`, which is equivalent to no compression, and `producer`, which means retain the original compression codec set by the producer.
- `connections_max_idle_ms` (integer, Minimum: 1000, Maximum: 3600000). Idle connections timeout: the server socket processor threads close connections that have been idle for longer than this.
- `default_replication_factor` (integer, Minimum: 1, Maximum: 10). Replication factor for auto-created topics.
- `group_initial_rebalance_delay_ms` (integer, Minimum: 0, Maximum: 300000). The amount of time, in milliseconds, the group coordinator will wait for more consumers to join a new group before performing the first rebalance. A longer delay means potentially fewer rebalances, but increases the time until processing begins. The default value is 3 seconds. During development and testing it might be desirable to set this to 0 in order not to delay test execution.
- `group_max_session_timeout_ms` (integer, Minimum: 0, Maximum: 1800000). The maximum allowed session timeout for registered consumers. Longer timeouts give consumers more time to process messages between heartbeats at the cost of a longer time to detect failures.
- `group_min_session_timeout_ms` (integer, Minimum: 0, Maximum: 60000). The minimum allowed session timeout for registered consumers. Longer timeouts give consumers more time to process messages between heartbeats at the cost of a longer time to detect failures.
- `log_cleaner_delete_retention_ms` (integer, Minimum: 0, Maximum: 315569260000). How long delete records are retained.
- `log_cleaner_max_compaction_lag_ms` (integer, Minimum: 30000). The maximum amount of time a message will remain uncompacted. Only applicable for logs that are being compacted.
- `log_cleaner_min_cleanable_ratio` (number, Minimum: 0.2, Maximum: 0.9). Controls log compactor frequency. A larger value means more frequent compactions but also more space wasted for logs. Consider setting log.cleaner.max.compaction.lag.ms to enforce compactions sooner, instead of setting a very high value for this option.
- `log_cleaner_min_compaction_lag_ms` (integer, Minimum: 0). The minimum time a message will remain uncompacted in the log. Only applicable for logs that are being compacted.
- `log_cleanup_policy` (string, Enum: `delete`, `compact`, `compact,delete`). The default cleanup policy for segments beyond the retention window.
- `log_flush_interval_messages` (integer, Minimum: 1). The number of messages accumulated on a log partition before messages are flushed to disk.
- `log_flush_interval_ms` (integer, Minimum: 0). The maximum time in ms that a message in any topic is kept in memory before being flushed to disk. If not set, the value in log.flush.scheduler.interval.ms is used.
- `log_index_interval_bytes` (integer, Minimum: 0, Maximum: 104857600). The interval with which Kafka adds an entry to the offset index.
- `log_index_size_max_bytes` (integer, Minimum: 1048576, Maximum: 104857600). The maximum size in bytes of the offset index.
- `log_local_retention_bytes` (integer, Minimum: -2). The maximum size local log segments can grow to for a partition before they become eligible for deletion. If set to -2, the value of log.retention.bytes is used. The effective value should always be less than or equal to the log.retention.bytes value.
- `log_local_retention_ms` (integer, Minimum: -2). The number of milliseconds to keep local log segments before they become eligible for deletion. If set to -2, the value of log.retention.ms is used. The effective value should always be less than or equal to the log.retention.ms value.
- `log_message_downconversion_enable` (boolean). This configuration controls whether down-conversion of message formats is enabled to satisfy consume requests.
- `log_message_timestamp_difference_max_ms` (integer, Minimum: 0). The maximum difference allowed between the timestamp when a broker receives a message and the timestamp specified in the message.
- `log_message_timestamp_type` (string, Enum: `CreateTime`, `LogAppendTime`). Define whether the timestamp in the message is message create time or log append time.
- `log_preallocate` (boolean). Whether to preallocate a file when creating a new segment.
- `log_retention_bytes` (integer, Minimum: -1). The maximum size of the log before deleting messages.
- `log_retention_hours` (integer, Minimum: -1, Maximum: 2147483647). The number of hours to keep a log file before deleting it.
- `log_retention_ms` (integer, Minimum: -1). The number of milliseconds to keep a log file before deleting it. If not set, the value in log.retention.minutes is used. If set to -1, no time limit is applied.
- `log_roll_jitter_ms` (integer, Minimum: 0). The maximum jitter to subtract from logRollTimeMillis (in milliseconds). If not set, the value in log.roll.jitter.hours is used.
- `log_roll_ms` (integer, Minimum: 1). The maximum time before a new log segment is rolled out (in milliseconds).
- `log_segment_bytes` (integer, Minimum: 10485760, Maximum: 1073741824). The maximum size of a single log file.
- `log_segment_delete_delay_ms` (integer, Minimum: 0, Maximum: 3600000). The amount of time to wait before deleting a file from the filesystem.
- `max_connections_per_ip` (integer, Minimum: 256, Maximum: 2147483647). The maximum number of connections allowed from each IP address (defaults to 2147483647).
- `max_incremental_fetch_session_cache_slots` (integer, Minimum: 1000, Maximum: 10000). The maximum number of incremental fetch sessions that the broker will maintain.
- `message_max_bytes` (integer, Minimum: 0, Maximum: 100001200). The maximum size of a message that the server can receive.
- `min_insync_replicas` (integer, Minimum: 1, Maximum: 7). When a producer sets acks to `all` (or `-1`), min.insync.replicas specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful.
- `num_partitions` (integer, Minimum: 1, Maximum: 1000). Number of partitions for auto-created topics.
- `offsets_retention_minutes` (integer, Minimum: 1, Maximum: 2147483647). Log retention window in minutes for the offsets topic.
- `producer_purgatory_purge_interval_requests` (integer, Minimum: 10, Maximum: 10000). The purge interval (in number of requests) of the producer request purgatory (defaults to 1000).
- `replica_fetch_max_bytes` (integer, Minimum: 1048576, Maximum: 104857600). The number of bytes of messages to attempt to fetch for each partition (defaults to 1048576). This is not an absolute maximum: if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that progress can be made.
- `replica_fetch_response_max_bytes` (integer, Minimum: 10485760, Maximum: 1048576000). Maximum bytes expected for the entire fetch response (defaults to 10485760). Records are fetched in batches, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that progress can be made. As such, this is not an absolute maximum.
- `sasl_oauthbearer_expected_audience` (string, MaxLength: 128). The (optional) comma-delimited setting for the broker to use to verify that the JWT was issued for one of the expected audiences.
- `sasl_oauthbearer_expected_issuer` (string, MaxLength: 128). Optional setting for the broker to use to verify that the JWT was created by the expected issuer.
- `sasl_oauthbearer_jwks_endpoint_url` (string, MaxLength: 2048). OIDC JWKS endpoint URL. Setting this enables SASL SSL OAuth2/OIDC authentication. See also the other options for SASL OAuth2/OIDC.
- `sasl_oauthbearer_sub_claim_name` (string, MaxLength: 128). Name of the scope from which to extract the subject claim from the JWT. Defaults to sub.
- `socket_request_max_bytes` (integer, Minimum: 10485760, Maximum: 209715200). The maximum number of bytes in a socket request (defaults to 104857600).
- `transaction_partition_verification_enable` (boolean). Enable verification that checks that the partition has been added to the transaction before writing transactional records to the partition.
- `transaction_remove_expired_transaction_cleanup_interval_ms` (integer, Minimum: 600000, Maximum: 3600000). The interval at which to remove transactions that have expired due to transactional.id.expiration.ms passing (defaults to 3600000, i.e. 1 hour).
- `transaction_state_log_segment_bytes` (integer, Minimum: 1048576, Maximum: 2147483647). The transaction topic segment bytes should be kept relatively small in order to facilitate faster log compaction and cache loads (defaults to 104857600, i.e. 100 mebibytes).
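As a sketch, a few of the broker settings above set under `spec.userConfig.kafka`; the values are illustrative only:

```yaml
apiVersion: aiven.io/v1alpha1
kind: Kafka
metadata:
  name: my-kafka
spec:
  project: my-aiven-project
  plan: business-4
  userConfig:
    kafka:
      auto_create_topics_enable: false
      default_replication_factor: 3
      min_insync_replicas: 2
      log_retention_hours: 168   # keep logs for one week
```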
Kafka authentication methods. Appears on `spec.userConfig`.

Optional

- `certificate` (boolean). Enable certificate/SSL authentication.
- `sasl` (boolean). Enable SASL authentication.
Kafka Connect configuration values. Appears on `spec.userConfig`.

Optional

- `connector_client_config_override_policy` (string, Enum: `None`, `All`). Defines what client configurations can be overridden by the connector. Default is None.
- `consumer_auto_offset_reset` (string, Enum: `earliest`, `latest`). What to do when there is no initial offset in Kafka or if the current offset no longer exists on the server. Default is earliest.
- `consumer_fetch_max_bytes` (integer, Minimum: 1048576, Maximum: 104857600). Records are fetched in batches by the consumer, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that the consumer can make progress. As such, this is not an absolute maximum.
- `consumer_isolation_level` (string, Enum: `read_uncommitted`, `read_committed`). Transaction read isolation level. read_uncommitted is the default, but read_committed can be used if consume-exactly-once behavior is desired.
- `consumer_max_partition_fetch_bytes` (integer, Minimum: 1048576, Maximum: 104857600). Records are fetched in batches by the consumer. If the first record batch in the first non-empty partition of the fetch is larger than this limit, the batch will still be returned to ensure that the consumer can make progress.
- `consumer_max_poll_interval_ms` (integer, Minimum: 1, Maximum: 2147483647). The maximum delay in milliseconds between invocations of poll() when using consumer group management (defaults to 300000).
- `consumer_max_poll_records` (integer, Minimum: 1, Maximum: 10000). The maximum number of records returned in a single call to poll() (defaults to 500).
- `offset_flush_interval_ms` (integer, Minimum: 1, Maximum: 100000000). The interval at which to try committing offsets for tasks (defaults to 60000).
- `offset_flush_timeout_ms` (integer, Minimum: 1, Maximum: 2147483647). Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data to be committed in a future attempt (defaults to 5000).
- `producer_batch_size` (integer, Minimum: 0, Maximum: 5242880). This setting gives the upper bound of the batch size to be sent. If there are fewer than this many bytes accumulated for this partition, the producer will "linger" for the linger.ms time waiting for more records to show up. A batch size of zero disables batching entirely (defaults to 16384).
- `producer_buffer_memory` (integer, Minimum: 5242880, Maximum: 134217728). The total bytes of memory the producer can use to buffer records waiting to be sent to the broker (defaults to 33554432).
- `producer_compression_type` (string, Enum: `gzip`, `snappy`, `lz4`, `zstd`, `none`). Specify the default compression type for producers. This configuration accepts the standard compression codecs (`gzip`, `snappy`, `lz4`, `zstd`). It additionally accepts `none`, which is the default and equivalent to no compression.
- `producer_linger_ms` (integer, Minimum: 0, Maximum: 5000). This setting gives the upper bound on the delay for batching: once there is batch.size worth of records for a partition it will be sent immediately regardless of this setting; however, if there are fewer than this many bytes accumulated for this partition, the producer will "linger" for the specified time waiting for more records to show up. Defaults to 0.
- `producer_max_request_size` (integer, Minimum: 131072, Maximum: 67108864). This setting limits the number of record batches the producer will send in a single request to avoid sending huge requests.
- `scheduled_rebalance_max_delay_ms` (integer, Minimum: 0, Maximum: 600000). The maximum delay that is scheduled in order to wait for the return of one or more departed workers before rebalancing and reassigning their connectors and tasks to the group. During this period the connectors and tasks of the departed workers remain unassigned. Defaults to 5 minutes.
- `session_timeout_ms` (integer, Minimum: 1, Maximum: 2147483647). The timeout in milliseconds used to detect failures when using Kafka's group management facilities (defaults to 10000).
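Note that on a Kafka service the boolean `kafka_connect` enables the bundled Connect service while `kafka_connect_config` tunes it; a minimal sketch with illustrative values:

```yaml
apiVersion: aiven.io/v1alpha1
kind: Kafka
metadata:
  name: my-kafka
spec:
  project: my-aiven-project
  plan: business-4
  userConfig:
    kafka_connect: true
    kafka_connect_config:
      consumer_isolation_level: read_committed
      consumer_max_poll_records: 500
```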
Kafka REST configuration. Appears on `spec.userConfig`.

Optional

- `consumer_enable_auto_commit` (boolean). If true, the consumer's offset will be periodically committed to Kafka in the background.
- `consumer_request_max_bytes` (integer, Minimum: 0, Maximum: 671088640). Maximum number of bytes in unencoded message keys and values for a single request.
- `consumer_request_timeout_ms` (integer, Enum: `1000`, `15000`, `30000`, Minimum: 1000, Maximum: 30000). The maximum total time to wait for messages for a request if the maximum number of messages has not yet been reached.
- `producer_acks` (string, Enum: `all`, `-1`, `0`, `1`). The number of acknowledgments the producer requires the leader to have received before considering a request complete. If set to `all` or `-1`, the leader will wait for the full set of in-sync replicas to acknowledge the record.
- `producer_compression_type` (string, Enum: `gzip`, `snappy`, `lz4`, `zstd`, `none`). Specify the default compression type for producers. This configuration accepts the standard compression codecs (`gzip`, `snappy`, `lz4`, `zstd`). It additionally accepts `none`, which is the default and equivalent to no compression.
- `producer_linger_ms` (integer, Minimum: 0, Maximum: 5000). Wait for up to the given delay to allow batching records together.
- `producer_max_request_size` (integer, Minimum: 0, Maximum: 2147483647). The maximum size of a request in bytes. Note that the Kafka broker can also cap the record batch size.
- `simpleconsumer_pool_size_max` (integer, Minimum: 10, Maximum: 250). Maximum number of SimpleConsumers that can be instantiated per broker.
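Similarly, a sketch enabling Kafka REST with stricter producer acknowledgements; values are illustrative:

```yaml
apiVersion: aiven.io/v1alpha1
kind: Kafka
metadata:
  name: my-kafka
spec:
  project: my-aiven-project
  plan: business-4
  userConfig:
    kafka_rest: true
    kafka_rest_config:
      producer_acks: "all"   # wait for the full in-sync replica set
      consumer_enable_auto_commit: false
```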
Allow access to selected service ports from private networks. Appears on `spec.userConfig`.

Optional

- `kafka` (boolean). Allow clients to connect to kafka with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.
- `kafka_connect` (boolean). Allow clients to connect to kafka_connect with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.
- `kafka_rest` (boolean). Allow clients to connect to kafka_rest with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.
- `prometheus` (boolean). Allow clients to connect to prometheus with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.
- `schema_registry` (boolean). Allow clients to connect to schema_registry with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.
Allow access to selected service components through Privatelink. Appears on `spec.userConfig`.

Optional

- `jolokia` (boolean). Enable jolokia.
- `kafka` (boolean). Enable kafka.
- `kafka_connect` (boolean). Enable kafka_connect.
- `kafka_rest` (boolean). Enable kafka_rest.
- `prometheus` (boolean). Enable prometheus.
- `schema_registry` (boolean). Enable schema_registry.
Allow access to selected service ports from the public Internet. Appears on `spec.userConfig`.

Optional

- `kafka` (boolean). Allow clients to connect to kafka from the public internet for service nodes that are in a project VPC or another type of private network.
- `kafka_connect` (boolean). Allow clients to connect to kafka_connect from the public internet for service nodes that are in a project VPC or another type of private network.
- `kafka_rest` (boolean). Allow clients to connect to kafka_rest from the public internet for service nodes that are in a project VPC or another type of private network.
- `prometheus` (boolean). Allow clients to connect to prometheus from the public internet for service nodes that are in a project VPC or another type of private network.
- `schema_registry` (boolean). Allow clients to connect to schema_registry from the public internet for service nodes that are in a project VPC or another type of private network.
Schema Registry configuration. Appears on `spec.userConfig`.

Optional

- `leader_eligibility` (boolean). If true, Karapace / Schema Registry on the service nodes can participate in leader election. It might be needed to disable this when the schemas topic is replicated to a secondary cluster and Karapace / Schema Registry there must not participate in leader election. Defaults to `true`.
- `topic_name` (string, MinLength: 1, MaxLength: 249). The durable single-partition topic that acts as the durable log for the data. This topic must be compacted to avoid losing data due to the retention policy. Please note that changing this configuration in an existing Schema Registry / Karapace setup leads to previous schemas becoming inaccessible, data encoded with them potentially unreadable, and the schema ID sequence put out of order. It is only possible to do the switch while Schema Registry / Karapace is disabled. Defaults to `_schemas`.
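A sketch enabling Schema Registry with the defaults spelled out explicitly; illustrative only:

```yaml
apiVersion: aiven.io/v1alpha1
kind: Kafka
metadata:
  name: my-kafka
spec:
  project: my-aiven-project
  plan: business-4
  userConfig:
    schema_registry: true
    schema_registry_config:
      leader_eligibility: true
      topic_name: _schemas
```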
Tiered storage configuration. Appears on `spec.userConfig`.

Optional

- `enabled` (boolean). Whether to enable the tiered storage functionality.
- `local_cache` (object). Deprecated. Local cache configuration. See below for nested schema.
Deprecated. Local cache configuration. Appears on `spec.userConfig.tiered_storage`.

Required

- `size` (integer, Minimum: 1, Maximum: 107374182400). Deprecated. Local cache size in bytes.
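Tiered storage is switched on with a single flag; a minimal sketch (the deprecated `local_cache` block is omitted):

```yaml
apiVersion: aiven.io/v1alpha1
kind: Kafka
metadata:
  name: my-kafka
spec:
  project: my-aiven-project
  plan: business-4
  userConfig:
    tiered_storage:
      enabled: true
```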
```yaml
apiVersion: aiven.io/v1alpha1
kind: KafkaACL
metadata:
  name: my-kafka-acl
spec:
  authSecretRef:
    name: aiven-token
    key: token

  project: my-aiven-project
  serviceName: my-kafka
  topic: my-topic
  username: my-user
  permission: admin
```

KafkaACL is the Schema for the kafkaacls API.
Required

- `apiVersion` (string). Value `aiven.io/v1alpha1`.
- `kind` (string). Value `KafkaACL`.
- `metadata` (object). Data that identifies the object, including a `name` string and an optional `namespace`.
- `spec` (object). KafkaACLSpec defines the desired state of KafkaACL. See below for nested schema.
KafkaACLSpec defines the desired state of KafkaACL. Appears on `KafkaACL`.

Required

- `permission` (string, Enum: `admin`, `read`, `readwrite`, `write`). Kafka permission to grant (admin, read, readwrite, write).
- `project` (string, MaxLength: 63, Format: `^[a-zA-Z0-9_-]*$`). Project to link the Kafka ACL to.
- `serviceName` (string, MaxLength: 63). Service to link the Kafka ACL to.
- `topic` (string). Topic name pattern for the ACL entry.
- `username` (string). Username pattern for the ACL entry.

Optional

- `authSecretRef` (object). Authentication reference to Aiven token in a secret. See below for nested schema.
Authentication reference to Aiven token in a secret. Appears on `spec`.

Required

- `key` (string, MinLength: 1).
- `name` (string, MinLength: 1).

```yaml
apiVersion: aiven.io/v1alpha1
kind: KafkaConnect
metadata:
  name: my-kafka-connect
spec:
  authSecretRef:
    name: aiven-token
    key: token

  project: my-aiven-project
  cloudName: google-europe-west1
  plan: business-4

  userConfig:
    kafka_connect:
      consumer_isolation_level: read_committed
    public_access:
      kafka_connect: true
```

KafkaConnect is the Schema for the kafkaconnects API.
Required

- `apiVersion` (string). Value `aiven.io/v1alpha1`.
- `kind` (string). Value `KafkaConnect`.
- `metadata` (object). Data that identifies the object, including a `name` string and an optional `namespace`.
- `spec` (object). KafkaConnectSpec defines the desired state of KafkaConnect. See below for nested schema.
KafkaConnectSpec defines the desired state of KafkaConnect. Appears on `KafkaConnect`.
Required

- `plan` (string, MaxLength: 128). Subscription plan.
- `project` (string, Immutable, MaxLength: 63, Format: `^[a-zA-Z0-9_-]*$`). Target project.

Optional

- `authSecretRef` (object). Authentication reference to Aiven token in a secret. See below for nested schema.
- `cloudName` (string, MaxLength: 256). Cloud the service runs in.
- `maintenanceWindowDow` (string, Enum: `monday`, `tuesday`, `wednesday`, `thursday`, `friday`, `saturday`, `sunday`). Day of week when maintenance operations should be performed. One of monday, tuesday, wednesday, etc.
- `maintenanceWindowTime` (string, MaxLength: 8). Time of day when maintenance operations should be performed. UTC time in HH:mm:ss format.
- `projectVPCRef` (object). Reference to a ProjectVPC resource whose ID is used as `projectVpcId` automatically. See below for nested schema.
- `projectVpcId` (string, MaxLength: 36). Identifier of the VPC the service should be in, if any.
- `serviceIntegrations` (array of objects, Immutable, MaxItems: 1). Service integrations to specify when creating a service. Not applied after initial service creation. See below for nested schema.
- `tags` (object, AdditionalProperties: string). Tags are key-value pairs that allow you to categorize services.
- `terminationProtection` (boolean). Prevent the service from being deleted. It is recommended to keep this enabled for all services.
- `userConfig` (object). KafkaConnect specific user configuration options. See below for nested schema.
Authentication reference to Aiven token in a secret. Appears on `spec`.

Required

- `key` (string, MinLength: 1).
- `name` (string, MinLength: 1).
Reference to a ProjectVPC resource whose ID is used as `projectVpcId` automatically. Appears on `spec`.

Required

- `name` (string, MinLength: 1).

Optional

- `namespace` (string, MinLength: 1).
Service integrations to specify when creating a service. Not applied after initial service creation. Appears on `spec`.

Required

- `integrationType` (string, Enum: `read_replica`).
- `sourceServiceName` (string, MinLength: 1, MaxLength: 64).
KafkaConnect specific user configuration options. Appears on `spec`.

Optional

- `additional_backup_regions` (array of strings, MaxItems: 1). Additional Cloud Regions for Backup Replication.
- `ip_filter` (array of objects, MaxItems: 1024). Allow incoming connections from a CIDR address block, e.g. `10.20.0.0/16`. See below for nested schema.
- `kafka_connect` (object). Kafka Connect configuration values. See below for nested schema.
- `private_access` (object). Allow access to selected service ports from private networks. See below for nested schema.
- `privatelink_access` (object). Allow access to selected service components through Privatelink. See below for nested schema.
- `public_access` (object). Allow access to selected service ports from the public Internet. See below for nested schema.
- `static_ips` (boolean). Use static public IP addresses.
Allow incoming connections from a CIDR address block, e.g. `10.20.0.0/16`. Appears on `spec.userConfig`.

Required

- `network` (string, MaxLength: 43). CIDR address block.

Optional

- `description` (string, MaxLength: 1024). Description for the IP filter list entry.
Kafka Connect configuration values. Appears on `spec.userConfig`.

Optional

- `connector_client_config_override_policy` (string, Enum: `None`, `All`). Defines what client configurations can be overridden by the connector. Default is None.
- `consumer_auto_offset_reset` (string, Enum: `earliest`, `latest`). What to do when there is no initial offset in Kafka or if the current offset no longer exists on the server. Default is earliest.
- `consumer_fetch_max_bytes` (integer, Minimum: 1048576, Maximum: 104857600). Records are fetched in batches by the consumer, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that the consumer can make progress. As such, this is not an absolute maximum.
- `consumer_isolation_level` (string, Enum: `read_uncommitted`, `read_committed`). Transaction read isolation level. read_uncommitted is the default, but read_committed can be used if consume-exactly-once behavior is desired.
- `consumer_max_partition_fetch_bytes` (integer, Minimum: 1048576, Maximum: 104857600). Records are fetched in batches by the consumer. If the first record batch in the first non-empty partition of the fetch is larger than this limit, the batch will still be returned to ensure that the consumer can make progress.
- `consumer_max_poll_interval_ms` (integer, Minimum: 1, Maximum: 2147483647). The maximum delay in milliseconds between invocations of poll() when using consumer group management (defaults to 300000).
- `consumer_max_poll_records` (integer, Minimum: 1, Maximum: 10000). The maximum number of records returned in a single call to poll() (defaults to 500).
- `offset_flush_interval_ms` (integer, Minimum: 1, Maximum: 100000000). The interval at which to try committing offsets for tasks (defaults to 60000).
- `offset_flush_timeout_ms` (integer, Minimum: 1, Maximum: 2147483647). Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data to be committed in a future attempt (defaults to 5000).
- `producer_batch_size` (integer, Minimum: 0, Maximum: 5242880). This setting gives the upper bound of the batch size to be sent. If there are fewer than this many bytes accumulated for this partition, the producer will "linger" for the linger.ms time waiting for more records to show up. A batch size of zero disables batching entirely (defaults to 16384).
- `producer_buffer_memory` (integer, Minimum: 5242880, Maximum: 134217728). The total bytes of memory the producer can use to buffer records waiting to be sent to the broker (defaults to 33554432).
- `producer_compression_type` (string, Enum: `gzip`, `snappy`, `lz4`, `zstd`, `none`). Specify the default compression type for producers. This configuration accepts the standard compression codecs (`gzip`, `snappy`, `lz4`, `zstd`). It additionally accepts `none`, which is the default and equivalent to no compression.
- `producer_linger_ms` (integer, Minimum: 0, Maximum: 5000). This setting gives the upper bound on the delay for batching: once there is batch.size worth of records for a partition it will be sent immediately regardless of this setting; however, if there are fewer than this many bytes accumulated for this partition, the producer will "linger" for the specified time waiting for more records to show up. Defaults to 0.
- `producer_max_request_size` (integer, Minimum: 131072, Maximum: 67108864). This setting limits the number of record batches the producer will send in a single request to avoid sending huge requests.
- `scheduled_rebalance_max_delay_ms` (integer, Minimum: 0, Maximum: 600000). The maximum delay that is scheduled in order to wait for the return of one or more departed workers before rebalancing and reassigning their connectors and tasks to the group. During this period the connectors and tasks of the departed workers remain unassigned. Defaults to 5 minutes.
- `session_timeout_ms` (integer, Minimum: 1, Maximum: 2147483647). The timeout in milliseconds used to detect failures when using Kafka's group management facilities (defaults to 10000).
Allow access to selected service ports from private networks. Appears on `spec.userConfig`.

Optional

- `kafka_connect` (boolean). Allow clients to connect to kafka_connect with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.
- `prometheus` (boolean). Allow clients to connect to prometheus with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.
Allow access to selected service components through Privatelink. Appears on `spec.userConfig`.

Optional

- `jolokia` (boolean). Enable jolokia.
- `kafka_connect` (boolean). Enable kafka_connect.
- `prometheus` (boolean). Enable prometheus.
Allow access to selected service ports from the public Internet. Appears on `spec.userConfig`.

Optional

- `kafka_connect` (boolean). Allow clients to connect to kafka_connect from the public internet for service nodes that are in a project VPC or another type of private network.
- `prometheus` (boolean). Allow clients to connect to prometheus from the public internet for service nodes that are in a project VPC or another type of private network.

KafkaConnector is the Schema for the kafkaconnectors API.
Required

- `apiVersion` (string). Value `aiven.io/v1alpha1`.
- `kind` (string). Value `KafkaConnector`.
- `metadata` (object). Data that identifies the object, including a `name` string and an optional `namespace`.
- `spec` (object). KafkaConnectorSpec defines the desired state of KafkaConnector. See below for nested schema.
KafkaConnectorSpec defines the desired state of KafkaConnector. Appears on `KafkaConnector`.
Required

- `connectorClass` (string, MaxLength: 1024). The Java class of the connector.
- `project` (string, MaxLength: 63, Format: `^[a-zA-Z0-9_-]*$`). Target project.
- `serviceName` (string, MaxLength: 63). Service name.
- `userConfig` (object, AdditionalProperties: string). The connector-specific configuration. To build config values from a secret, the template function `{{ fromSecret "name" "key" }}` is provided when interpreting the keys.

Optional

- `authSecretRef` (object). Authentication reference to Aiven token in a secret. See below for nested schema.
Authentication reference to Aiven token in a secret. Appears on `spec`.

Required

- `key` (string, MinLength: 1).
- `name` (string, MinLength: 1).
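No example manifest is generated for this resource; as a sketch, a sink connector might be declared as follows. The connector class and the `userConfig` keys are connector-specific placeholders, and `{{ fromSecret "name" "key" }}` pulls a value from a Kubernetes secret as described above:

```yaml
apiVersion: aiven.io/v1alpha1
kind: KafkaConnector
metadata:
  name: my-kafka-connector
spec:
  authSecretRef:
    name: aiven-token
    key: token
  project: my-aiven-project
  serviceName: my-kafka
  # Hypothetical connector class; use one available on your service.
  connectorClass: io.aiven.connect.jdbc.JdbcSinkConnector
  userConfig:
    topics: my-topic
    # Resolved from a Kubernetes secret when the keys are interpreted.
    connection.password: '{{ fromSecret "jdbc-secret" "password" }}'
```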
```yaml
apiVersion: aiven.io/v1alpha1
kind: KafkaSchema
metadata:
  name: my-schema
spec:
  authSecretRef:
    name: aiven-token
    key: token

  project: my-aiven-project
  serviceName: my-kafka
  subjectName: my-subject
  compatibilityLevel: BACKWARD
  schema: |
    {
      "doc": "example_doc",
      "fields": [{
        "default": 5,
        "doc": "field_doc",
        "name": "field_name",
        "namespace": "field_namespace",
        "type": "int"
      }],
      "name": "example_name",
      "namespace": "example_namespace",
      "type": "record"
    }
```

KafkaSchema is the Schema for the kafkaschemas API.
Required

- `apiVersion` (string). Value `aiven.io/v1alpha1`.
- `kind` (string). Value `KafkaSchema`.
- `metadata` (object). Data that identifies the object, including a `name` string and an optional `namespace`.
- `spec` (object). KafkaSchemaSpec defines the desired state of KafkaSchema. See below for nested schema.
KafkaSchemaSpec defines the desired state of KafkaSchema. Appears on `KafkaSchema`.
Required

- `project` (string, MaxLength: 63, Format: `^[a-zA-Z0-9_-]*$`). Project to link the Kafka Schema to.
- `schema` (string). Kafka Schema configuration; must be valid Avro schema JSON.
- `serviceName` (string, MaxLength: 63). Service to link the Kafka Schema to.
- `subjectName` (string, MaxLength: 63). Kafka Schema subject name.

Optional

- `authSecretRef` (object). Authentication reference to Aiven token in a secret. See below for nested schema.
- `compatibilityLevel` (string, Enum: `BACKWARD`, `BACKWARD_TRANSITIVE`, `FORWARD`, `FORWARD_TRANSITIVE`, `FULL`, `FULL_TRANSITIVE`, `NONE`). Kafka Schemas compatibility level.
Authentication reference to Aiven token in a secret. Appears on `spec`.

Required

- `key` (string, MinLength: 1).
- `name` (string, MinLength: 1).

```yaml
apiVersion: aiven.io/v1alpha1
kind: KafkaTopic
metadata:
  name: kafka-topic
spec:
  authSecretRef:
    name: aiven-token
    key: token

  project: my-aiven-project
  serviceName: my-kafka
  topicName: my-kafka-topic

  replication: 2
  partitions: 1

  config:
    min_cleanable_dirty_ratio: 0.2
```

KafkaTopic is the Schema for the kafkatopics API.
Required

- `apiVersion` (string). Value `aiven.io/v1alpha1`.
- `kind` (string). Value `KafkaTopic`.
- `metadata` (object). Data that identifies the object, including a `name` string and an optional `namespace`.
- `spec` (object). KafkaTopicSpec defines the desired state of KafkaTopic. See below for nested schema.
KafkaTopicSpec defines the desired state of KafkaTopic. Appears on `KafkaTopic`.
Required

- `partitions` (integer, Minimum: 1, Maximum: 1000000). Number of partitions to create in the topic.
- `project` (string, MaxLength: 63, Format: `^[a-zA-Z0-9_-]*$`). Target project.
- `replication` (integer, Minimum: 2). Replication factor for the topic.
- `serviceName` (string, MaxLength: 63). Service name.

Optional

- `authSecretRef` (object). Authentication reference to Aiven token in a secret. See below for nested schema.
- `config` (object). Kafka topic configuration. See below for nested schema.
- `tags` (array of objects). Kafka topic tags. See below for nested schema.
- `termination_protection` (boolean). Kubernetes-side deletion protection that prevents the Kafka topic from being deleted by Kubernetes. It is recommended to enable this for any production topics containing critical data.
- `topicName` (string, Immutable, MinLength: 1, MaxLength: 249). Topic name. If provided, it is used instead of metadata.name. This field supports additional characters, has a longer length, and will replace metadata.name in future releases.
Authentication reference to Aiven token in a secret. Appears on `spec`.

Required

- `key` (string, MinLength: 1).
- `name` (string, MinLength: 1).
Kafka topic configuration. Appears on `spec`.

Optional

- `cleanup_policy` (string). cleanup.policy value.
- `compression_type` (string). compression.type value.
- `delete_retention_ms` (integer). delete.retention.ms value.
- `file_delete_delay_ms` (integer). file.delete.delay.ms value.
- `flush_messages` (integer). flush.messages value.
- `flush_ms` (integer). flush.ms value.
- `index_interval_bytes` (integer). index.interval.bytes value.
- `max_compaction_lag_ms` (integer). max.compaction.lag.ms value.
- `max_message_bytes` (integer). max.message.bytes value.
- `message_downconversion_enable` (boolean). message.downconversion.enable value.
- `message_format_version` (string). message.format.version value.
- `message_timestamp_difference_max_ms` (integer). message.timestamp.difference.max.ms value.
- `message_timestamp_type` (string). message.timestamp.type value.
- `min_cleanable_dirty_ratio` (number). min.cleanable.dirty.ratio value.
- `min_compaction_lag_ms` (integer). min.compaction.lag.ms value.
- `min_insync_replicas` (integer). min.insync.replicas value.
- `preallocate` (boolean). preallocate value.
- `retention_bytes` (integer). retention.bytes value.
- `retention_ms` (integer). retention.ms value.
- `segment_bytes` (integer). segment.bytes value.
- `segment_index_bytes` (integer). segment.index.bytes value.
- `segment_jitter_ms` (integer). segment.jitter.ms value.
- `segment_ms` (integer). segment.ms value.
Kafka topic tags. Appears on `spec`.

Required

- `key` (string, MinLength: 1, MaxLength: 64, Format: `^[a-zA-Z0-9_-]*$`).

Optional

- `value` (string, MaxLength: 256, Format: `^[a-zA-Z0-9_-]*$`).
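For illustration, tags attach key-value metadata to a topic; keys and values must match the formats above:

```yaml
apiVersion: aiven.io/v1alpha1
kind: KafkaTopic
metadata:
  name: kafka-topic
spec:
  project: my-aiven-project
  serviceName: my-kafka
  replication: 2
  partitions: 1
  tags:
    - key: env
      value: production
```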
```yaml
apiVersion: aiven.io/v1alpha1
kind: MySQL
metadata:
  name: my-mysql
spec:
  authSecretRef:
    name: aiven-token
    key: token

  connInfoSecretTarget:
    name: mysql-secret
    prefix: MY_SECRET_PREFIX_
    annotations:
      foo: bar
    labels:
      baz: egg

  project: my-aiven-project
  cloudName: google-europe-west1
  plan: business-4

  maintenanceWindowDow: sunday
  maintenanceWindowTime: "11:00:00"

  userConfig:
    backup_hour: 17
    backup_minute: 11
    ip_filter:
      - network: 0.0.0.0
        description: whatever
      - network: 10.20.0.0/16
```

MySQL is the Schema for the mysqls API.
Required

- `apiVersion` (string). Value `aiven.io/v1alpha1`.
- `kind` (string). Value `MySQL`.
- `metadata` (object). Data that identifies the object, including a `name` string and an optional `namespace`.
- `spec` (object). MySQLSpec defines the desired state of MySQL. See below for nested schema.
MySQLSpec defines the desired state of MySQL. Appears on `MySQL`.
Required

- `plan` (string, MaxLength: 128). Subscription plan.
- `project` (string, Immutable, MaxLength: 63, Format: `^[a-zA-Z0-9_-]*$`). Target project.

Optional

- `authSecretRef` (object). Authentication reference to Aiven token in a secret. See below for nested schema.
- `cloudName` (string, MaxLength: 256). Cloud the service runs in.
- `connInfoSecretTarget` (object). Information regarding secret creation. Exposed keys: `MYSQL_HOST`, `MYSQL_PORT`, `MYSQL_DATABASE`, `MYSQL_USER`, `MYSQL_PASSWORD`, `MYSQL_SSL_MODE`, `MYSQL_URI`, `MYSQL_REPLICA_URI`. See below for nested schema.
- `disk_space` (string, Format: `^[1-9][0-9]*(GiB|G)*`). The disk space of the service; possible values depend on the service type, the cloud provider and the project. Reducing it will result in the service re-balancing.
- `maintenanceWindowDow` (string, Enum: `monday`, `tuesday`, `wednesday`, `thursday`, `friday`, `saturday`, `sunday`). Day of week when maintenance operations should be performed. One of monday, tuesday, wednesday, etc.
- `maintenanceWindowTime` (string, MaxLength: 8). Time of day when maintenance operations should be performed. UTC time in HH:mm:ss format.
- `projectVPCRef` (object). Reference to a ProjectVPC resource whose ID is used as `projectVpcId` automatically. See below for nested schema.
- `projectVpcId` (string, MaxLength: 36). Identifier of the VPC the service should be in, if any.
- `serviceIntegrations` (array of objects, Immutable, MaxItems: 1). Service integrations to specify when creating a service. Not applied after initial service creation. See below for nested schema.
- `tags` (object, AdditionalProperties: string). Tags are key-value pairs that allow you to categorize services.
- `terminationProtection` (boolean). Prevent the service from being deleted. It is recommended to keep this enabled for all services.
- `userConfig` (object). MySQL specific user configuration options. See below for nested schema.
Authentication reference to Aiven token in a secret. Appears on `spec`.

Required

- `key` (string, MinLength: 1).
- `name` (string, MinLength: 1).
Information regarding secret creation. Exposed keys: `MYSQL_HOST`, `MYSQL_PORT`, `MYSQL_DATABASE`, `MYSQL_USER`, `MYSQL_PASSWORD`, `MYSQL_SSL_MODE`, `MYSQL_URI`, `MYSQL_REPLICA_URI`. Appears on `spec`.

Required

- `name` (string). Name of the secret resource to be created. By default, it is equal to the resource name.

Optional

- `annotations` (object, AdditionalProperties: string). Annotations added to the secret.
- `labels` (object, AdditionalProperties: string). Labels added to the secret.
- `prefix` (string). Prefix for the secret's keys. Added "as is" without any transformations. By default, it is equal to the kind name in uppercase + underscore, e.g. `KAFKA_`, `REDIS_`, etc.
.
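The exposed keys land in the generated Secret under the configured prefix, so an application Pod can consume them with plain Kubernetes mechanisms. A minimal sketch, assuming the mysql-secret name and MY_SECRET_PREFIX_ prefix from the example above (the Pod and image names are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mysql-client            # hypothetical consumer Pod
spec:
  containers:
    - name: app
      image: my-app:latest      # placeholder image
      env:
        - name: DATABASE_URI
          valueFrom:
            secretKeyRef:
              name: mysql-secret                 # connInfoSecretTarget.name
              key: MY_SECRET_PREFIX_MYSQL_URI    # prefix + exposed key
```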
projectVPCRef

Appears on spec.

Reference to a ProjectVPC resource, to use its ID as projectVpcId automatically.

Required

- name (string, MinLength: 1).

Optional

- namespace (string, MinLength: 1).

serviceIntegrations

Appears on spec.

Service integrations to specify when creating a service. Not applied after initial service creation.

Required

- integrationType (string, Enum: read_replica).
- sourceServiceName (string, MinLength: 1, MaxLength: 64).
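As a sketch of how these two fields fit together, the snippet below creates a second MySQL service as a read replica of the one defined earlier; the replica's name is illustrative:

```yaml
apiVersion: aiven.io/v1alpha1
kind: MySQL
metadata:
  name: my-mysql-replica          # illustrative name
spec:
  authSecretRef:
    name: aiven-token
    key: token
  project: my-aiven-project
  cloudName: google-europe-west1
  plan: business-4
  serviceIntegrations:
    - integrationType: read_replica
      sourceServiceName: my-mysql  # the primary service defined above
```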
userConfig

Appears on spec.

MySQL specific user configuration options.

Optional

- additional_backup_regions (array of strings, MaxItems: 1). Additional cloud regions for backup replication.
- admin_password (string, Immutable, Pattern: ^[a-zA-Z0-9-_]+$, MinLength: 8, MaxLength: 256). Custom password for the admin user. Defaults to a random string. This must be set only when a new service is being created.
- admin_username (string, Immutable, Pattern: ^[_A-Za-z0-9][-._A-Za-z0-9]{0,63}$, MaxLength: 64). Custom username for the admin user. This must be set only when a new service is being created.
- backup_hour (integer, Minimum: 0, Maximum: 23). The hour of day (in UTC) when backup for the service is started. A new backup is only started if the previous backup has already completed.
- backup_minute (integer, Minimum: 0, Maximum: 59). The minute of an hour when backup for the service is started. A new backup is only started if the previous backup has already completed.
- binlog_retention_period (integer, Minimum: 600, Maximum: 86400). The minimum amount of time in seconds to keep binlog entries before deletion. This may be extended for services that require binlog entries for longer than the default, for example when using the MySQL Debezium Kafka connector.
- ip_filter (array of objects, MaxItems: 1024). Allow incoming connections from CIDR address blocks, e.g. 10.20.0.0/16. See below for nested schema.
- migration (object). Migrate data from an existing server. See below for nested schema.
- mysql (object). mysql.conf configuration values. See below for nested schema.
- mysql_version (string, Enum: 8). MySQL major version.
- private_access (object). Allow access to selected service ports from private networks. See below for nested schema.
- privatelink_access (object). Allow access to selected service components through Privatelink. See below for nested schema.
- project_to_fork_from (string, Immutable, MaxLength: 63). Name of another project to fork a service from. This has effect only when a new service is being created.
- public_access (object). Allow access to selected service ports from the public internet. See below for nested schema.
- recovery_target_time (string, Immutable, MaxLength: 32). Recovery target time when forking a service. This has effect only when a new service is being created.
- service_to_fork_from (string, Immutable, MaxLength: 64). Name of another service to fork from. This has effect only when a new service is being created.
- static_ips (boolean). Use static public IP addresses.
ip_filter

Appears on spec.userConfig.

Allow incoming connections from CIDR address blocks, e.g. 10.20.0.0/16.

Required

- network (string, MaxLength: 43). CIDR address block.

Optional

- description (string, MaxLength: 1024). Description for IP filter list entry.
migration

Appears on spec.userConfig.

Migrate data from an existing server.

Required

- host (string, MaxLength: 255). Hostname or IP address of the server to migrate data from.
- port (integer, Minimum: 1, Maximum: 65535). Port number of the server to migrate data from.

Optional

- dbname (string, MaxLength: 63). Database name for bootstrapping the initial connection.
- ignore_dbs (string, MaxLength: 2048). Comma-separated list of databases to ignore during migration (supported by MySQL and PostgreSQL only at the moment).
- method (string, Enum: dump, replication). The migration method to be used (currently supported only by Redis, Dragonfly, MySQL and PostgreSQL service types).
- password (string, MaxLength: 256). Password for authentication with the server to migrate data from.
- ssl (boolean). The server to migrate data from is secured with SSL.
- username (string, MaxLength: 256). User name for authentication with the server to migrate data from.
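To make the shape of this object concrete, here is a minimal sketch of a userConfig.migration block. The hostname and credentials are placeholders; in practice you would avoid committing the password inline:

```yaml
userConfig:
  migration:
    host: legacy-mysql.example.com   # placeholder source server
    port: 3306
    dbname: defaultdb
    username: migrator               # placeholder credentials
    password: change-me
    ssl: true
    method: replication              # or "dump" for a one-off copy
    ignore_dbs: "test,staging"
```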
mysql

Appears on spec.userConfig.

mysql.conf configuration values.

Optional

- connect_timeout (integer, Minimum: 2, Maximum: 3600). The number of seconds that the mysqld server waits for a connect packet before responding with Bad handshake.
- default_time_zone (string, MinLength: 2, MaxLength: 100). Default server time zone as an offset from UTC (from -12:00 to +12:00), a time zone name, or SYSTEM to use the MySQL server default.
- group_concat_max_len (integer, Minimum: 4). The maximum permitted result length in bytes for the GROUP_CONCAT() function.
- information_schema_stats_expiry (integer, Minimum: 900, Maximum: 31536000). The time, in seconds, before cached statistics expire.
- innodb_change_buffer_max_size (integer, Minimum: 0, Maximum: 50). Maximum size for the InnoDB change buffer, as a percentage of the total size of the buffer pool. Default is 25.
- innodb_flush_neighbors (integer, Minimum: 0, Maximum: 2). Specifies whether flushing a page from the InnoDB buffer pool also flushes other dirty pages in the same extent (default is 1): 0 - dirty pages in the same extent are not flushed, 1 - flush contiguous dirty pages in the same extent, 2 - flush dirty pages in the same extent.
- innodb_ft_min_token_size (integer, Minimum: 0, Maximum: 16). Minimum length of words that are stored in an InnoDB FULLTEXT index. Changing this parameter will lead to a restart of the MySQL service.
- innodb_ft_server_stopword_table (string, Pattern: ^.+/.+$, MaxLength: 1024). This option is used to specify your own InnoDB FULLTEXT index stopword list for all InnoDB tables.
- innodb_lock_wait_timeout (integer, Minimum: 1, Maximum: 3600). The length of time in seconds an InnoDB transaction waits for a row lock before giving up. Default is 120.
- innodb_log_buffer_size (integer, Minimum: 1048576, Maximum: 4294967295). The size in bytes of the buffer that InnoDB uses to write to the log files on disk.
- innodb_online_alter_log_max_size (integer, Minimum: 65536, Maximum: 1099511627776). The upper limit in bytes on the size of the temporary log files used during online DDL operations for InnoDB tables.
- innodb_print_all_deadlocks (boolean). When enabled, information about all deadlocks in InnoDB user transactions is recorded in the error log. Disabled by default.
- innodb_read_io_threads (integer, Minimum: 1, Maximum: 64). The number of I/O threads for read operations in InnoDB. Default is 4. Changing this parameter will lead to a restart of the MySQL service.
- innodb_rollback_on_timeout (boolean). When enabled, a transaction timeout causes InnoDB to abort and roll back the entire transaction. Changing this parameter will lead to a restart of the MySQL service.
- innodb_thread_concurrency (integer, Minimum: 0, Maximum: 1000). Defines the maximum number of threads permitted inside of InnoDB. Default is 0 (infinite concurrency - no limit).
- innodb_write_io_threads (integer, Minimum: 1, Maximum: 64). The number of I/O threads for write operations in InnoDB. Default is 4. Changing this parameter will lead to a restart of the MySQL service.
- interactive_timeout (integer, Minimum: 30, Maximum: 604800). The number of seconds the server waits for activity on an interactive connection before closing it.
- internal_tmp_mem_storage_engine (string, Enum: TempTable, MEMORY). The storage engine for in-memory internal temporary tables.
- long_query_time (number, Minimum: 0, Maximum: 3600). SQL statements that take more than long_query_time seconds to execute are written to the slow query log. Default is 10s.
- max_allowed_packet (integer, Minimum: 102400, Maximum: 1073741824). Size of the largest message in bytes that can be received by the server. Default is 67108864 (64M).
- max_heap_table_size (integer, Minimum: 1048576, Maximum: 1073741824). Limits the size of internal in-memory tables. Also set tmp_table_size. Default is 16777216 (16M).
- net_buffer_length (integer, Minimum: 1024, Maximum: 1048576). Start sizes of connection buffer and result buffer. Default is 16384 (16K). Changing this parameter will lead to a restart of the MySQL service.
- net_read_timeout (integer, Minimum: 1, Maximum: 3600). The number of seconds to wait for more data from a connection before aborting the read.
- net_write_timeout (integer, Minimum: 1, Maximum: 3600). The number of seconds to wait for a block to be written to a connection before aborting the write.
- slow_query_log (boolean). Slow query log enables capturing of slow queries. Setting slow_query_log to false also truncates the mysql.slow_log table. Default is off.
- sort_buffer_size (integer, Minimum: 32768, Maximum: 1073741824). Sort buffer size in bytes for ORDER BY optimization. Default is 262144 (256K).
- sql_mode (string, Pattern: ^[A-Z_]*(,[A-Z_]+)*$, MaxLength: 1024). Global SQL mode. Set to empty to use MySQL server defaults. When creating a new service and not setting this field, the Aiven default SQL mode (strict, SQL standard compliant) will be assigned.
- sql_require_primary_key (boolean). Require a primary key to be defined for new tables or old tables modified with ALTER TABLE, and fail if missing. It is recommended to always have primary keys because various functionality may break if any large table is missing them.
- tmp_table_size (integer, Minimum: 1048576, Maximum: 1073741824). Limits the size of internal in-memory tables. Also set max_heap_table_size. Default is 16777216 (16M).
- wait_timeout (integer, Minimum: 1, Maximum: 2147483). The number of seconds the server waits for activity on a noninteractive connection before closing it.
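As a usage sketch, the nested object above maps onto the manifest like this; the values shown are illustrative, not recommendations:

```yaml
userConfig:
  mysql:
    slow_query_log: true
    long_query_time: 2              # log statements slower than 2 seconds
    max_allowed_packet: 134217728   # 128M
    sql_require_primary_key: true
```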
private_access

Appears on spec.userConfig.

Allow access to selected service ports from private networks.

Optional

- mysql (boolean). Allow clients to connect to mysql with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.
- mysqlx (boolean). Allow clients to connect to mysqlx with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.
- prometheus (boolean). Allow clients to connect to prometheus with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.

privatelink_access

Appears on spec.userConfig.

Allow access to selected service components through Privatelink.

Optional

- mysql (boolean). Enable mysql.
- mysqlx (boolean). Enable mysqlx.
- prometheus (boolean). Enable prometheus.
public_access

Appears on spec.userConfig.

Allow access to selected service ports from the public internet.

Optional

- mysql (boolean). Allow clients to connect to mysql from the public internet for service nodes that are in a project VPC or another type of private network.
- mysqlx (boolean). Allow clients to connect to mysqlx from the public internet for service nodes that are in a project VPC or another type of private network.
- prometheus (boolean). Allow clients to connect to prometheus from the public internet for service nodes that are in a project VPC or another type of private network.

```yaml
apiVersion: aiven.io/v1alpha1
kind: OpenSearch
metadata:
  name: my-os
spec:
  authSecretRef:
    name: aiven-token
    key: token

  connInfoSecretTarget:
    name: os-secret
    prefix: MY_SECRET_PREFIX_
    annotations:
      foo: bar
    labels:
      baz: egg

  project: my-aiven-project
  cloudName: google-europe-west1
  plan: startup-4
  disk_space: 80GiB

  maintenanceWindowDow: friday
  maintenanceWindowTime: 23:00:00
```
"},{"location":"api-reference/opensearch.html#OpenSearch","title":"OpenSearch","text":"OpenSearch is the Schema for the opensearches API.
Required

- apiVersion (string). Value aiven.io/v1alpha1.
- kind (string). Value OpenSearch.
- metadata (object). Data that identifies the object, including a name string and optional namespace.
- spec (object). OpenSearchSpec defines the desired state of OpenSearch. See below for nested schema.
spec

Appears on OpenSearch.

OpenSearchSpec defines the desired state of OpenSearch.

Required

- plan (string, MaxLength: 128). Subscription plan.
- project (string, Immutable, MaxLength: 63, Format: ^[a-zA-Z0-9_-]*$). Target project.

Optional

- authSecretRef (object). Authentication reference to Aiven token in a secret. See below for nested schema.
- cloudName (string, MaxLength: 256). Cloud the service runs in.
- connInfoSecretTarget (object). Information regarding secret creation. Exposed keys: OPENSEARCH_HOST, OPENSEARCH_PORT, OPENSEARCH_USER, OPENSEARCH_PASSWORD. See below for nested schema.
- disk_space (string, Format: ^[1-9][0-9]*(GiB|G)*). The disk space of the service; possible values depend on the service type, the cloud provider and the project. Reducing will result in the service re-balancing.
- maintenanceWindowDow (string, Enum: monday, tuesday, wednesday, thursday, friday, saturday, sunday). Day of week when maintenance operations should be performed. One of monday, tuesday, wednesday, etc.
- maintenanceWindowTime (string, MaxLength: 8). Time of day when maintenance operations should be performed. UTC time in HH:mm:ss format.
- projectVPCRef (object). Reference to a ProjectVPC resource, to use its ID as projectVpcId automatically. See below for nested schema.
- projectVpcId (string, MaxLength: 36). Identifier of the VPC the service should be in, if any.
- serviceIntegrations (array of objects, Immutable, MaxItems: 1). Service integrations to specify when creating a service. Not applied after initial service creation. See below for nested schema.
- tags (object, AdditionalProperties: string). Tags are key-value pairs that allow you to categorize services.
- terminationProtection (boolean). Prevent the service from being deleted. It is recommended to have this enabled for all services.
- userConfig (object). OpenSearch specific user configuration options. See below for nested schema.
authSecretRef

Appears on spec.

Authentication reference to Aiven token in a secret.

Required

- key (string, MinLength: 1).
- name (string, MinLength: 1).

connInfoSecretTarget

Appears on spec.

Information regarding secret creation. Exposed keys: OPENSEARCH_HOST, OPENSEARCH_PORT, OPENSEARCH_USER, OPENSEARCH_PASSWORD.

Required

- name (string). Name of the secret resource to be created. By default, is equal to the resource name.

Optional

- annotations (object, AdditionalProperties: string). Annotations added to the secret.
- labels (object, AdditionalProperties: string). Labels added to the secret.
- prefix (string). Prefix for the secret's keys. Added "as is" without any transformations. By default, is equal to the kind name in uppercase + underscore, e.g. KAFKA_, REDIS_, etc.

projectVPCRef

Appears on spec.

Reference to a ProjectVPC resource, to use its ID as projectVpcId automatically.

Required

- name (string, MinLength: 1).

Optional

- namespace (string, MinLength: 1).

serviceIntegrations

Appears on spec.

Service integrations to specify when creating a service. Not applied after initial service creation.

Required

- integrationType (string, Enum: read_replica).
- sourceServiceName (string, MinLength: 1, MaxLength: 64).
userConfig

Appears on spec.

OpenSearch specific user configuration options.

Optional

- additional_backup_regions (array of strings, MaxItems: 1). Additional cloud regions for backup replication.
- custom_domain (string, MaxLength: 255). Serve the web frontend using a custom CNAME pointing to the Aiven DNS name.
- disable_replication_factor_adjustment (boolean). DEPRECATED: Disable automatic replication factor adjustment for multi-node services. By default, Aiven ensures all indexes are replicated to at least two nodes. Note: due to potential data loss in case of losing a service node, this setting can no longer be activated.
- index_patterns (array of objects, MaxItems: 512). Index patterns. See below for nested schema.
- index_template (object). Template settings for all new indexes. See below for nested schema.
- ip_filter (array of objects, MaxItems: 1024). Allow incoming connections from CIDR address blocks, e.g. 10.20.0.0/16. See below for nested schema.
- keep_index_refresh_interval (boolean). Aiven automation resets index.refresh_interval to the default value for every index to be sure that indices are always visible to search. If this does not fit your case, you can disable it by setting this flag to true.
- max_index_count (integer, Minimum: 0). DEPRECATED: use index_patterns instead.
- openid (object). OpenSearch OpenID Connect configuration. See below for nested schema.
- opensearch (object). OpenSearch settings. See below for nested schema.
- opensearch_dashboards (object). OpenSearch Dashboards settings. See below for nested schema.
- opensearch_version (string, Enum: 1, 2). OpenSearch major version.
- private_access (object). Allow access to selected service ports from private networks. See below for nested schema.
- privatelink_access (object). Allow access to selected service components through Privatelink. See below for nested schema.
- project_to_fork_from (string, Immutable, MaxLength: 63). Name of another project to fork a service from. This has effect only when a new service is being created.
- public_access (object). Allow access to selected service ports from the public internet. See below for nested schema.
- recovery_basebackup_name (string, Pattern: ^[a-zA-Z0-9-_:.]+$, MaxLength: 128). Name of the basebackup to restore in a forked service.
- saml (object). OpenSearch SAML configuration. See below for nested schema.
- service_to_fork_from (string, Immutable, MaxLength: 64). Name of another service to fork from. This has effect only when a new service is being created.
- static_ips (boolean). Use static public IP addresses.
index_patterns

Appears on spec.userConfig.

Index patterns.

Required

- max_index_count (integer, Minimum: 0). Maximum number of indexes to keep.
- pattern (string, Pattern: ^[A-Za-z0-9-_.*?]+$, MaxLength: 1024). fnmatch pattern.

Optional

- sorting_algorithm (string, Enum: alphabetical, creation_date). Deletion sorting algorithm.
index_template

Appears on spec.userConfig.

Template settings for all new indexes.

Optional

- mapping_nested_objects_limit (integer, Minimum: 0, Maximum: 100000). The maximum number of nested JSON objects that a single document can contain across all nested types. This limit helps to prevent out of memory errors when a document contains too many nested objects. Default is 10000.
- number_of_replicas (integer, Minimum: 0, Maximum: 29). The number of replicas each primary shard has.
- number_of_shards (integer, Minimum: 1, Maximum: 1024). The number of primary shards that an index should have.

ip_filter

Appears on spec.userConfig.

Allow incoming connections from CIDR address blocks, e.g. 10.20.0.0/16.

Required

- network (string, MaxLength: 43). CIDR address block.

Optional

- description (string, MaxLength: 1024). Description for IP filter list entry.
openid

Appears on spec.userConfig.

OpenSearch OpenID Connect configuration.

Required

- client_id (string, MinLength: 1, MaxLength: 1024). The ID of the OpenID Connect client configured in your IdP. Required.
- client_secret (string, MinLength: 1, MaxLength: 1024). The client secret of the OpenID Connect client configured in your IdP. Required.
- connect_url (string, MaxLength: 2048). The URL of your IdP where the Security plugin can find the OpenID Connect metadata/configuration settings.

Optional

- enabled (boolean). Enables or disables OpenID Connect authentication for OpenSearch. When enabled, users can authenticate using OpenID Connect with an Identity Provider.
- header (string, MinLength: 1, MaxLength: 1024). HTTP header name of the JWT token. Optional. Default is Authorization.
- jwt_header (string, MinLength: 1, MaxLength: 1024). The HTTP header that stores the token. Typically the Authorization header with the Bearer schema: Authorization: Bearer <token>. Optional. Default is Authorization.
- jwt_url_parameter (string, MinLength: 1, MaxLength: 1024). If the token is not transmitted in the HTTP header, but as an URL parameter, define the name of the parameter here. Optional.
- refresh_rate_limit_count (integer, Minimum: 10). The maximum number of unknown key IDs in the time frame. Default is 10. Optional.
- refresh_rate_limit_time_window_ms (integer, Minimum: 10000). The time frame to use when checking the maximum number of unknown key IDs, in milliseconds. Optional. Default is 10000 (10 seconds).
- roles_key (string, MinLength: 1, MaxLength: 1024). The key in the JSON payload that stores the user's roles. The value of this key must be a comma-separated list of roles. Required only if you want to use roles in the JWT.
- scope (string, MinLength: 1, MaxLength: 1024). The scope of the identity token issued by the IdP. Optional. Default is openid profile email address phone.
- subject_key (string, MinLength: 1, MaxLength: 1024). The key in the JSON payload that stores the user's name. If not defined, the subject registered claim is used. Most IdP providers use the preferred_username claim. Optional.
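To show how these fields combine, here is a hedged sketch of an openid block; the issuer URL and client credentials are placeholders that would normally come from your IdP:

```yaml
userConfig:
  openid:
    enabled: true
    connect_url: https://idp.example.com/.well-known/openid-configuration  # placeholder IdP
    client_id: opensearch-client      # placeholder
    client_secret: change-me          # placeholder; issued by the IdP
    roles_key: roles                  # only if roles are carried in the JWT
```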
opensearch

Appears on spec.userConfig.

OpenSearch settings.

Optional

- action_auto_create_index_enabled (boolean). Explicitly allow or block automatic creation of indices. Defaults to true.
- action_destructive_requires_name (boolean). Require explicit index names when deleting.
- auth_failure_listeners (object). OpenSearch Security Plugin settings. See below for nested schema.
- cluster_max_shards_per_node (integer, Minimum: 100, Maximum: 10000). Controls the number of shards allowed in the cluster per data node.
- cluster_routing_allocation_node_concurrent_recoveries (integer, Minimum: 2, Maximum: 16). How many concurrent incoming/outgoing shard recoveries (normally replicas) are allowed to happen on a node. Defaults to 2.
- email_sender_name (string, Pattern: ^[a-zA-Z0-9-_]+$, MaxLength: 40). Sender name placeholder to be used in OpenSearch Dashboards and the OpenSearch keystore.
- email_sender_password (string, Pattern: ^[^\x00-\x1F]+$, MaxLength: 1024). Sender password for OpenSearch alerts to authenticate with the SMTP server.
- email_sender_username (string, Pattern: ^[^\x00-\x1F]+$, MaxLength: 320). Sender username for OpenSearch alerts.
- http_max_content_length (integer, Minimum: 1, Maximum: 2147483647). Maximum content length for HTTP requests to the OpenSearch HTTP API, in bytes.
- http_max_header_size (integer, Minimum: 1024, Maximum: 262144). The max size of allowed headers, in bytes.
- http_max_initial_line_length (integer, Minimum: 1024, Maximum: 65536). The max length of an HTTP URL, in bytes.
- indices_fielddata_cache_size (integer, Minimum: 3, Maximum: 100). Relative amount. Maximum amount of heap memory used for the field data cache. This is an expert setting; decreasing the value too much will increase the overhead of loading field data, while too much memory used for the field data cache will decrease the amount of heap available for other operations.
- indices_memory_index_buffer_size (integer, Minimum: 3, Maximum: 40). Percentage value. Default is 10%. Total amount of heap used for the indexing buffer before writing segments to disk. This is an expert setting. Too low a value will slow down indexing; too high a value will increase indexing performance but cause issues for query performance.
- indices_memory_max_index_buffer_size (integer, Minimum: 3, Maximum: 2048). Absolute value. Default is unbound. Doesn't work without indices.memory.index_buffer_size. Maximum amount of heap used for the indexing buffer, an absolute indices.memory.index_buffer_size hard upper limit.
- indices_memory_min_index_buffer_size (integer, Minimum: 3, Maximum: 2048). Absolute value. Default is 48mb. Doesn't work without indices.memory.index_buffer_size. Minimum amount of heap used for the indexing buffer, an absolute indices.memory.index_buffer_size hard lower limit.
- indices_queries_cache_size (integer, Minimum: 3, Maximum: 40). Percentage value. Default is 10%. Maximum amount of heap used for the query cache. This is an expert setting. Too low a value will decrease query performance and increase performance for other operations; too high a value will cause issues with other OpenSearch functionality.
- indices_query_bool_max_clause_count (integer, Minimum: 64, Maximum: 4096). Maximum number of clauses a Lucene BooleanQuery can have. The default value (1024) is relatively high, and increasing it may cause performance issues. Investigate other approaches first before increasing this value.
- indices_recovery_max_bytes_per_sec (integer, Minimum: 40, Maximum: 400). Limits total inbound and outbound recovery traffic for each node. Applies to both peer recoveries as well as snapshot recoveries (i.e., restores from a snapshot). Defaults to 40mb.
- indices_recovery_max_concurrent_file_chunks (integer, Minimum: 2, Maximum: 5). Number of file chunks sent in parallel for each recovery. Defaults to 2.
- ism_enabled (boolean). Specifies whether ISM is enabled or not.
- ism_history_enabled (boolean). Specifies whether audit history is enabled or not. The logs from ISM are automatically indexed to a logs document.
- ism_history_max_age (integer, Minimum: 1, Maximum: 2147483647). The maximum age before rolling over the audit history index, in hours.
- ism_history_max_docs (integer, Minimum: 1). The maximum number of documents before rolling over the audit history index.
- ism_history_rollover_check_period (integer, Minimum: 1, Maximum: 2147483647). The time between rollover checks for the audit history index, in hours.
- ism_history_rollover_retention_period (integer, Minimum: 1, Maximum: 2147483647). How long audit history indices are kept, in days.
- override_main_response_version (boolean). Compatibility mode sets OpenSearch to report its version as 7.10 so clients continue to work. Default is false.
- reindex_remote_whitelist (array of strings, MaxItems: 32). Whitelisted addresses for reindexing. Changing this value will cause all OpenSearch instances to restart.
- script_max_compilations_rate (string, MaxLength: 1024). Script compilation circuit breaker limits the number of inline script compilations within a period of time. Default is use-context.
- search_max_buckets (integer, Minimum: 1, Maximum: 1000000). Maximum number of aggregation buckets allowed in a single response. The OpenSearch default value is used when this is not defined.
- thread_pool_analyze_queue_size (integer, Minimum: 10, Maximum: 2000). Size for the thread pool queue. See documentation for exact details.
- thread_pool_analyze_size (integer, Minimum: 1, Maximum: 128). Size for the thread pool. See documentation for exact details. Note this may have a maximum value depending on CPU count; the value is automatically lowered if set higher than the maximum.
- thread_pool_force_merge_size (integer, Minimum: 1, Maximum: 128). Size for the thread pool. See documentation for exact details. Note this may have a maximum value depending on CPU count; the value is automatically lowered if set higher than the maximum.
- thread_pool_get_queue_size (integer, Minimum: 10, Maximum: 2000). Size for the thread pool queue. See documentation for exact details.
- thread_pool_get_size (integer, Minimum: 1, Maximum: 128). Size for the thread pool. See documentation for exact details. Note this may have a maximum value depending on CPU count; the value is automatically lowered if set higher than the maximum.
- thread_pool_search_queue_size (integer, Minimum: 10, Maximum: 2000). Size for the thread pool queue. See documentation for exact details.
- thread_pool_search_size (integer, Minimum: 1, Maximum: 128). Size for the thread pool. See documentation for exact details. Note this may have a maximum value depending on CPU count; the value is automatically lowered if set higher than the maximum.
- thread_pool_search_throttled_queue_size (integer, Minimum: 10, Maximum: 2000). Size for the thread pool queue. See documentation for exact details.
- thread_pool_search_throttled_size (integer, Minimum: 1, Maximum: 128). Size for the thread pool. See documentation for exact details. Note this may have a maximum value depending on CPU count; the value is automatically lowered if set higher than the maximum.
- thread_pool_write_queue_size (integer, Minimum: 10, Maximum: 2000). Size for the thread pool queue. See documentation for exact details.
- thread_pool_write_size (integer, Minimum: 1, Maximum: 128). Size for the thread pool. See documentation for exact details. Note this may have a maximum value depending on CPU count; the value is automatically lowered if set higher than the maximum.

auth_failure_listeners

Appears on spec.userConfig.opensearch.

OpenSearch Security Plugin settings.

Optional

- internal_authentication_backend_limiting (object). See below for nested schema.
- ip_rate_limiting (object). IP address rate limiting settings. See below for nested schema.

internal_authentication_backend_limiting

Appears on spec.userConfig.opensearch.auth_failure_listeners.

Optional

- allowed_tries (integer, Minimum: 0, Maximum: 2147483647). The number of login attempts allowed before login is blocked.
- authentication_backend (string, Enum: internal, MaxLength: 1024). internal_authentication_backend_limiting.authentication_backend.
- block_expiry_seconds (integer, Minimum: 0, Maximum: 2147483647). The duration of time that login remains blocked after a failed login.
- max_blocked_clients (integer, Minimum: 0, Maximum: 2147483647). internal_authentication_backend_limiting.max_blocked_clients.
- max_tracked_clients (integer, Minimum: 0, Maximum: 2147483647). The maximum number of tracked IP addresses that have failed login.
- time_window_seconds (integer, Minimum: 0, Maximum: 2147483647). The window of time in which the value for allowed_tries is enforced.
- type (string, Enum: username, MaxLength: 1024). internal_authentication_backend_limiting.type.
ip_rate_limiting

Appears on spec.userConfig.opensearch.auth_failure_listeners.

IP address rate limiting settings.

Optional

- allowed_tries (integer, Minimum: 1, Maximum: 2147483647). The number of login attempts allowed before login is blocked.
- block_expiry_seconds (integer, Minimum: 1, Maximum: 36000). The duration of time that login remains blocked after a failed login.
- max_blocked_clients (integer, Minimum: 0, Maximum: 2147483647). The maximum number of blocked IP addresses.
- max_tracked_clients (integer, Minimum: 0, Maximum: 2147483647). The maximum number of tracked IP addresses that have failed login.
- time_window_seconds (integer, Minimum: 1, Maximum: 36000). The window of time in which the value for allowed_tries is enforced.
- type (string, Enum: ip, MaxLength: 1024). The type of rate limiting.
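As a sketch of how the nested objects above are wired together under opensearch.auth_failure_listeners (the numbers are illustrative only):

```yaml
userConfig:
  opensearch:
    auth_failure_listeners:
      ip_rate_limiting:
        type: ip
        allowed_tries: 5            # block an IP after 5 failed logins
        time_window_seconds: 3600   # counted over one hour
        block_expiry_seconds: 600   # unblock after 10 minutes
```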
opensearch_dashboards

Appears on spec.userConfig.

OpenSearch Dashboards settings.

Optional

- enabled (boolean). Enable or disable OpenSearch Dashboards.
- max_old_space_size (integer, Minimum: 64, Maximum: 2048). Limits the maximum amount of memory (in MiB) the OpenSearch Dashboards process can use. This sets the max_old_space_size option of the nodejs running the OpenSearch Dashboards. Note: the memory reserved by OpenSearch Dashboards is not available for OpenSearch.
- opensearch_request_timeout (integer, Minimum: 5000, Maximum: 120000). Timeout in milliseconds for requests made by OpenSearch Dashboards towards OpenSearch.

private_access

Appears on spec.userConfig.

Allow access to selected service ports from private networks.

Optional

- opensearch (boolean). Allow clients to connect to opensearch with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.
- opensearch_dashboards (boolean). Allow clients to connect to opensearch_dashboards with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.
- prometheus (boolean). Allow clients to connect to prometheus with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.

privatelink_access

Appears on spec.userConfig.

Allow access to selected service components through Privatelink.

Optional

- opensearch (boolean). Enable opensearch.
- opensearch_dashboards (boolean). Enable opensearch_dashboards.
- prometheus (boolean). Enable prometheus.

public_access

Appears on spec.userConfig.

Allow access to selected service ports from the public internet.

Optional

- opensearch (boolean). Allow clients to connect to opensearch from the public internet for service nodes that are in a project VPC or another type of private network.
- opensearch_dashboards (boolean). Allow clients to connect to opensearch_dashboards from the public internet for service nodes that are in a project VPC or another type of private network.
- prometheus (boolean). Allow clients to connect to prometheus from the public internet for service nodes that are in a project VPC or another type of private network.
saml

Appears on spec.userConfig.

OpenSearch SAML configuration.

Required

- enabled (boolean). Enables or disables SAML-based authentication for OpenSearch. When enabled, users can authenticate using SAML with an Identity Provider.
- idp_entity_id (string, MinLength: 1, MaxLength: 1024). The unique identifier for the Identity Provider (IdP) entity that is used for SAML authentication. This value is typically provided by the IdP.
- idp_metadata_url (string, MinLength: 1, MaxLength: 2048). The URL of the SAML metadata for the Identity Provider (IdP). This is used to configure SAML-based authentication with the IdP.
- sp_entity_id (string, MinLength: 1, MaxLength: 1024). The unique identifier for the Service Provider (SP) entity that is used for SAML authentication. This value is typically provided by the SP.

Optional

- idp_pemtrustedcas_content (string, MaxLength: 16384). This parameter specifies the PEM-encoded root certificate authority (CA) content for the SAML identity provider (IdP) server verification. The root CA content is used to verify the SSL/TLS certificate presented by the server.
- roles_key (string, MinLength: 1, MaxLength: 256). Optional. Specifies the attribute in the SAML response where role information is stored, if available. Role attributes are not required for SAML authentication, but can be included in SAML assertions by most Identity Providers (IdPs) to determine user access levels or permissions.
- subject_key (string, MinLength: 1, MaxLength: 256). Optional. Specifies the attribute in the SAML response where the subject identifier is stored. If not configured, the NameID attribute is used by default.

```yaml
apiVersion: aiven.io/v1alpha1
kind: PostgreSQL
metadata:
  name: my-postgresql
spec:
  authSecretRef:
    name: aiven-token
    key: token

  connInfoSecretTarget:
    name: postgresql-secret
    prefix: MY_SECRET_PREFIX_
    annotations:
      foo: bar
    labels:
      baz: egg

  project: aiven-project-name
  cloudName: google-europe-west1
  plan: startup-4

  maintenanceWindowDow: sunday
  maintenanceWindowTime: 11:00:00

  userConfig:
    pg_version: "15"
```
"},{"location":"api-reference/postgresql.html#PostgreSQL","title":"PostgreSQL","text":"PostgreSQL is the Schema for the postgresql API.
Required

- apiVersion (string). Value aiven.io/v1alpha1.
- kind (string). Value PostgreSQL.
- metadata (object). Data that identifies the object, including a name string and optional namespace.
- spec (object). PostgreSQLSpec defines the desired state of a postgres instance. See below for nested schema.
spec

Appears on PostgreSQL.

PostgreSQLSpec defines the desired state of a postgres instance.

Required

- plan (string, MaxLength: 128). Subscription plan.
- project (string, Immutable, MaxLength: 63, Format: ^[a-zA-Z0-9_-]*$). Target project.

Optional

- authSecretRef (object). Authentication reference to Aiven token in a secret. See below for nested schema.
- cloudName (string, MaxLength: 256). Cloud the service runs in.
- connInfoSecretTarget (object). Information regarding secret creation. Exposed keys: POSTGRESQL_HOST, POSTGRESQL_PORT, POSTGRESQL_DATABASE, POSTGRESQL_USER, POSTGRESQL_PASSWORD, POSTGRESQL_SSLMODE, POSTGRESQL_DATABASE_URI. See below for nested schema.
- disk_space (string, Format: ^[1-9][0-9]*(GiB|G)*). The disk space of the service; possible values depend on the service type, the cloud provider and the project. Reducing will result in the service re-balancing.
- maintenanceWindowDow (string, Enum: monday, tuesday, wednesday, thursday, friday, saturday, sunday). Day of week when maintenance operations should be performed. One of monday, tuesday, wednesday, etc.
- maintenanceWindowTime (string, MaxLength: 8). Time of day when maintenance operations should be performed. UTC time in HH:mm:ss format.
- projectVPCRef (object). Reference to a ProjectVPC resource, to use its ID as projectVpcId automatically. See below for nested schema.
- projectVpcId (string, MaxLength: 36). Identifier of the VPC the service should be in, if any.
- serviceIntegrations (array of objects, Immutable, MaxItems: 1). Service integrations to specify when creating a service. Not applied after initial service creation. See below for nested schema.
- tags (object, AdditionalProperties: string). Tags are key-value pairs that allow you to categorize services.
- terminationProtection (boolean). Prevent the service from being deleted. It is recommended to have this enabled for all services.
- userConfig (object). PostgreSQL specific user configuration options. See below for nested schema.
authSecretRef

Appears on spec.

Authentication reference to Aiven token in a secret.

Required

- key (string, MinLength: 1).
- name (string, MinLength: 1).

connInfoSecretTarget

Appears on spec.

Information regarding secret creation. Exposed keys: POSTGRESQL_HOST, POSTGRESQL_PORT, POSTGRESQL_DATABASE, POSTGRESQL_USER, POSTGRESQL_PASSWORD, POSTGRESQL_SSLMODE, POSTGRESQL_DATABASE_URI.

Required

- name (string). Name of the secret resource to be created. By default, is equal to the resource name.

Optional

- annotations (object, AdditionalProperties: string). Annotations added to the secret.
- labels (object, AdditionalProperties: string). Labels added to the secret.
- prefix (string). Prefix for the secret's keys. Added "as is" without any transformations. By default, is equal to the kind name in uppercase + underscore, e.g. KAFKA_, REDIS_, etc.

projectVPCRef

Appears on spec.

Reference to a ProjectVPC resource, to use its ID as projectVpcId automatically.

Required

- name (string, MinLength: 1).

Optional

- namespace (string, MinLength: 1).

serviceIntegrations

Appears on spec.

Service integrations to specify when creating a service. Not applied after initial service creation.

Required

- integrationType (string, Enum: read_replica).
- sourceServiceName (string, MinLength: 1, MaxLength: 64).
userConfig

Appears on spec.

PostgreSQL specific user configuration options.

Optional

- additional_backup_regions (array of strings, MaxItems: 1). Additional cloud regions for backup replication.
- admin_password (string, Immutable, Pattern: ^[a-zA-Z0-9-_]+$, MinLength: 8, MaxLength: 256). Custom password for the admin user. Defaults to a random string. This must be set only when a new service is being created.
- admin_username (string, Immutable, Pattern: ^[_A-Za-z0-9][-._A-Za-z0-9]{0,63}$, MaxLength: 64). Custom username for the admin user. This must be set only when a new service is being created.
- backup_hour (integer, Minimum: 0, Maximum: 23). The hour of day (in UTC) when backup for the service is started. A new backup is only started if the previous backup has already completed.
- backup_minute (integer, Minimum: 0, Maximum: 59). The minute of an hour when backup for the service is started. A new backup is only started if the previous backup has already completed.
- enable_ipv6 (boolean). Register AAAA DNS records for the service, and allow IPv6 packets to service ports.
- ip_filter (array of objects, MaxItems: 1024). Allow incoming connections from CIDR address blocks, e.g. 10.20.0.0/16. See below for nested schema.
- migration (object). Migrate data from an existing server. See below for nested schema.
- pg (object). postgresql.conf configuration values. See below for nested schema.
- pg_read_replica (boolean). Should the service which is being forked be a read replica (deprecated, use the read_replica service integration instead).
- pg_service_to_fork_from (string, Immutable, MaxLength: 64). Name of the PG service from which to fork (deprecated, use service_to_fork_from). This has effect only when a new service is being created.
- pg_stat_monitor_enable (boolean). Enable the pg_stat_monitor extension. Enabling this extension will cause the cluster to be restarted. When this extension is enabled, pg_stat_statements results for utility commands are unreliable.
- pg_version (string, Enum: 11, 12, 13, 14, 15). PostgreSQL major version.
- pgbouncer (object). PGBouncer connection pooling settings. See below for nested schema.
- pglookout (object). PGLookout settings. See below for nested schema.
- private_access (object). Allow access to selected service ports from private networks. See below for nested schema.
- privatelink_access (object). Allow access to selected service components through Privatelink. See below for nested schema.
- project_to_fork_from (string, Immutable, MaxLength: 63). Name of another project to fork a service from. This has effect only when a new service is being created.
- public_access (object). Allow access to selected service ports from the public internet. See below for nested schema.
- recovery_target_time (string, Immutable, MaxLength: 32). Recovery target time when forking a service. This has effect only when a new service is being created.
- service_to_fork_from (string, Immutable, MaxLength: 64). Name of another service to fork from. This has effect only when a new service is being created.
- shared_buffers_percentage (number, Minimum: 20, Maximum: 60). Percentage of total RAM that the database server uses for shared memory buffers. Valid range is 20-60 (float), which corresponds to 20% - 60%. This setting adjusts the shared_buffers configuration value.
- static_ips (boolean). Use static public IP addresses.
- synchronous_replication (string, Enum: quorum, off). Synchronous replication type. Note that the service plan also needs to support synchronous replication.
- timescaledb (object). TimescaleDB extension configuration values. See below for nested schema.
- variant (string, Enum: aiven, timescale). Variant of the PostgreSQL service; may affect the features that are exposed by default.
- work_mem (integer, Minimum: 1, Maximum: 1024). Sets the maximum amount of memory to be used by a query operation (such as a sort or hash table) before writing to temporary disk files, in MB. Default is 1MB + 0.075% of total RAM (up to 32MB).
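A hedged sketch combining several of the options above in one userConfig block; the values are illustrative only, not tuning advice:

```yaml
userConfig:
  pg_version: "15"
  shared_buffers_percentage: 25     # 25% of RAM for shared buffers
  synchronous_replication: quorum   # plan must support synchronous replication
  pg:
    log_min_duration_statement: 500 # log statements slower than 500 ms
    jit: true
```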
ip_filter

Appears on spec.userConfig.

Allow incoming connections from CIDR address blocks, e.g. 10.20.0.0/16.

Required

- network (string, MaxLength: 43). CIDR address block.

Optional

- description (string, MaxLength: 1024). Description for IP filter list entry.
migration

Appears on spec.userConfig.

Migrate data from an existing server.

Required

- host (string, MaxLength: 255). Hostname or IP address of the server to migrate data from.
- port (integer, Minimum: 1, Maximum: 65535). Port number of the server to migrate data from.

Optional

- dbname (string, MaxLength: 63). Database name for bootstrapping the initial connection.
- ignore_dbs (string, MaxLength: 2048). Comma-separated list of databases to ignore during migration (supported by MySQL and PostgreSQL only at the moment).
- method (string, Enum: dump, replication). The migration method to be used (currently supported only by Redis, Dragonfly, MySQL and PostgreSQL service types).
- password (string, MaxLength: 256). Password for authentication with the server to migrate data from.
- ssl (boolean). The server to migrate data from is secured with SSL.
- username (string, MaxLength: 256). User name for authentication with the server to migrate data from.
pg

Appears on spec.userConfig.

postgresql.conf configuration values.

Optional

- autovacuum_analyze_scale_factor (number, Minimum: 0, Maximum: 1). Specifies a fraction of the table size to add to autovacuum_analyze_threshold when deciding whether to trigger an ANALYZE. The default is 0.2 (20% of table size).
- autovacuum_analyze_threshold (integer, Minimum: 0, Maximum: 2147483647). Specifies the minimum number of inserted, updated or deleted tuples needed to trigger an ANALYZE in any one table. The default is 50 tuples.
- autovacuum_freeze_max_age (integer, Minimum: 200000000, Maximum: 1500000000). Specifies the maximum age (in transactions) that a table's pg_class.relfrozenxid field can attain before a VACUUM operation is forced to prevent transaction ID wraparound within the table. Note that the system will launch autovacuum processes to prevent wraparound even when autovacuum is otherwise disabled. This parameter will cause the server to be restarted.
- autovacuum_max_workers (integer, Minimum: 1, Maximum: 20). Specifies the maximum number of autovacuum processes (other than the autovacuum launcher) that may be running at any one time. The default is three. This parameter can only be set at server start.
- autovacuum_naptime (integer, Minimum: 1, Maximum: 86400). Specifies the minimum delay between autovacuum runs on any given database. The delay is measured in seconds, and the default is one minute.
- autovacuum_vacuum_cost_delay (integer, Minimum: -1, Maximum: 100). Specifies the cost delay value that will be used in automatic VACUUM operations. If -1 is specified, the regular vacuum_cost_delay value will be used. The default value is 20 milliseconds.
- autovacuum_vacuum_cost_limit (integer, Minimum: -1, Maximum: 10000). Specifies the cost limit value that will be used in automatic VACUUM operations. If -1 is specified (which is the default), the regular vacuum_cost_limit value will be used.
- autovacuum_vacuum_scale_factor (number, Minimum: 0, Maximum: 1). Specifies a fraction of the table size to add to autovacuum_vacuum_threshold when deciding whether to trigger a VACUUM. The default is 0.2 (20% of table size).
- autovacuum_vacuum_threshold (integer, Minimum: 0, Maximum: 2147483647). Specifies the minimum number of updated or deleted tuples needed to trigger a VACUUM in any one table. The default is 50 tuples.
- bgwriter_delay (integer, Minimum: 10, Maximum: 10000). Specifies the delay between activity rounds for the background writer in milliseconds. Default is 200.
- bgwriter_flush_after (integer, Minimum: 0, Maximum: 2048). Whenever more than bgwriter_flush_after bytes have been written by the background writer, attempt to force the OS to issue these writes to the underlying storage. Specified in kilobytes, default is 512. A setting of 0 disables forced writeback.
- bgwriter_lru_maxpages (integer, Minimum: 0, Maximum: 1073741823). In each round, no more than this many buffers will be written by the background writer. Setting this to zero disables background writing. Default is 100.
- bgwriter_lru_multiplier (number, Minimum: 0, Maximum: 10). The average recent need for new buffers is multiplied by bgwriter_lru_multiplier to arrive at an estimate of the number that will be needed during the next round (up to bgwriter_lru_maxpages). 1.0 represents a "just in time" policy of writing exactly the number of buffers predicted to be needed. Larger values provide some cushion against spikes in demand, while smaller values intentionally leave writes to be done by server processes. The default is 2.0.
- deadlock_timeout (integer, Minimum: 500, Maximum: 1800000). This is the amount of time, in milliseconds, to wait on a lock before checking to see if there is a deadlock condition.
- default_toast_compression (string, Enum: lz4, pglz). Specifies the default TOAST compression method for values of compressible columns (the default is lz4).
- idle_in_transaction_session_timeout (integer, Minimum: 0, Maximum: 604800000). Time out sessions with open transactions after this number of milliseconds.
- jit (boolean). Controls system-wide use of Just-in-Time Compilation (JIT).
- log_autovacuum_min_duration (integer, Minimum: -1, Maximum: 2147483647). Causes each action executed by autovacuum to be logged if it ran for at least the specified number of milliseconds. Setting this to zero logs all autovacuum actions. Minus-one (the default) disables logging autovacuum actions.
- log_error_verbosity (string, Enum: TERSE, DEFAULT, VERBOSE). Controls the amount of detail written in the server log for each message that is logged.
- log_line_prefix (string, Enum: 'pid=%p,user=%u,db=%d,app=%a,client=%h ', '%t [%p]: [%l-1] user=%u,db=%d,app=%a,client=%h ', '%m [%p] %q[user=%u,db=%d,app=%a] '). Choose from one of the available log formats. These can support popular log analyzers like pgbadger and pganalyze.
- log_min_duration_statement (integer, Minimum: -1, Maximum: 86400000). Log statements that take more than this number of milliseconds to run; -1 disables.
- log_temp_files (integer, Minimum: -1, Maximum: 2147483647). Log statements for each temporary file created larger than this number of kilobytes; -1 disables.
- max_files_per_process (integer, Minimum: 1000, Maximum: 4096). PostgreSQL maximum number of files that can be open per process.
- max_locks_per_transaction (integer, Minimum: 64, Maximum: 6400). PostgreSQL maximum locks per transaction.
- max_logical_replication_workers (integer, Minimum: 4, Maximum: 64). PostgreSQL maximum logical replication workers (taken from the pool of max_parallel_workers).
- max_parallel_workers (integer, Minimum: 0, Maximum: 96). Sets the maximum number of workers that the system can support for parallel queries.
- max_parallel_workers_per_gather (integer, Minimum: 0, Maximum: 96). Sets the maximum number of workers that can be started by a single Gather or Gather Merge node.
- max_pred_locks_per_transaction (integer, Minimum: 64, Maximum: 5120). PostgreSQL maximum predicate locks per transaction.
- max_prepared_transactions (integer, Minimum: 0, Maximum: 10000). PostgreSQL maximum prepared transactions.
- max_replication_slots (integer, Minimum: 8, Maximum: 64). PostgreSQL maximum replication slots.
- max_slot_wal_keep_size (integer, Minimum: -1, Maximum: 2147483647). PostgreSQL maximum WAL size (MB) reserved for replication slots. Default is -1 (unlimited). The wal_keep_size minimum WAL size setting takes precedence over this.
- max_stack_depth (integer, Minimum: 2097152, Maximum: 6291456). Maximum depth of the stack in bytes.
- max_standby_archive_delay (integer, Minimum: 1, Maximum: 43200000). Max standby archive delay in milliseconds.
- max_standby_streaming_delay (integer, Minimum: 1, Maximum: 43200000). Max standby streaming delay in milliseconds.
- max_wal_senders (integer, Minimum: 20, Maximum: 64). PostgreSQL maximum WAL senders.
- max_worker_processes (integer, Minimum: 8, Maximum: 96). Sets the maximum number of background processes that the system can support.
- pg_partman_bgw.interval (integer, Minimum: 3600, Maximum: 604800). Sets the time interval to run pg_partman's scheduled tasks.
- pg_partman_bgw.role (string, Pattern: ^[_A-Za-z0-9][-._A-Za-z0-9]{0,63}$, MaxLength: 64). Controls which role to use for pg_partman's scheduled background tasks.
- pg_stat_monitor.pgsm_enable_query_plan (boolean). Enables or disables query plan monitoring.
- pg_stat_monitor.pgsm_max_buckets (integer, Minimum: 1, Maximum: 10). Sets the maximum number of buckets.
- pg_stat_statements.track (string, Enum: all, top, none). Controls which statements are counted. Specify top to track top-level statements (those issued directly by clients), all to also track nested statements (such as statements invoked within functions), or none to disable statement statistics collection. The default value is top.
- temp_file_limit (integer, Minimum: -1, Maximum: 2147483647). PostgreSQL temporary file limit in KiB; -1 for unlimited.
- timezone (string, MaxLength: 64). PostgreSQL service timezone.
- track_activity_query_size (integer, Minimum: 1024, Maximum: 10240). Specifies the number of bytes reserved to track the currently executing command for each active session.
- track_commit_timestamp (string, Enum: off, on). Record commit time of transactions.
- track_functions (string, Enum: all, pl, none). Enables tracking of function call counts and time used.
- track_io_timing (string, Enum: off, on). Enables timing of database I/O calls. This parameter is off by default, because it will repeatedly query the operating system for the current time, which may cause significant overhead on some platforms.
- wal_sender_timeout (integer). Terminate replication connections that are inactive for longer than this amount of time, in milliseconds. Setting this value to zero disables the timeout.
- wal_writer_delay (integer, Minimum: 10, Maximum: 200). WAL flush interval in milliseconds. Note that setting this value to lower than the default 200ms may negatively impact performance.
pgbouncer

Appears on spec.userConfig.

PGBouncer connection pooling settings.

Optional

- autodb_idle_timeout (integer, Minimum: 0, Maximum: 86400). If the automatically created database pools have been unused this many seconds, they are freed. If 0, then timeout is disabled. [seconds].
- autodb_max_db_connections (integer, Minimum: 0, Maximum: 2147483647). Do not allow more than this many server connections per database (regardless of user). Setting it to 0 means unlimited.
- autodb_pool_mode (string, Enum: session, transaction, statement). PGBouncer pool mode.
- autodb_pool_size (integer, Minimum: 0, Maximum: 10000). If non-zero, then automatically create a pool of that size per user when a pool doesn't exist.
- ignore_startup_parameters (array of strings, MaxItems: 32). List of parameters to ignore when given in startup packet.
- min_pool_size (integer, Minimum: 0, Maximum: 10000). Add more server connections to the pool if below this number. Improves behavior when the usual load comes suddenly back after a period of total inactivity. The value is effectively capped at the pool size.
- server_idle_timeout (integer, Minimum: 0, Maximum: 86400). If a server connection has been idle more than this many seconds, it will be dropped. If 0, then timeout is disabled. [seconds].
- server_lifetime (integer, Minimum: 60, Maximum: 86400). The pooler will close an unused server connection that has been connected longer than this. [seconds].
- server_reset_query_always (boolean). Run server_reset_query (DISCARD ALL) in all pooling modes.
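For instance, a hedged sketch enabling transaction pooling with a modest idle timeout; the values are illustrative:

```yaml
userConfig:
  pgbouncer:
    autodb_pool_mode: transaction
    autodb_pool_size: 50            # per-user pool created on demand
    autodb_idle_timeout: 600        # free unused pools after 10 minutes
    server_reset_query_always: false
```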
pglookout

Appears on spec.userConfig.

PGLookout settings.

Required

- max_failover_replication_time_lag (integer, Minimum: 10). Number of seconds of master unavailability before triggering a database failover to standby.

private_access

Appears on spec.userConfig.

Allow access to selected service ports from private networks.

Optional

- pg (boolean). Allow clients to connect to pg with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.
- pgbouncer (boolean). Allow clients to connect to pgbouncer with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.
- prometheus (boolean). Allow clients to connect to prometheus with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.

privatelink_access

Appears on spec.userConfig.

Allow access to selected service components through Privatelink.

Optional

- pg (boolean). Enable pg.
- pgbouncer (boolean). Enable pgbouncer.
- prometheus (boolean). Enable prometheus.

public_access

Appears on spec.userConfig.

Allow access to selected service ports from the public internet.

Optional

- pg (boolean). Allow clients to connect to pg from the public internet for service nodes that are in a project VPC or another type of private network.
- pgbouncer (boolean). Allow clients to connect to pgbouncer from the public internet for service nodes that are in a project VPC or another type of private network.
- prometheus (boolean). Allow clients to connect to prometheus from the public internet for service nodes that are in a project VPC or another type of private network.

timescaledb

Appears on spec.userConfig.

TimescaleDB extension configuration values.

Required

- max_background_workers (integer, Minimum: 1, Maximum: 4096). The number of background workers for timescaledb operations. You should configure this setting to the sum of your number of databases and the total number of concurrent background workers you want running at any given point in time.

```yaml
apiVersion: aiven.io/v1alpha1
kind: Project
metadata:
  name: my-project
spec:
  authSecretRef:
    name: aiven-token
    key: token

  connInfoSecretTarget:
    name: project-secret
    prefix: MY_SECRET_PREFIX_
    annotations:
      foo: bar
    labels:
      baz: egg

  tags:
    env: prod

  billingAddress: NYC
  cloud: aws-eu-west-1
```
"},{"location":"api-reference/project.html#Project","title":"Project","text":"Project is the Schema for the projects API.
Required

- apiVersion (string). Value aiven.io/v1alpha1.
- kind (string). Value Project.
- metadata (object). Data that identifies the object, including a name string and optional namespace.
- spec (object). ProjectSpec defines the desired state of Project. See below for nested schema.
spec

Appears on Project.

ProjectSpec defines the desired state of Project.

Optional

- accountId (string, MaxLength: 32). Account ID.
- authSecretRef (object). Authentication reference to Aiven token in a secret. See below for nested schema.
- billingAddress (string, MaxLength: 1000). Billing name and address of the project.
- billingCurrency (string, Enum: AUD, CAD, CHF, DKK, EUR, GBP, NOK, SEK, USD). Billing currency.
- billingEmails (array of strings, MaxItems: 10). Billing contact emails of the project.
- billingExtraText (string, MaxLength: 1000). Extra text to be included in all project invoices, e.g. purchase order or cost center number.
- billingGroupId (string, MinLength: 36, MaxLength: 36). BillingGroup ID.
- cardId (string, MaxLength: 64). Credit card ID; the ID may be either the last 4 digits of the card or the actual ID.
- cloud (string, MaxLength: 256). Target cloud, example: aws-eu-central-1.
- connInfoSecretTarget (object). Information regarding secret creation. Exposed keys: PROJECT_CA_CERT. See below for nested schema.
- copyFromProject (string, MaxLength: 63). Project name from which to copy settings to the new project.
- countryCode (string, MinLength: 2, MaxLength: 2). Billing country code of the project.
- tags (object, AdditionalProperties: string). Tags are key-value pairs that allow you to categorize projects.
- technicalEmails (array of strings, MaxItems: 10). Technical contact emails of the project.
Authentication reference to Aiven token in a secret.
Required
key
(string, MinLength: 1). name
(string, MinLength: 1). Appears on spec
.
Information regarding secret creation. Exposed keys: PROJECT_CA_CERT
.
Required
name
(string). Name of the secret resource to be created. By default, is equal to the resource name.Optional
annotations
(object, AdditionalProperties: string). Annotations added to the secret.labels
(object, AdditionalProperties: string). Labels added to the secret.prefix
(string). Prefix for the secret's keys. Added \"as is\" without any transformations. By default, is equal to the kind name in uppercase + underscore, e.g. KAFKA_
, REDIS_
, etc.apiVersion: aiven.io/v1alpha1\nkind: ProjectVPC\nmetadata:\n name: my-project-vpc\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n project: aiven-project-name\n cloudName: google-europe-west1\n networkCidr: 10.0.0.0/24\n
"},{"location":"api-reference/projectvpc.html#ProjectVPC","title":"ProjectVPC","text":"ProjectVPC is the Schema for the projectvpcs API.
Required
apiVersion
(string). Value aiven.io/v1alpha1.
kind
(string). Value ProjectVPC.
metadata
(object). Data that identifies the object, including a name string and optional namespace.
spec
(object). ProjectVPCSpec defines the desired state of ProjectVPC. See below for nested schema.
Appears on ProjectVPC.
ProjectVPCSpec defines the desired state of ProjectVPC.
Required
cloudName
(string, Immutable, MaxLength: 256). Cloud the VPC is in.
networkCidr
(string, Immutable, MaxLength: 36). Network address range used by the VPC, like 192.168.0.0/24.
project
(string, Immutable, MaxLength: 63, Format: ^[a-zA-Z0-9_-]*$). The project the VPC belongs to.
Optional
authSecretRef
(object). Authentication reference to Aiven token in a secret. See below for nested schema.
Appears on spec.
Authentication reference to Aiven token in a secret.
Required
key
(string, MinLength: 1).
name
(string, MinLength: 1).

apiVersion: aiven.io/v1alpha1
kind: Redis
metadata:
  name: k8s-redis
spec:
  authSecretRef:
    name: aiven-token
    key: token

  connInfoSecretTarget:
    name: redis-token
    prefix: MY_SECRET_PREFIX_
    annotations:
      foo: bar
    labels:
      baz: egg

  project: my-aiven-project
  cloudName: google-europe-west1
  plan: startup-4

  maintenanceWindowDow: friday
  maintenanceWindowTime: 23:00:00

  userConfig:
    redis_maxmemory_policy: "allkeys-random"

Redis is the Schema for the redis API.
Required
apiVersion
(string). Value aiven.io/v1alpha1.
kind
(string). Value Redis.
metadata
(object). Data that identifies the object, including a name string and optional namespace.
spec
(object). RedisSpec defines the desired state of Redis. See below for nested schema.
Appears on Redis.
RedisSpec defines the desired state of Redis.
Required
plan
(string, MaxLength: 128). Subscription plan.
project
(string, Immutable, MaxLength: 63, Format: ^[a-zA-Z0-9_-]*$). Target project.
Optional
authSecretRef
(object). Authentication reference to Aiven token in a secret. See below for nested schema.
cloudName
(string, MaxLength: 256). Cloud the service runs in.
connInfoSecretTarget
(object). Information regarding secret creation. Exposed keys: REDIS_HOST, REDIS_PORT, REDIS_USER, REDIS_PASSWORD. See below for nested schema.
disk_space
(string, Format: ^[1-9][0-9]*(GiB|G)*). The disk space of the service; possible values depend on the service type, the cloud provider and the project. Reducing it will result in the service re-balancing.
maintenanceWindowDow
(string, Enum: monday, tuesday, wednesday, thursday, friday, saturday, sunday). Day of week when maintenance operations should be performed. One of monday, tuesday, wednesday, etc.
maintenanceWindowTime
(string, MaxLength: 8). Time of day when maintenance operations should be performed. UTC time in HH:mm:ss format.
projectVPCRef
(object). ProjectVPCRef reference to ProjectVPC resource to use its ID as ProjectVPCID automatically. See below for nested schema.
projectVpcId
(string, MaxLength: 36). Identifier of the VPC the service should be in, if any.
serviceIntegrations
(array of objects, Immutable, MaxItems: 1). Service integrations to specify when creating a service. Not applied after initial service creation. See below for nested schema.
tags
(object, AdditionalProperties: string). Tags are key-value pairs that allow you to categorize services.
terminationProtection
(boolean). Prevent service from being deleted. It is recommended to have this enabled for all services.
userConfig
(object). Redis specific user configuration options. See below for nested schema.
Appears on spec.
Authentication reference to Aiven token in a secret.
Required
key
(string, MinLength: 1).
name
(string, MinLength: 1).
Appears on spec.
Information regarding secret creation. Exposed keys: REDIS_HOST, REDIS_PORT, REDIS_USER, REDIS_PASSWORD.
Required
name
(string). Name of the secret resource to be created. By default, is equal to the resource name.
Optional
annotations
(object, AdditionalProperties: string). Annotations added to the secret.
labels
(object, AdditionalProperties: string). Labels added to the secret.
prefix
(string). Prefix for the secret's keys. Added "as is" without any transformations. By default, is equal to the kind name in uppercase + underscore, e.g. KAFKA_, REDIS_, etc.
Appears on spec.
ProjectVPCRef reference to ProjectVPC resource to use its ID as ProjectVPCID automatically.
Required
name
(string, MinLength: 1).
Optional
namespace
(string, MinLength: 1).
Appears on spec.
Service integrations to specify when creating a service. Not applied after initial service creation.
Required
integrationType
(string, Enum: read_replica).
sourceServiceName
(string, MinLength: 1, MaxLength: 64).
Appears on spec.
Redis specific user configuration options.
Optional
additional_backup_regions
(array of strings, MaxItems: 1). Additional Cloud Regions for Backup Replication.
ip_filter
(array of objects, MaxItems: 1024). Allow incoming connections from CIDR address block, e.g. 10.20.0.0/16. See below for nested schema.
migration
(object). Migrate data from existing server. See below for nested schema.
private_access
(object). Allow access to selected service ports from private networks. See below for nested schema.
privatelink_access
(object). Allow access to selected service components through Privatelink. See below for nested schema.
project_to_fork_from
(string, Immutable, MaxLength: 63). Name of another project to fork a service from. This has effect only when a new service is being created.
public_access
(object). Allow access to selected service ports from the public Internet. See below for nested schema.
recovery_basebackup_name
(string, Pattern: ^[a-zA-Z0-9-_:.]+$, MaxLength: 128). Name of the basebackup to restore in forked service.
redis_acl_channels_default
(string, Enum: allchannels, resetchannels). Determines the default pub/sub channels' ACL for new users if an ACL is not supplied. When this option is not defined, all_channels is assumed to keep backward compatibility. This option doesn't affect the Redis configuration acl-pubsub-default.
redis_io_threads
(integer, Minimum: 1, Maximum: 32). Set Redis IO thread count. Changing this will cause a restart of the Redis service.
redis_lfu_decay_time
(integer, Minimum: 1, Maximum: 120). LFU maxmemory-policy counter decay time in minutes.
redis_lfu_log_factor
(integer, Minimum: 0, Maximum: 100). Counter logarithm factor for volatile-lfu and allkeys-lfu maxmemory-policies.
redis_maxmemory_policy
(string, Enum: noeviction, allkeys-lru, volatile-lru, allkeys-random, volatile-random, volatile-ttl, volatile-lfu, allkeys-lfu). Redis maxmemory-policy.
redis_notify_keyspace_events
(string, Pattern: ^[KEg\$lshzxeA]*$, MaxLength: 32). Set notify-keyspace-events option.
redis_number_of_databases
(integer, Minimum: 1, Maximum: 128). Set number of Redis databases. Changing this will cause a restart of the Redis service.
redis_persistence
(string, Enum: off, rdb). When persistence is rdb, Redis does RDB dumps every 10 minutes if any key is changed. RDB dumps are also done according to the backup schedule for backup purposes. When persistence is off, no RDB dumps or backups are done, so data can be lost at any moment if the service is restarted for any reason or powered off. The service also can't be forked.
redis_pubsub_client_output_buffer_limit
(integer, Minimum: 32, Maximum: 512). Set output buffer limit for pub/sub clients in MB. The value is the hard limit; the soft limit is 1/4 of the hard limit. When setting the limit, be mindful of the available memory in the selected service plan.
redis_ssl
(boolean). Require SSL to access Redis.
redis_timeout
(integer, Minimum: 0, Maximum: 31536000). Redis idle connection timeout in seconds.
service_to_fork_from
(string, Immutable, MaxLength: 64). Name of another service to fork from. This has effect only when a new service is being created.
static_ips
(boolean). Use static public IP addresses.
Appears on spec.userConfig.
Allow incoming connections from CIDR address block, e.g. 10.20.0.0/16.
Required
network
(string, MaxLength: 43). CIDR address block.
Optional
description
(string, MaxLength: 1024). Description for IP filter list entry.
Appears on spec.userConfig.
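For illustration, an ip_filter list under spec.userConfig could be sketched like this (the CIDR blocks and descriptions are placeholder values):

userConfig:
  ip_filter:
    # allow clients from the VPC-internal range
    - network: 10.20.0.0/16
      description: VPC-internal clients
    # allow a single external office network
    - network: 203.0.113.0/24
      description: office network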
Migrate data from existing server.
Required
host
(string, MaxLength: 255). Hostname or IP address of the server to migrate data from.
port
(integer, Minimum: 1, Maximum: 65535). Port number of the server to migrate data from.
Optional
dbname
(string, MaxLength: 63). Database name for bootstrapping the initial connection.
ignore_dbs
(string, MaxLength: 2048). Comma-separated list of databases to ignore during migration (supported by MySQL and PostgreSQL only at the moment).
method
(string, Enum: dump, replication). The migration method to be used (currently supported only by Redis, Dragonfly, MySQL and PostgreSQL service types).
password
(string, MaxLength: 256). Password for authentication with the server to migrate data from.
ssl
(boolean). Whether the server to migrate data from is secured with SSL.
username
(string, MaxLength: 256). User name for authentication with the server to migrate data from.
Appears on spec.userConfig.
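A hypothetical migration block using these fields might look like the following sketch (hostname, port, and credentials are placeholders):

userConfig:
  migration:
    host: my-old-redis.example.com
    port: 6379
    username: migration-user
    password: <migration-password>
    ssl: true
    method: replication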
Allow access to selected service ports from private networks.
Optional
prometheus
(boolean). Allow clients to connect to prometheus with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.
redis
(boolean). Allow clients to connect to redis with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.
Appears on spec.userConfig.
Allow access to selected service components through Privatelink.
Optional
prometheus
(boolean). Enable prometheus.
redis
(boolean). Enable redis.
Appears on spec.userConfig.
Allow access to selected service ports from the public Internet.
Optional
prometheus
(boolean). Allow clients to connect to prometheus from the public internet for service nodes that are in a project VPC or another type of private network.
redis
(boolean). Allow clients to connect to redis from the public internet for service nodes that are in a project VPC or another type of private network.

apiVersion: aiven.io/v1alpha1
kind: ServiceIntegration
metadata:
  name: my-service-integration
spec:
  authSecretRef:
    name: aiven-token
    key: token

  project: aiven-project-name

  integrationType: kafka_logs
  sourceServiceName: my-source-service-name
  destinationServiceName: my-destination-service-name

  kafkaLogs:
    kafka_topic: my-kafka-topic

ServiceIntegration is the Schema for the serviceintegrations API.
Required
apiVersion
(string). Value aiven.io/v1alpha1.
kind
(string). Value ServiceIntegration.
metadata
(object). Data that identifies the object, including a name string and optional namespace.
spec
(object). ServiceIntegrationSpec defines the desired state of ServiceIntegration. See below for nested schema.
Appears on ServiceIntegration.
ServiceIntegrationSpec defines the desired state of ServiceIntegration.
Required
integrationType
(string, Enum: alertmanager, autoscaler, caching, cassandra_cross_service_cluster, clickhouse_kafka, clickhouse_postgresql, dashboard, datadog, datasource, external_aws_cloudwatch_logs, external_aws_cloudwatch_metrics, external_elasticsearch_logs, external_google_cloud_logging, external_opensearch_logs, flink, flink_external_kafka, internal_connectivity, jolokia, kafka_connect, kafka_logs, kafka_mirrormaker, logs, m3aggregator, m3coordinator, metrics, opensearch_cross_cluster_replication, opensearch_cross_cluster_search, prometheus, read_replica, rsyslog, schema_registry_proxy, stresstester, thanosquery, thanosstore, vmalert, Immutable). Type of the service integration accepted by Aiven API. Some values may not be supported by the operator.
project
(string, Immutable, MaxLength: 63, Format: ^[a-zA-Z0-9_-]*$). Project the integration belongs to.
Optional
authSecretRef
(object). Authentication reference to Aiven token in a secret. See below for nested schema.
clickhouseKafka
(object). Clickhouse Kafka configuration values. See below for nested schema.
clickhousePostgresql
(object). Clickhouse PostgreSQL configuration values. See below for nested schema.
datadog
(object). Datadog specific user configuration options. See below for nested schema.
destinationEndpointId
(string, Immutable, MaxLength: 36). Destination endpoint for the integration (if any).
destinationProjectName
(string, Immutable, MaxLength: 63). Destination project for the integration (if any).
destinationServiceName
(string, Immutable, MaxLength: 64). Destination service for the integration (if any).
externalAWSCloudwatchMetrics
(object). External AWS CloudWatch Metrics integration Logs configuration values. See below for nested schema.
kafkaConnect
(object). Kafka Connect service configuration values. See below for nested schema.
kafkaLogs
(object). Kafka logs configuration values. See below for nested schema.
kafkaMirrormaker
(object). Kafka MirrorMaker configuration values. See below for nested schema.
logs
(object). Logs configuration values. See below for nested schema.
metrics
(object). Metrics configuration values. See below for nested schema.
sourceEndpointID
(string, Immutable, MaxLength: 36). Source endpoint for the integration (if any).
sourceProjectName
(string, Immutable, MaxLength: 63). Source project for the integration (if any).
sourceServiceName
(string, Immutable, MaxLength: 64). Source service for the integration (if any).
Appears on spec.
Authentication reference to Aiven token in a secret.
Required
key
(string, MinLength: 1).
name
(string, MinLength: 1).
Appears on spec.
Clickhouse Kafka configuration values.
Required
tables
(array of objects, MaxItems: 100). Tables to create. See below for nested schema.
Appears on spec.clickhouseKafka.
Tables to create.
Required
columns
(array of objects, MaxItems: 100). Table columns. See below for nested schema.
data_format
(string, Enum: Avro, CSV, JSONAsString, JSONCompactEachRow, JSONCompactStringsEachRow, JSONEachRow, JSONStringsEachRow, MsgPack, TSKV, TSV, TabSeparated, RawBLOB, AvroConfluent). Message data format.
group_name
(string, MinLength: 1, MaxLength: 249). Kafka consumers group.
name
(string, MinLength: 1, MaxLength: 40). Name of the table.
topics
(array of objects, MaxItems: 100). Kafka topics. See below for nested schema.
Optional
auto_offset_reset
(string, Enum: smallest, earliest, beginning, largest, latest, end). Action to take when there is no initial offset in the offset store or the desired offset is out of range.
date_time_input_format
(string, Enum: basic, best_effort, best_effort_us). Method to read DateTime from text input formats.
handle_error_mode
(string, Enum: default, stream). How to handle errors for the Kafka engine.
max_block_size
(integer, Minimum: 0, Maximum: 1000000000). Number of rows collected by poll(s) for flushing data from Kafka.
max_rows_per_message
(integer, Minimum: 1, Maximum: 1000000000). The maximum number of rows produced in one Kafka message for row-based formats.
num_consumers
(integer, Minimum: 1, Maximum: 10). The number of consumers per table per replica.
poll_max_batch_size
(integer, Minimum: 0, Maximum: 1000000000). Maximum amount of messages to be polled in a single Kafka poll.
skip_broken_messages
(integer, Minimum: 0, Maximum: 1000000000). Skip at least this number of broken messages from Kafka topic per block.
Appears on spec.clickhouseKafka.tables.
Table columns.
Required
name
(string, MinLength: 1, MaxLength: 40). Column name.
type
(string, MinLength: 1, MaxLength: 1000). Column type.
Appears on spec.clickhouseKafka.tables.
Kafka topics.
Required
name
(string, MinLength: 1, MaxLength: 249). Name of the topic.
Appears on spec.
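Tying the nested schemas together, a clickhouseKafka block on a ServiceIntegration spec might be sketched as follows (table, column, topic, and group names are illustrative):

clickhouseKafka:
  tables:
    - name: my_table
      group_name: my_consumer_group
      data_format: JSONEachRow
      columns:
        - name: id
          type: UInt64
        - name: payload
          type: String
      topics:
        - name: my-kafka-topic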
Clickhouse PostgreSQL configuration values.
Required
databases
(array of objects, MaxItems: 10). Databases to expose. See below for nested schema.
Appears on spec.clickhousePostgresql.
Databases to expose.
Optional
database
(string, MinLength: 1, MaxLength: 63). PostgreSQL database to expose.
schema
(string, MinLength: 1, MaxLength: 63). PostgreSQL schema to expose.
Appears on spec.
Datadog specific user configuration options.
Optional
datadog_dbm_enabled
(boolean). Enable Datadog Database Monitoring.
datadog_tags
(array of objects, MaxItems: 32). Custom tags provided by user. See below for nested schema.
exclude_consumer_groups
(array of strings, MaxItems: 1024). List of custom metrics.
exclude_topics
(array of strings, MaxItems: 1024). List of topics to exclude.
include_consumer_groups
(array of strings, MaxItems: 1024). List of custom metrics.
include_topics
(array of strings, MaxItems: 1024). List of topics to include.
kafka_custom_metrics
(array of strings, MaxItems: 1024). List of custom metrics.
max_jmx_metrics
(integer, Minimum: 10, Maximum: 100000). Maximum number of JMX metrics to send.
opensearch
(object). Datadog Opensearch Options. See below for nested schema.
redis
(object). Datadog Redis Options. See below for nested schema.
Appears on spec.datadog.
Custom tags provided by user.
Required
tag
(string, MinLength: 1, MaxLength: 200). Tag format and usage are described here: https://docs.datadoghq.com/getting_started/tagging. Tags with the prefix aiven- are reserved for Aiven.
Optional
comment
(string, MaxLength: 1024). Optional tag explanation.
Appears on spec.datadog.
Datadog Opensearch Options.
Optional
index_stats_enabled
(boolean). Enable Datadog Opensearch Index Monitoring.
pending_task_stats_enabled
(boolean). Enable Datadog Opensearch Pending Task Monitoring.
pshard_stats_enabled
(boolean). Enable Datadog Opensearch Primary Shard Monitoring.
Appears on spec.datadog.
Datadog Redis Options.
Required
command_stats_enabled
(boolean). Enable the command_stats option in the agent's configuration.
Appears on spec.
External AWS CloudWatch Metrics integration Logs configuration values.
Optional
dropped_metrics
(array of objects, MaxItems: 1024). Metrics to not send to AWS CloudWatch (takes precedence over extra_metrics). See below for nested schema.
extra_metrics
(array of objects, MaxItems: 1024). Metrics to allow through to AWS CloudWatch (in addition to default metrics). See below for nested schema.
Appears on spec.externalAWSCloudwatchMetrics.
Metrics to not send to AWS CloudWatch (takes precedence over extra_metrics).
Required
field
(string, MaxLength: 1000). Identifier of a value in the metric.
metric
(string, MaxLength: 1000). Identifier of the metric.
Appears on spec.externalAWSCloudwatchMetrics.
Metrics to allow through to AWS CloudWatch (in addition to default metrics).
Required
field
(string, MaxLength: 1000). Identifier of a value in the metric.
metric
(string, MaxLength: 1000). Identifier of the metric.
Appears on spec.
Kafka Connect service configuration values.
Required
kafka_connect
(object). Kafka Connect service configuration values. See below for nested schema.
Appears on spec.kafkaConnect.
Kafka Connect service configuration values.
Optional
config_storage_topic
(string, MaxLength: 249). The name of the topic where connector and task configuration data are stored. This must be the same for all workers with the same group_id.
group_id
(string, MaxLength: 249). A unique string that identifies the Connect cluster group this worker belongs to.
offset_storage_topic
(string, MaxLength: 249). The name of the topic where connector and task configuration offsets are stored. This must be the same for all workers with the same group_id.
status_storage_topic
(string, MaxLength: 249). The name of the topic where connector and task configuration status updates are stored. This must be the same for all workers with the same group_id.
Appears on spec.
Kafka logs configuration values.
Required
kafka_topic
(string, MinLength: 1, MaxLength: 249). Topic name.
Optional
selected_log_fields
(array of strings, MaxItems: 5). The list of logging fields that will be sent to the integration logging service. The MESSAGE and timestamp fields are always sent.
Appears on spec.
Kafka MirrorMaker configuration values.
Optional
cluster_alias
(string, Pattern: ^[a-zA-Z0-9_.-]+$, MaxLength: 128). The alias under which the Kafka cluster is known to MirrorMaker. Can contain the following symbols: ASCII alphanumerics, ., _, and -.
kafka_mirrormaker
(object). Kafka MirrorMaker configuration values. See below for nested schema.
Appears on spec.kafkaMirrormaker.
Kafka MirrorMaker configuration values.
Optional
consumer_fetch_min_bytes
(integer, Minimum: 1, Maximum: 5242880). The minimum amount of data the server should return for a fetch request.
producer_batch_size
(integer, Minimum: 0, Maximum: 5242880). The batch size in bytes the producer will attempt to collect before publishing to the broker.
producer_buffer_memory
(integer, Minimum: 5242880, Maximum: 134217728). The amount of bytes the producer can use for buffering data before publishing to the broker.
producer_compression_type
(string, Enum: gzip, snappy, lz4, zstd, none). Specify the default compression type for producers. This configuration accepts the standard compression codecs (gzip, snappy, lz4, zstd). It additionally accepts none, which is the default and equivalent to no compression.
producer_linger_ms
(integer, Minimum: 0, Maximum: 5000). The linger time (ms) to wait for new data to arrive before publishing.
producer_max_request_size
(integer, Minimum: 0, Maximum: 268435456). The maximum request size in bytes.
Appears on spec.
Logs configuration values.
Optional
elasticsearch_index_days_max
(integer, Minimum: 1, Maximum: 10000). Elasticsearch index retention limit.
elasticsearch_index_prefix
(string, MinLength: 1, MaxLength: 1024). Elasticsearch index prefix.
selected_log_fields
(array of strings, MaxItems: 5). The list of logging fields that will be sent to the integration logging service. The MESSAGE and timestamp fields are always sent.
Appears on spec.
Metrics configuration values.
Optional
database
(string, Pattern: ^[_A-Za-z0-9][-_A-Za-z0-9]{0,39}$, MaxLength: 40). Name of the database where to store metric datapoints. Only affects PostgreSQL destinations. Defaults to metrics. Note that this must be the same for all metrics integrations that write data to the same PostgreSQL service.
retention_days
(integer, Minimum: 0, Maximum: 10000). Number of days to keep old metrics. Only affects PostgreSQL destinations. Set to 0 for no automatic cleanup. Defaults to 30 days.
ro_username
(string, Pattern: ^[_A-Za-z0-9][-._A-Za-z0-9]{0,39}$, MaxLength: 40). Name of a user that can be used to read metrics. This will be used for the Grafana integration (if enabled) to prevent Grafana users from making undesired changes. Only affects PostgreSQL destinations. Defaults to metrics_reader. Note that this must be the same for all metrics integrations that write data to the same PostgreSQL service.
source_mysql
(object). Configuration options for metrics where the source service is MySQL. See below for nested schema.
username
(string, Pattern: ^[_A-Za-z0-9][-._A-Za-z0-9]{0,39}$, MaxLength: 40). Name of the user used to write metrics. Only affects PostgreSQL destinations. Defaults to metrics_writer. Note that this must be the same for all metrics integrations that write data to the same PostgreSQL service.
Appears on spec.metrics.
Configuration options for metrics where the source service is MySQL.
Required
telegraf
(object). Configuration options for the Telegraf MySQL input plugin. See below for nested schema.
Appears on spec.metrics.source_mysql.
Configuration options for the Telegraf MySQL input plugin.
Optional
gather_event_waits
(boolean). Gather metrics from PERFORMANCE_SCHEMA.EVENT_WAITS.
gather_file_events_stats
(boolean). Gather metrics from PERFORMANCE_SCHEMA.FILE_SUMMARY_BY_EVENT_NAME.
gather_index_io_waits
(boolean). Gather metrics from PERFORMANCE_SCHEMA.TABLE_IO_WAITS_SUMMARY_BY_INDEX_USAGE.
gather_info_schema_auto_inc
(boolean). Gather auto_increment columns and max values from information schema.
gather_innodb_metrics
(boolean). Gather metrics from INFORMATION_SCHEMA.INNODB_METRICS.
gather_perf_events_statements
(boolean). Gather metrics from PERFORMANCE_SCHEMA.EVENTS_STATEMENTS_SUMMARY_BY_DIGEST.
gather_process_list
(boolean). Gather thread state counts from INFORMATION_SCHEMA.PROCESSLIST.
gather_slave_status
(boolean). Gather metrics from SHOW SLAVE STATUS command output.
gather_table_io_waits
(boolean). Gather metrics from PERFORMANCE_SCHEMA.TABLE_IO_WAITS_SUMMARY_BY_TABLE.
gather_table_lock_waits
(boolean). Gather metrics from PERFORMANCE_SCHEMA.TABLE_LOCK_WAITS.
gather_table_schema
(boolean). Gather metrics from INFORMATION_SCHEMA.TABLES.
perf_events_statements_digest_text_limit
(integer, Minimum: 1, Maximum: 2048). Truncates digest text from perf_events_statements into this many characters.
perf_events_statements_limit
(integer, Minimum: 1, Maximum: 4000). Limits metrics from perf_events_statements.
perf_events_statements_time_limit
(integer, Minimum: 1, Maximum: 2592000). Only include perf_events_statements whose last seen is less than this many seconds ago.

apiVersion: aiven.io/v1alpha1
kind: ServiceUser
metadata:
  name: my-service-user
spec:
  authSecretRef:
    name: aiven-token
    key: token

  connInfoSecretTarget:
    name: service-user-secret
    prefix: MY_SECRET_PREFIX_
    annotations:
      foo: bar
    labels:
      baz: egg

  project: aiven-project-name
  serviceName: my-service-name

ServiceUser is the Schema for the serviceusers API.
Required
apiVersion
(string). Value aiven.io/v1alpha1.
kind
(string). Value ServiceUser.
metadata
(object). Data that identifies the object, including a name string and optional namespace.
spec
(object). ServiceUserSpec defines the desired state of ServiceUser. See below for nested schema.
Appears on ServiceUser.
ServiceUserSpec defines the desired state of ServiceUser.
Required
project
(string, MaxLength: 63, Format: ^[a-zA-Z0-9_-]*$). Project to link the user to.
serviceName
(string, MaxLength: 63). Service to link the user to.
Optional
authSecretRef
(object). Authentication reference to Aiven token in a secret. See below for nested schema.
authentication
(string, Enum: caching_sha2_password, mysql_native_password). Authentication details.
connInfoSecretTarget
(object). Information regarding secret creation. Exposed keys: SERVICEUSER_HOST, SERVICEUSER_PORT, SERVICEUSER_USERNAME, SERVICEUSER_PASSWORD, SERVICEUSER_CA_CERT, SERVICEUSER_ACCESS_CERT, SERVICEUSER_ACCESS_KEY. See below for nested schema.
Appears on spec.
Authentication reference to Aiven token in a secret.
Required
key
(string, MinLength: 1).
name
(string, MinLength: 1).
Appears on spec.
Information regarding secret creation. Exposed keys: SERVICEUSER_HOST, SERVICEUSER_PORT, SERVICEUSER_USERNAME, SERVICEUSER_PASSWORD, SERVICEUSER_CA_CERT, SERVICEUSER_ACCESS_CERT, SERVICEUSER_ACCESS_KEY.
Required
name
(string). Name of the secret resource to be created. By default, is equal to the resource name.
Optional
annotations
(object, AdditionalProperties: string). Annotations added to the secret.
labels
(object, AdditionalProperties: string). Labels added to the secret.
prefix
(string). Prefix for the secret's keys. Added "as is" without any transformations. By default, is equal to the kind name in uppercase + underscore, e.g. KAFKA_, REDIS_, etc.
The Aiven Operator for Kubernetes project accepts contributions via GitHub pull requests. This document outlines the process to help get your contribution accepted.
Please see also the Aiven Operator for Kubernetes Developer Guide.
Support Channels
This project offers support through GitHub issues, which can be filed here. Moreover, GitHub issues are used as the primary method for tracking anything to do with the Aiven Operator for Kubernetes project.
In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to make participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation.
Our Standards
Examples of behavior that contributes to creating a positive environment include:
Examples of unacceptable behavior by participants include:
This project adheres to the Conventional Commits specification. Please make sure that your commit messages follow that specification.
Our Responsibilities
Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.
Scope
This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.
Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at opensource@aiven.io. All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.
Developer guide
Getting Started
You must have a working Go environment. Then clone the repository:

git clone git@github.com:aiven/aiven-operator.git
cd aiven-operator

Resource generation
Please see this page for more information.
Building
The project uses the make build system.
Building the operator binary:

make build

Testing
As of now, we only support integration tests that interact directly with Aiven. To run the tests, you'll need an Aiven account and an Aiven authentication token.
Prerequisites
Please have installed first:
base64 with support for the -w0 flag; without it, some tests may not work properly
Create a kind cluster:

kind create cluster --image kindest/node:v1.24.0 --wait 5m
The following commands must be executed with these environment variables set (keep them secret!):
AIVEN_TOKEN: your authentication token
AIVEN_PROJECT_NAME: your Aiven project name to run services in
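For example, you can export them in your shell before running the make targets below (the values here are placeholders):

export AIVEN_TOKEN=<your-aiven-token>
export AIVEN_PROJECT_NAME=<your-project-name>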
Setup everything:

make e2e-setup-kind
Note
Additionally, webhooks can be disabled if there are any problems with them.
WEBHOOKS_ENABLED=false make e2e-setup-kind
Run e2e tests (creates real services in AIVEN_PROJECT_NAME):

make test-e2e-preinstalled
When you're done, just drop the cluster:
kind delete cluster

Documentation
The documentation is written in markdown and generated by mkdocs and mkdocs-material.
To run the documentation live preview:

make serve-docs

Then open the http://localhost:8000/aiven-operator/ page in your web browser.
The documentation API Reference section is generated automatically from the source code during the documentation deployment. To generate it locally, run the following command:
make docs

Resource generation
Aiven Kubernetes Operator generates service config code (also known as user configs) and documentation from the public service types schema.
Flow overview
When a new schema is issued on the API, a cron job fetches it, parses, patches, and saves it in a shared library: go-api-schemas.
When the library is updated, GitHub dependabot creates PRs to the dependent repositories, like Aiven Kubernetes Operator and Aiven Terraform Provider.
Then the make generate command is called by a GitHub action, and the PR is ready for review.

flowchart TB
    API(Aiven API) <-.->|polls schema updates| Schema([go-api-schemas])
    Bot(dependabot) <-.->|polls updates| Schema
    Bot-->|pull request|UpdateOP[/"✨ $ make generate ✨"/]
    UpdateOP-->|review| OP([operator repository])

make generate
The command runs several generators in a certain sequence: first the user config generator, then the controller-gen CLI, then the API reference docs generator and the charts generator.
Here is how it goes in detail: the docs generator looks for an example file in ./<api-reference-docs>/example/; if it finds one, it validates it against the CRD. Each CRD has an OpenAPI v3 schema as a part of it, which is also used by Kubernetes itself to validate user input.
"},{"location":"contributing/resource-generation.html#charts-version-bump","title":"Charts version bump","text":"By default, charts generator keeps the current helm chart's version, because it doesn't know semver. You need it to do manually.
To do so run the following command with the version of your choice:
make version=v1.0.0 charts\n
"},{"location":"installation/helm.html","title":"Installing with Helm (recommended)","text":""},{"location":"installation/helm.html#installing","title":"Installing","text":"The Aiven Operator for Kubernetes can be installed via Helm.
Before you start, make sure you have the prerequisites.
First add the Aiven Helm repository:
helm repo add aiven https://aiven.github.io/aiven-charts && helm repo update\n
"},{"location":"installation/helm.html#installing-custom-resource-definitions","title":"Installing Custom Resource Definitions","text":"helm install aiven-operator-crds aiven/aiven-operator-crds\n
Verify the installation:
kubectl api-resources --api-group=aiven.io\n
The output is similar to the following: NAME SHORTNAMES APIVERSION NAMESPACED KIND\nconnectionpools aiven.io/v1alpha1 true ConnectionPool\ndatabases aiven.io/v1alpha1 true Database\n... < several omitted lines >\n
"},{"location":"installation/helm.html#installing-the-operator","title":"Installing the Operator","text":"helm install aiven-operator aiven/aiven-operator\n
Note
Installation will fail if webhooks are enabled and the CRDs for the cert-manager are not installed.
Verify the installation:
helm status aiven-operator\n
The output is similar to the following:
NAME: aiven-operator\nLAST DEPLOYED: Fri Sep 10 15:23:26 2021\nNAMESPACE: default\nSTATUS: deployed\nREVISION: 1\nTEST SUITE: None\n
It is also possible to install the operator without webhooks enabled:
helm install aiven-operator aiven/aiven-operator --set webhooks.enabled=false\n
"},{"location":"installation/helm.html#configuration-options","title":"Configuration Options","text":"Please refer to the values.yaml of the chart.
"},{"location":"installation/helm.html#installing-without-full-cluster-administrator-access","title":"Installing without full cluster administrator access","text":"There can be some scenarios where the individual installing the Helm chart does not have the ability to provision cluster-wide resources (e.g. ClusterRoles/ClusterRoleBindings). In this scenario, you can have a cluster administrator manually install the ClusterRole and ClusterRoleBinding the operator requires prior to installing the Helm chart specifying false
for the clusterRole.create
attribute.
Important
Please see this page for more information.
Find out the name of your deployment:
helm list\n
The output has the name of each deployment similar to the following:
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION\naiven-operator default 1 2021-09-09 10:56:14.623700249 +0200 CEST deployed aiven-operator-v0.1.0 v0.1.0 \naiven-operator-crds default 1 2021-09-09 10:56:05.736411868 +0200 CEST deployed aiven-operator-crds-v0.1.0 v0.1.0\n
Remove the CRDs:
helm uninstall aiven-operator-crds\n
The confirmation message is similar to the following:
release \"aiven-operator-crds\" uninstalled\n
Remove the operator:
helm uninstall aiven-operator\n
The confirmation message is similar to the following:
release \"aiven-operator\" uninstalled\n
"},{"location":"installation/kubectl.html","title":"Installing with kubectl","text":""},{"location":"installation/kubectl.html#installing","title":"Installing","text":"Before you start, make sure you have the prerequisites.
All Aiven Operator for Kubernetes components can be installed from one YAML file that is uploaded for every release:
kubectl apply -f https://github.com/aiven/aiven-operator/releases/latest/download/deployment.yaml\n
By default the Deployment is installed into the aiven-operator-system
namespace.
Assuming you installed version vX.Y.Z
of the operator it can be uninstalled via
kubectl delete -f https://github.com/aiven/aiven-operator/releases/download/vX.Y.Z/deployment.yaml\n
"},{"location":"installation/prerequisites.html","title":"Prerequisites","text":"The Aiven Operator for Kubernetes supports all major Kubernetes distributions, both locally and in the cloud.
Make sure you have the following:
The Aiven Operator for Kubernetes uses cert-manager
to configure the service reference of our webhooks.
Please follow the installation instructions on their website.
Note
This is not required in the Helm installation if you select to disable webhooks, but that is not recommended outside of playground use. The Aiven Operator for Kubernetes uses webhooks for setting defaults and enforcing invariants that are expected by the aiven API and will lead to errors if ignored. In the future webhooks will also be used for conversion and supporting multiple CRD versions.
"},{"location":"installation/uninstalling.html","title":"Uninstalling","text":"Danger
Uninstalling the Aiven Operator for Kubernetes can remove the resources created in Aiven, possibly resulting in data loss.
Depending on your installation, please follow one of:
Aiven resources need to have an accompanying secret that contains the token that is used to authorize the manipulation of that resource. If that token expired then you will not be able to delete the custom resource and deletion will also hang until the situation is resolved. The recommended approach to deal with that situation is to patch a valid token into the secret again so that proper cleanup of aiven resources can take place.
"},{"location":"installation/uninstalling.html#hanging-deletions","title":"Hanging deletions","text":"To protect the secrets that the operator is using from deletion, it adds the finalizer finalizers.aiven.io/needed-to-delete-services
to the secret. This solves a race condition that happens when deleting a namespace, where there is a possibility of the secret getting deleted before the resource that uses it. When the controller is deleted it may not cleanup the finalizers from all secrets. If there is a secret with this finalizer blocking deletion of a namespace, for now please do
kubectl patch secret <offending-secret> -p '{\"metadata\":{\"finalizers\":null}}' --type=merge\n
to remove the finalizer.
"},{"location":"resources/cassandra.html","title":"Cassandra","text":"Aiven for Apache Cassandra\u00ae is a distributed database designed to handle large volumes of writes.
Note
Before going through this guide, make sure you have a Kubernetes cluster with the operator installed and a Kubernetes Secret with an Aiven authentication token.
"},{"location":"resources/cassandra.html#creating-a-cassandra-instance","title":"Creating a Cassandra instance","text":"1. Create a file named cassandra-sample.yaml
, and add the following content:
apiVersion: aiven.io/v1alpha1\nkind: Cassandra\nmetadata:\n name: cassandra-sample\nspec:\n # gets the authentication token from the `aiven-token` Secret\n authSecretRef:\n name: aiven-token\n key: token\n\n # outputs the Cassandra connection on the `cassandra-secret` Secret\n connInfoSecretTarget:\n name: cassandra-secret\n\n # add your Project name here\n project: <your-project-name>\n\n # cloud provider and plan of your choice\n # you can check all of the possibilities here https://aiven.io/pricing\n cloudName: google-europe-west1\n plan: startup-4\n\n # general Aiven configuration\n maintenanceWindowDow: friday\n maintenanceWindowTime: 23:00:00\n
2. Create the service by applying the configuration:
kubectl apply -f cassandra-sample.yaml \n
The output is:
cassandra.aiven.io/cassandra-sample created\n
3. Review the resource you created with this command:
kubectl describe cassandra.aiven.io cassandra-sample\n
The output is similar to the following:
...\nStatus:\n Conditions:\n Last Transition Time: 2023-01-31T10:17:25Z\n Message: Instance was created or update on Aiven side\n Reason: Created\n Status: True\n Type: Initialized\n Last Transition Time: 2023-01-31T10:24:00Z\n Message: Instance is running on Aiven side\n Reason: CheckRunning\n Status: True\n Type: Running\n State: RUNNING\n...\n
The resource can be in the REBUILDING
state for a few minutes. Once the state changes to RUNNING
, you can access the resource.
For your convenience, the operator automatically stores the Cassandra connection information in a Secret created with the name specified on the connInfoSecretTarget
field.
To view the details of the Secret, use the following command:
kubectl describe secret cassandra-secret \n
The output is similar to the following:
Name: cassandra-secret\nNamespace: default\nLabels: <none>\nAnnotations: <none>\n\nType: Opaque\n\nData\n====\nCASSANDRA_HOSTS: 59 bytes\nCASSANDRA_PASSWORD: 24 bytes\nCASSANDRA_PORT: 5 bytes\nCASSANDRA_URI: 66 bytes\nCASSANDRA_USER: 8 bytes\nCASSANDRA_HOST: 60 bytes\n
You can use the jq to quickly decode the Secret:
kubectl get secret cassandra-secret -o json | jq '.data | map_values(@base64d)'\n
The output is similar to the following:
{\n \"CASSANDRA_HOST\": \"<secret>\",\n \"CASSANDRA_HOSTS\": \"<secret>\",\n \"CASSANDRA_PASSWORD\": \"<secret>\",\n \"CASSANDRA_PORT\": \"14609\",\n \"CASSANDRA_URI\": \"<secret>\",\n \"CASSANDRA_USER\": \"avnadmin\"\n}\n
"},{"location":"resources/cassandra.html#creating-a-cassandra-user","title":"Creating a Cassandra user","text":"You can create service users for your instance of Aiven for Apache Cassandra. Service users are unique to this instance and are not shared with any other services.
1. Create a file named cassandra-service-user.yaml:
apiVersion: aiven.io/v1alpha1\nkind: ServiceUser\nmetadata:\n name: cassandra-service-user\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n connInfoSecretTarget:\n name: cassandra-service-user-secret\n\n project: <your-project-name>\n serviceName: cassandra-sample\n
2. Create the user by applying the configuration:
kubectl apply -f cassandra-service-user.yaml\n
The ServiceUser
resource generates a Secret with connection information.
3. View the details of the Secret using the following command:
kubectl get secret cassandra-service-user-secret -o json | jq '.data | map_values(@base64d)'\n
The output is similar to the following:
{\n \"ACCESS_CERT\": \"<secret>\",\n \"ACCESS_KEY\": \"<secret>\",\n \"CA_CERT\": \"<secret>\",\n \"HOST\": \"<secret>\",\n \"PASSWORD\": \"<secret>\",\n \"PORT\": \"14609\",\n \"USERNAME\": \"cassandra-service-user\"\n}\n
You can connect to the Cassandra instance using these credentials and the host information from the cassandra-secret
Secret.
Aiven for MySQL is a fully managed relational database service, deployable in the cloud of your choice.
Before going through this guide, make sure you have a Kubernetes cluster with the operator installed and a Kubernetes Secret with an Aiven authentication token.
"},{"location":"resources/mysql.html#creating-a-mysql-instance","title":"Creating a MySQL instance","text":"1. Create a file named mysql-sample.yaml
, and add the following content:
apiVersion: aiven.io/v1alpha1\nkind: MySQL\nmetadata:\n name: mysql-sample\nspec:\n # gets the authentication token from the `aiven-token` Secret\n authSecretRef:\n name: aiven-token\n key: token\n\n # outputs the MySQL connection on the `mysql-secret` Secret\n connInfoSecretTarget:\n name: mysql-secret\n\n # add your Project name here\n project: <your-project-name>\n\n # cloud provider and plan of your choice\n # you can check all of the possibilities here https://aiven.io/pricing\n cloudName: google-europe-west1\n plan: startup-4\n\n # general Aiven configuration\n maintenanceWindowDow: friday\n maintenanceWindowTime: 23:00:00\n
2. Create the service by applying the configuration:
kubectl apply -f mysql-sample.yaml \n
3. Review the resource you created with this command:
kubectl describe mysql.aiven.io mysql-sample\n
The output is similar to the following:
...\nStatus:\n Conditions:\n Last Transition Time: 2023-02-22T15:43:44Z\n Message: Instance was created or update on Aiven side\n Reason: Created\n Status: True\n Type: Initialized\n Last Transition Time: 2023-02-22T15:43:44Z\n Message: Instance was created or update on Aiven side, status remains unknown\n Reason: Created\n Status: Unknown\n Type: Running\n State: REBUILDING\n...\n
The resource will be in the REBUILDING
state for a few minutes. Once the state changes to RUNNING
, you can access the resource.
For your convenience, the operator automatically stores the MySQL connection information in a Secret created with the name specified on the connInfoSecretTarget
field.
To view the details of the Secret, use the following command:
kubectl describe secret mysql-secret \n
The output is similar to the following:
Name: mysql-secret\nNamespace: default\nLabels: <none>\nAnnotations: <none>\n\nType: Opaque\n\nData\n====\nMYSQL_PORT: 5 bytes\nMYSQL_SSL_MODE: 8 bytes\nMYSQL_URI: 115 bytes\nMYSQL_USER: 8 bytes\nMYSQL_DATABASE: 9 bytes\nMYSQL_HOST: 39 bytes\nMYSQL_PASSWORD: 24 bytes\n
You can use jq to quickly decode the Secret:
kubectl get secret mysql-secret -o json | jq '.data | map_values(@base64d)'\n
The output is similar to the following:
{\n \"MYSQL_DATABASE\": \"defaultdb\",\n \"MYSQL_HOST\": \"<secret>\",\n \"MYSQL_PASSWORD\": \"<secret>\",\n \"MYSQL_PORT\": \"12691\",\n \"MYSQL_SSL_MODE\": \"REQUIRED\",\n \"MYSQL_URI\": \"<secret>\",\n \"MYSQL_USER\": \"avnadmin\"\n}\n
"},{"location":"resources/mysql.html#creating-a-mysql-user","title":"Creating a MySQL user","text":"You can create service users for your instance of Aiven for MySQL. Service users are unique to this instance and are not shared with any other services.
1. Create a file named mysql-service-user.yaml:
apiVersion: aiven.io/v1alpha1\nkind: ServiceUser\nmetadata:\n name: mysql-service-user\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n connInfoSecretTarget:\n name: mysql-service-user-secret\n\n project: <your-project-name>\n serviceName: mysql-sample\n
2. Create the user by applying the configuration:
kubectl apply -f mysql-service-user.yaml\n
The ServiceUser
resource generates a Secret with connection information.
3. View the details of the Secret using jq:
kubectl get secret mysql-service-user-secret -o json | jq '.data | map_values(@base64d)'\n
The output is similar to the following:
{\n \"ACCESS_CERT\": \"<secret>\",\n \"ACCESS_KEY\": \"<secret>\",\n \"CA_CERT\": \"<secret>\",\n \"HOST\": \"<secret>\",\n \"PASSWORD\": \"<secret>\",\n \"PORT\": \"14609\",\n \"USERNAME\": \"mysql-service-user\"\n}\n
You can connect to the MySQL instance using these credentials and the host information from the mysql-secret
Secret.
OpenSearch\u00ae is an open source search and analytics suite including search engine, NoSQL document database, and visualization interface. OpenSearch offers a distributed, full-text search engine based on Apache Lucene\u00ae with a RESTful API interface and support for JSON documents.
Note
Before going through this guide, make sure you have a Kubernetes cluster with the operator installed and a Kubernetes Secret with an Aiven authentication token.
"},{"location":"resources/opensearch.html#creating-an-opensearch-instance","title":"Creating an OpenSearch instance","text":"1. Create a file named os-sample.yaml
, and add the following content:
apiVersion: aiven.io/v1alpha1\nkind: OpenSearch\nmetadata:\n name: os-sample\nspec:\n # gets the authentication token from the `aiven-token` Secret\n authSecretRef:\n name: aiven-token\n key: token\n\n # outputs the OpenSearch connection on the `os-secret` Secret\n connInfoSecretTarget:\n name: os-secret\n\n # add your Project name here\n project: <your-project-name>\n\n # cloud provider and plan of your choice\n # you can check all of the possibilities here https://aiven.io/pricing\n cloudName: google-europe-west1\n plan: startup-4\n\n # general Aiven configuration\n maintenanceWindowDow: friday\n maintenanceWindowTime: 23:00:00\n
2. Create the service by applying the configuration:
kubectl apply -f os-sample.yaml \n
3. Review the resource you created with this command:
kubectl describe opensearch.aiven.io os-sample\n
The output is similar to the following:
...\nStatus:\n Conditions:\n Last Transition Time: 2023-01-19T14:41:43Z\n Message: Instance was created or update on Aiven side\n Reason: Created\n Status: True\n Type: Initialized\n Last Transition Time: 2023-01-19T14:41:43Z\n Message: Instance was created or update on Aiven side, status remains unknown\n Reason: Created\n Status: Unknown\n Type: Running\n State: REBUILDING\n...\n
The resource will be in the REBUILDING
state for a few minutes. Once the state changes to RUNNING
, you can access the resource.
For your convenience, the operator automatically stores the OpenSearch connection information in a Secret created with the name specified on the connInfoSecretTarget
field.
To view the details of the Secret, use the following command:
kubectl describe secret os-secret \n
The output is similar to the following:
Name: os-secret\nNamespace: default\nLabels: <none>\nAnnotations: <none>\n\nType: Opaque\n\nData\n====\nHOST: 61 bytes\nPASSWORD: 24 bytes\nPORT: 5 bytes\nUSER: 8 bytes\n
You can use the jq to quickly decode the Secret:
kubectl get secret os-secret -o json | jq '.data | map_values(@base64d)'\n
The output is similar to the following:
{\n \"HOST\": \"os-sample-your-project.aivencloud.com\",\n \"PASSWORD\": \"<secret>\",\n \"PORT\": \"13041\",\n \"USER\": \"avnadmin\"\n}\n
"},{"location":"resources/opensearch.html#creating-an-opensearch-user","title":"Creating an OpenSearch user","text":"You can create service users for your instance of Aiven for OpenSearch. Service users are unique to this instance and are not shared with any other services.
1. Create a file named os-service-user.yaml:
apiVersion: aiven.io/v1alpha1\nkind: ServiceUser\nmetadata:\n name: os-service-user\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n connInfoSecretTarget:\n name: os-service-user-secret\n\n project: <your-project-name>\n serviceName: os-sample\n
2. Create the user by applying the configuration:
kubectl apply -f os-service-user.yaml\n
The ServiceUser
resource generates a Secret with connection information.
3. View the details of the Secret using the following command:
kubectl get secret os-service-user-secret -o json | jq '.data | map_values(@base64d)'\n
The output is similar to the following:
{\n \"ACCESS_CERT\": \"<secret>\",\n \"ACCESS_KEY\": \"<secret>\",\n \"CA_CERT\": \"<secret>\",\n \"HOST\": \"os-sample-your-project.aivencloud.com\",\n \"PASSWORD\": \"<secret>\",\n \"PORT\": \"14609\",\n \"USERNAME\": \"os-service-user\"\n}\n
You can connect to the OpenSearch instance using these credentials and the host information from the os-secret
Secret.
PostgreSQL is an open source, relational database. It's ideal for organisations that need a well organised tabular datastore. On top of the strict table and columns formats, PostgreSQL also offers solutions for nested datasets with the native jsonb
format and advanced set of extensions including PostGIS, a spatial database extender for location queries. Aiven for PostgreSQL is the perfect fit for your relational data.
With Aiven Kubernetes Operator, you can manage Aiven for PostgreSQL through the well defined Kubernetes API.
Note
Before going through this guide, make sure you have a Kubernetes cluster with the operator installed, and a Kubernetes Secret with an Aiven authentication token.
"},{"location":"resources/postgresql.html#creating-a-postgresql-instance","title":"Creating a PostgreSQL instance","text":"1. Create a file named pg-sample.yaml
with the following content:
apiVersion: aiven.io/v1alpha1\nkind: PostgreSQL\nmetadata:\n name: pg-sample\nspec:\n\n # gets the authentication token from the `aiven-token` Secret\n authSecretRef:\n name: aiven-token\n key: token\n\n # outputs the PostgreSQL connection on the `pg-connection` Secret\n connInfoSecretTarget:\n name: pg-connection\n\n # add your Project name here\n project: <your-project-name>\n\n # cloud provider and plan of your choice\n # you can check all of the possibilities here https://aiven.io/pricing\n cloudName: google-europe-west1\n plan: startup-4\n\n # general Aiven configuration\n maintenanceWindowDow: friday\n maintenanceWindowTime: 23:00:00\n\n # specific PostgreSQL configuration\n userConfig:\n pg_version: '11'\n
2. Create the service by applying the configuration:
kubectl apply -f pg-sample.yaml\n
3. Review the resource you created with the following command:
kubectl get postgresqls.aiven.io pg-sample\n
The output is similar to the following:
NAME PROJECT REGION PLAN STATE\npg-sample your-project google-europe-west1 startup-4 RUNNING\n
The resource can stay in the BUILDING
state for a couple of minutes. Once the state changes to RUNNING
, you are ready to access it.
For your convenience, the operator automatically stores the PostgreSQL connection information in a Secret created with the name specified on the connInfoSecretTarget
field.
kubectl describe secret pg-connection\n
The output is similar to the following:
Name: pg-connection\nNamespace: default\nAnnotations: <none>\n\nType: Opaque\n\nData\n====\nDATABASE_URI: 107 bytes\nPGDATABASE: 9 bytes\nPGHOST: 38 bytes\nPGPASSWORD: 16 bytes\nPGPORT: 5 bytes\nPGSSLMODE: 7 bytes\nPGUSER: 8 bytes\n
You can use jq to quickly decode the Secret:
kubectl get secret pg-connection -o json | jq '.data | map_values(@base64d)'\n
The output is similar to the following:
{\n \"DATABASE_URI\": \"postgres://avnadmin:<secret-password>@pg-sample-your-project.aivencloud.com:13039/defaultdb?sslmode=require\",\n \"PGDATABASE\": \"defaultdb\",\n \"PGHOST\": \"pg-sample-your-project.aivencloud.com\",\n \"PGPASSWORD\": \"<secret-password>\",\n \"PGPORT\": \"13039\",\n \"PGSSLMODE\": \"require\",\n \"PGUSER\": \"avnadmin\"\n}\n
"},{"location":"resources/postgresql.html#testing-the-connection","title":"Testing the connection","text":"You can verify your PostgreSQL connection from a Kubernetes workload by deploying a Pod that runs the psql
command.
1. Create a file named pod-psql.yaml with the following content:
apiVersion: v1\nkind: Pod\nmetadata:\n name: psql-test-connection\nspec:\n restartPolicy: Never\n containers:\n - image: postgres:11-alpine\n name: postgres\n command: [ 'psql', '$(DATABASE_URI)', '-c', 'SELECT version();' ]\n\n # the pg-connection Secret becomes environment variables \n envFrom:\n - secretRef:\n name: pg-connection\n
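2. Apply the manifest:
kubectl apply -f pod-psql.yaml\n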
The Pod runs once and stops, due to the restartPolicy: Never
setting.
3. Inspect the log:
kubectl logs psql-test-connection\n
The output is similar to the following:
version \n---------------------------------------------------------------------------------------------\n PostgreSQL 11.12 on x86_64-pc-linux-gnu, compiled by gcc, a 68c5366192 p 6b9244f01a, 64-bit\n(1 row)\n
You have now connected to the PostgreSQL instance, and executed the SELECT version();
query.
The Database
Kubernetes resource allows you to create a logical database within the PostgreSQL instance.
Create the pg-database-sample.yaml
file with the following content:
apiVersion: aiven.io/v1alpha1\nkind: Database\nmetadata:\n name: pg-database-sample\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n # the name of the previously created PostgreSQL instance\n serviceName: pg-sample\n\n project: <your-project-name>\n lcCollate: en_US.UTF-8\n lcCtype: en_US.UTF-8\n
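Apply the file with:
kubectl apply -f pg-database-sample.yaml\n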
You can now connect to the pg-database-sample
using the credentials stored in the pg-connection
Secret.
Aiven uses the concept of a service user, which allows you to create users for different services. You can create one for the PostgreSQL instance.
1. Create a file named pg-service-user.yaml:
apiVersion: aiven.io/v1alpha1\nkind: ServiceUser\nmetadata:\n name: pg-service-user\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n connInfoSecretTarget:\n name: pg-service-user-connection\n\n project: <your-project-name>\n serviceName: pg-sample\n
2. Apply the configuration with the following command.
kubectl apply -f pg-service-user.yaml\n
The ServiceUser
resource generates a Secret with connection information, in this case named pg-service-user-connection
:
kubectl get secret pg-service-user-connection -o json | jq '.data | map_values(@base64d)'\n
The output has the password and username:
{\n \"PASSWORD\": \"<secret-password>\",\n \"USERNAME\": \"pg-service-user\"\n}\n
You can now connect to the PostgreSQL instance using the credentials generated above, and the host information from the pg-connection
Secret.
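As a sketch (assuming a local psql client and the default defaultdb database; <PGHOST> and <PGPORT> are placeholders for the values stored in the pg-connection Secret), the two Secrets combine like this:
PGPASSWORD="$(kubectl get secret pg-service-user-connection -o json | jq -r '.data.PASSWORD | @base64d')" psql -h <PGHOST> -p <PGPORT> -U pg-service-user defaultdb\n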
Connection pooling allows you to maintain very large numbers of connections to a database while minimizing the consumption of server resources. For more information, refer to the connection pooling article in Aiven Docs. Aiven for PostgreSQL uses PGBouncer for connection pooling.
You can create a connection pool with the ConnectionPool
resource using the previously created Database
and ServiceUser
:
Create a new file named pg-connection-pool.yaml
with the following content:
apiVersion: aiven.io/v1alpha1\nkind: ConnectionPool\nmetadata:\n name: pg-connection-pool\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n connInfoSecretTarget:\n name: pg-connection-pool-connection\n\n project: <your-project-name>\n serviceName: pg-sample\n databaseName: pg-database-sample\n username: pg-service-user\n poolSize: 10\n poolMode: transaction\n
The ConnectionPool
generates a Secret with the connection info using the name from the connInfoSecretTarget.Name
field:
kubectl get secret pg-connection-pool-connection -o json | jq '.data | map_values(@base64d)' \n
The output is similar to the following:
{\n \"DATABASE_URI\": \"postgres://pg-service-user:<secret-password>@pg-sample-your-project.aivencloud.com:13040/pg-connection-pool?sslmode=require\",\n \"PGDATABASE\": \"pg-database-sample\",\n \"PGHOST\": \"pg-sample-your-project.aivencloud.com\",\n \"PGPASSWORD\": \"<secret-password>\",\n \"PGPORT\": \"13040\",\n \"PGSSLMODE\": \"require\",\n \"PGUSER\": \"pg-service-user\"\n}\n
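Because the DATABASE_URI above already points at the pooler port, connecting through PGBouncer follows the same one-liner pattern as before; a sketch, not part of the original guide:
psql "$(kubectl get secret pg-connection-pool-connection -o json | jq -r '.data.DATABASE_URI | @base64d')" -c 'SELECT 1;'\n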
"},{"location":"resources/postgresql.html#creating-a-postgresql-read-only-replica","title":"Creating a PostgreSQL read-only replica","text":"Read-only replicas can be used to reduce the load on the primary service by making read-only queries against the replica service.
To create a read-only replica for a PostgreSQL service, you create a second PostgreSQL service and use serviceIntegrations to replicate data from your primary service.
The example that follows creates a primary service and a read-only replica.
1. Create a new file named pg-read-replica.yaml
with the following:
apiVersion: aiven.io/v1alpha1\nkind: PostgreSQL\nmetadata:\n name: primary-pg-service\nspec:\n # gets the authentication token from the `aiven-token` Secret\n authSecretRef:\n name: aiven-token\n key: token\n\n # add your project's name here\n project: <your-project-name>\n\n # add the cloud provider and plan of your choice\n # you can see all of the options at https://aiven.io/pricing\n cloudName: google-europe-west1\n plan: startup-4\n\n # general Aiven configuration\n maintenanceWindowDow: friday\n maintenanceWindowTime: 23:00:00\n userConfig:\n pg_version: '15'\n\n---\n\napiVersion: aiven.io/v1alpha1\nkind: PostgreSQL\nmetadata:\n name: read-replica-pg\nspec:\n # gets the authentication token from the `aiven-token` Secret\n authSecretRef:\n name: aiven-token\n key: token\n\n # add your project's name here\n project: <your-project-name>\n\n # add the cloud provider and plan of your choice\n # you can see all of the options at https://aiven.io/pricing\n cloudName: google-europe-west1\n plan: startup-4\n\n # general Aiven configuration\n maintenanceWindowDow: saturday\n maintenanceWindowTime: 23:00:00\n userConfig:\n pg_version: '15'\n\n # use the read_replica integration and point it to your primary service\n serviceIntegrations:\n - integrationType: read_replica\n sourceServiceName: primary-pg-service\n
Note
You can create the replica service in a different region or on a different cloud provider.
2. Apply the configuration with the following command:
kubectl apply -f pg-read-replica.yaml\n
The output is similar to the following:
postgresql.aiven.io/primary-pg-service created\npostgresql.aiven.io/read-replica-pg created\n
3. Check the status of the primary service with the following command:
kubectl get postgresqls.aiven.io primary-pg-service\n
The output is similar to the following:
NAME PROJECT REGION PLAN STATE\nprimary-pg-service <your-project-name> google-europe-west1 startup-4 RUNNING\n
The resource can be in the BUILDING
state for a few minutes. After the state of the primary service changes to RUNNING
, the read-only replica is created. You can check the status of the replica using the same command with the name of the replica:
kubectl get postgresqls.aiven.io read-replica-pg\n
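If you script this check, kubectl wait can block until the replica reports RUNNING. This is a generic kubectl sketch; it assumes the operator exposes the state under .status.state, as the STATE column above suggests:
kubectl wait --for=jsonpath='{.status.state}'=RUNNING postgresqls.aiven.io/read-replica-pg --timeout=600s\n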
"},{"location":"resources/project-vpc.html","title":"Aiven Project VPC","text":"Virtual Private Cloud (VPC) peering is a method of connecting separate AWS, Google Cloud or Microsoft Azure private networks to each other. It makes it possible for the virtual machines in the different VPCs to talk to each other directly without going through the public internet.
Within the Aiven Kubernetes Operator, you can create a ProjectVPC
on Aiven's side to connect to your cloud provider.
Note
Before going through this guide, make sure you have a Kubernetes cluster with the operator installed, and a Kubernetes Secret with an Aiven authentication token.
"},{"location":"resources/project-vpc.html#creating-an-aiven-vpc","title":"Creating an Aiven VPC","text":"1. Create a file named vpc-sample.yaml
with the following content:
apiVersion: aiven.io/v1alpha1\nkind: ProjectVPC\nmetadata:\n name: vpc-sample\nspec:\n # gets the authentication token from the `aiven-token` Secret\n authSecretRef:\n name: aiven-token\n key: token\n\n project: <your-project-name>\n\n # creates a VPC to link an AWS account on the South Africa region\n cloudName: aws-af-south-1\n\n # the network range used by the VPC\n networkCidr: 192.168.0.0/24\n
2. Create the VPC by applying the configuration:
kubectl apply -f vpc-sample.yaml\n
3. Review the resource you created with the following command:
kubectl get projectvpcs.aiven.io vpc-sample\n
The output is similar to the following:
NAME PROJECT CLOUD NETWORK CIDR\nvpc-sample <your-project> aws-af-south-1 192.168.0.0/24\n
"},{"location":"resources/project-vpc.html#using-the-aiven-vpc","title":"Using the Aiven VPC","text":"Follow the official VPC documentation to complete the VPC peering on your cloud of choice.
"},{"location":"resources/project.html","title":"Aiven Project","text":"Note
Before going through this guide, make sure you have a Kubernetes cluster with the operator installed and a Kubernetes Secret with an Aiven authentication token.
The Project
CRD allows you to create Aiven Projects, where your resources can be located.
To create a fully working Aiven Project with the Aiven Operator, you need a source Aiven Project already created with a working billing configuration, like a credit card.
Create a file named project-sample.yaml
with the following content:
apiVersion: aiven.io/v1alpha1\nkind: Project\nmetadata:\n name: project-sample\nspec:\n # the source Project to copy the billing information from\n copyFromProject: <your-source-project>\n\n authSecretRef:\n name: aiven-token\n key: token\n\n connInfoSecretTarget:\n name: project-sample\n
Apply the resource with:
kubectl apply -f project-sample.yaml\n
Verify the newly created Project:
kubectl get projects.aiven.io project-sample\n
The output is similar to the following:
NAME AGE\nproject-sample 22s\n
"},{"location":"resources/redis.html","title":"Redis","text":"Aiven for Redis\u00ae* is a fully managed in-memory NoSQL database that you can deploy in the cloud of your choice to store and access data quickly and efficiently.
Note
Before going through this guide, make sure you have a Kubernetes cluster with the operator installed and a Kubernetes Secret with an Aiven authentication token.
"},{"location":"resources/redis.html#creating-a-redis-instance","title":"Creating a Redis instance","text":"1. Create a file named redis-sample.yaml
, and add the following content:
apiVersion: aiven.io/v1alpha1\nkind: Redis\nmetadata:\n name: redis-sample\nspec:\n # gets the authentication token from the `aiven-token` Secret\n authSecretRef:\n name: aiven-token\n key: token\n\n # outputs the Redis connection on the `redis-secret` Secret\n connInfoSecretTarget:\n name: redis-secret\n\n # add your Project name here\n project: <your-project-name>\n\n # cloud provider and plan of your choice\n # you can check all of the possibilities here https://aiven.io/pricing\n cloudName: google-europe-west1\n plan: startup-4\n\n # general Aiven configuration\n maintenanceWindowDow: friday\n maintenanceWindowTime: 23:00:00\n\n # specific Redis configuration\n userConfig:\n redis_maxmemory_policy: \"allkeys-random\"\n
2. Create the service by applying the configuration:
kubectl apply -f redis-sample.yaml \n
3. Review the resource you created with this command:
kubectl describe redis.aiven.io redis-sample\n
The output is similar to the following:
...\nStatus:\n Conditions:\n Last Transition Time: 2023-01-19T14:48:59Z\n Message: Instance was created or update on Aiven side\n Reason: Created\n Status: True\n Type: Initialized\n Last Transition Time: 2023-01-19T14:48:59Z\n Message: Instance was created or update on Aiven side, status remains unknown\n Reason: Created\n Status: Unknown\n Type: Running\n State: REBUILDING\n...\n
The resource will be in the REBUILDING
state for a few minutes. Once the state changes to RUNNING
, you can access the resource.
For your convenience, the operator automatically stores the Redis connection information in a Secret created with the name specified on the connInfoSecretTarget
field.
To view the details of the Secret, use the following command:
kubectl describe secret redis-secret \n
The output is similar to the following:
Name: redis-secret\nNamespace: default\nLabels: <none>\nAnnotations: <none>\n\nType: Opaque\n\nData\n====\nSSL: 8 bytes\nUSER: 7 bytes\nHOST: 60 bytes\nPASSWORD: 24 bytes\nPORT: 5 bytes\n
You can use jq to quickly decode the Secret:
kubectl get secret redis-secret -o json | jq '.data | map_values(@base64d)'\n
The output is similar to the following:
{\n \"HOST\": \"redis-sample-your-project.aivencloud.com\",\n \"PASSWORD\": \"<secret-password>\",\n \"PORT\": \"14610\",\n \"SSL\": \"required\",\n \"USER\": \"default\"\n}\n
"},{"location":"resources/service-integrations.html","title":"Service Integrations","text":"Service Integrations provide additional functionality and features by connecting different Aiven services together.
See our Getting Started with Service Integrations guide for more information.
Note
Before going through this guide, make sure you have a Kubernetes cluster with the operator installed, and a Kubernetes Secret with an Aiven authentication token.
"},{"location":"resources/service-integrations.html#send-kafka-logs-to-a-kafka-topic","title":"Send Kafka logs to a Kafka Topic","text":"This integration allows you to send Kafka service logs to a specific Kafka Topic.
First, let's create a Kafka service and a topic.
1. Create a new file named kafka-sample-topic.yaml
with the following content:
apiVersion: aiven.io/v1alpha1\nkind: Kafka\nmetadata:\n name: kafka-sample\nspec:\n # gets the authentication token from the `aiven-token` Secret\n authSecretRef:\n name: aiven-token\n key: token\n\n # outputs the Kafka connection on the `kafka-connection` Secret\n connInfoSecretTarget:\n name: kafka-auth\n\n # add your Project name here\n project: <your-project-name>\n\n # cloud provider and plan of your choice\n # you can check all of the possibilities here https://aiven.io/pricing\n cloudName: google-europe-west1\n plan: startup-2\n\n # general Aiven configuration\n maintenanceWindowDow: friday\n maintenanceWindowTime: 23:00:00\n\n # specific Kafka configuration\n userConfig:\n kafka_version: '2.7'\n\n---\n\napiVersion: aiven.io/v1alpha1\nkind: KafkaTopic\nmetadata:\n name: logs\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n project: <your-project-name>\n serviceName: kafka-sample\n\n # here we can specify how many partitions the topic should have\n partitions: 3\n # and the topic replication factor\n replication: 2\n\n # we also support various topic-specific configurations\n config:\n flush_ms: 100\n
2. Create the resource on Kubernetes:
kubectl apply -f kafka-sample-topic.yaml \n
3. Now, create a ServiceIntegration
resource to send the Kafka logs to the created topic. In the same file, add the following YAML:
apiVersion: aiven.io/v1alpha1\nkind: ServiceIntegration\nmetadata:\n name: service-integration-kafka-logs\nspec:\n\n # gets the authentication token from the `aiven-token` Secret\n authSecretRef:\n name: aiven-token\n key: token\n\n project: <your-project-name>\n\n # indicates the type of the integration\n integrationType: kafka_logs\n\n # we will send the logs to the same kafka-sample instance\n # the source and destination are the same\n sourceServiceName: kafka-sample\n destinationServiceName: kafka-sample\n\n # the topic name we will send to\n kafkaLogs:\n kafka_topic: logs\n
4. Reapply the resource on Kubernetes:
kubectl apply -f kafka-sample-topic.yaml \n
5. Let's check the created service integration:
kubectl get serviceintegrations.aiven.io service-integration-kafka-logs\n
The output is similar to the following:
NAME PROJECT TYPE SOURCE SERVICE NAME DESTINATION SERVICE NAME SOURCE ENDPOINT ID DESTINATION ENDPOINT ID\nservice-integration-kafka-logs your-project kafka_logs kafka-sample kafka-sample \n
Your Kafka service logs are now being streamed to the logs
Kafka topic.
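To see the log records arriving, you can consume a few messages from the logs topic with kcat, reusing the same SSL flags and kafka-auth Secret layout as the connection-test Pod shown in the Kafka guide; a sketch, not part of the original guide:
kcat -b $(HOST):$(PORT) -X security.protocol=SSL -X ssl.key.location=/kafka-auth/ACCESS_KEY -X ssl.key.password=$(PASSWORD) -X ssl.certificate.location=/kafka-auth/ACCESS_CERT -X ssl.ca.location=/kafka-auth/CA_CERT -C -t logs -c 5\n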
Aiven for Apache Kafka is an excellent option if you need to run Apache Kafka at scale. With Aiven Kubernetes Operator you can get up and running with a suitably sized Apache Kafka service in a few minutes.
Note
Before going through this guide, make sure you have a Kubernetes cluster with the operator installed and a Kubernetes Secret with an Aiven authentication token.
"},{"location":"resources/kafka/index.html#creating-a-kafka-instance","title":"Creating a Kafka instance","text":"1. Create a file named kafka-sample.yaml
, and add the following content:
apiVersion: aiven.io/v1alpha1\nkind: Kafka\nmetadata:\n name: kafka-sample\nspec:\n # gets the authentication token from the `aiven-token` Secret\n authSecretRef:\n name: aiven-token\n key: token\n\n # outputs the Kafka connection on the `kafka-connection` Secret\n connInfoSecretTarget:\n name: kafka-auth\n\n # add your Project name here\n project: <your-project-name>\n\n # cloud provider and plan of your choice\n # you can check all of the possibilities here https://aiven.io/pricing\n cloudName: google-europe-west1\n plan: startup-2\n\n # general Aiven configuration\n maintenanceWindowDow: friday\n maintenanceWindowTime: 23:00:00\n\n # specific Kafka configuration\n userConfig:\n kafka_version: '2.7'\n
2. Create the following resource on Kubernetes:
kubectl apply -f kafka-sample.yaml \n
3. Inspect the service created using the command below.
kubectl get kafka.aiven.io kafka-sample\n
The output has the project name and state, similar to the following:
NAME PROJECT REGION PLAN STATE\nkafka-sample <your-project> google-europe-west1 startup-2 RUNNING\n
After a couple of minutes, the STATE
field changes to RUNNING
, and the service is ready to be used.
For your convenience, the operator automatically stores the Kafka connection information in a Secret created with the name specified on the connInfoSecretTarget
field.
kubectl describe secret kafka-auth \n
The output is similar to the following:
Name: kafka-auth\nNamespace: default\nAnnotations: <none>\n\nType: Opaque\n\nData\n====\nCA_CERT: 1537 bytes\nHOST: 41 bytes\nPASSWORD: 16 bytes\nPORT: 5 bytes\nUSERNAME: 8 bytes\nACCESS_CERT: 1533 bytes\nACCESS_KEY: 2484 bytes\n
You can use jq to quickly decode the Secret:
kubectl get secret kafka-auth -o json | jq '.data | map_values(@base64d)'\n
The output is similar to the following:
{\n \"CA_CERT\": \"<secret-ca-cert>\",\n \"ACCESS_CERT\": \"<secret-cert>\",\n \"ACCESS_KEY\": \"<secret-access-key>\",\n \"HOST\": \"kafka-sample-your-project.aivencloud.com\",\n \"PASSWORD\": \"<secret-password>\",\n \"PORT\": \"13041\",\n \"USERNAME\": \"avnadmin\"\n}\n
"},{"location":"resources/kafka/index.html#testing-the-connection","title":"Testing the connection","text":"You can verify your access to the Kafka cluster from a Pod using the authentication data from the kafka-auth
Secret. kcat is used for our examples below.
1. Create a file named kafka-test-connection.yaml
, and add the following content:
apiVersion: v1\nkind: Pod\nmetadata:\n name: kafka-test-connection\nspec:\n restartPolicy: Never\n containers:\n - image: edenhill/kcat:1.7.0\n name: kcat\n\n # the command below will connect to the Kafka cluster\n # and output its metadata\n command: [\n 'kcat', '-b', '$(HOST):$(PORT)',\n '-X', 'security.protocol=SSL',\n '-X', 'ssl.key.location=/kafka-auth/ACCESS_KEY',\n '-X', 'ssl.key.password=$(PASSWORD)',\n '-X', 'ssl.certificate.location=/kafka-auth/ACCESS_CERT',\n '-X', 'ssl.ca.location=/kafka-auth/CA_CERT',\n '-L'\n ]\n\n # loading the data from the Secret as environment variables\n # useful to access the Kafka information, like hostname and port\n envFrom:\n - secretRef:\n name: kafka-auth\n\n volumeMounts:\n - name: kafka-auth\n mountPath: \"/kafka-auth\"\n\n # loading the data from the Secret as files in a volume\n # useful to access the Kafka certificates \n volumes:\n - name: kafka-auth\n secret:\n secretName: kafka-auth\n
2. Apply the file.
kubectl apply -f kafka-test-connection.yaml\n
Once the Pod completes successfully, its log contains the metadata information about the Kafka cluster.
kubectl logs kafka-test-connection \n
The output is similar to the following:
Metadata for all topics (from broker -1: ssl://kafka-sample-your-project.aivencloud.com:13041/bootstrap):\n 3 brokers:\n broker 2 at 35.205.234.70:13041\n broker 3 at 34.77.127.70:13041 (controller)\n broker 1 at 34.78.146.156:13041\n 0 topics:\n
"},{"location":"resources/kafka/index.html#creating-a-kafkatopic-and-kafkaacl","title":"Creating a KafkaTopic
and KafkaACL
","text":"To properly produce and consume content on Kafka, you need topics and ACLs. The operator supports both with the KafkaTopic
and KafkaACL
resources.
Here is how to create a Kafka topic named random-strings
where random string messages will be sent.
1. Create a file named kafka-topic-random-strings.yaml
with the content below:
apiVersion: aiven.io/v1alpha1\nkind: KafkaTopic\nmetadata:\n name: random-strings\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n project: <your-project-name>\n serviceName: kafka-sample\n\n # here we can specify how many partitions the topic should have\n partitions: 3\n # and the topic replication factor\n replication: 2\n\n # we also support various topic-specific configurations\n config:\n flush_ms: 100\n
2. Create the resource on Kubernetes:
kubectl apply -f kafka-topic-random-strings.yaml\n
3. Create a user and an ACL. To use the Kafka topic, create a new user with the ServiceUser
resource (to avoid using the avnadmin
superuser), and a KafkaACL
to allow the user access to the topic.
In a file named kafka-acl-user-crab.yaml
, add the following two resources:
apiVersion: aiven.io/v1alpha1\nkind: ServiceUser\nmetadata:\n # the name of our user \ud83e\udd80\n name: crab\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n # the Secret name we will store the users' connection information\n # looks exactly the same as the Secret generated when creating the Kafka cluster\n # we will use this Secret to produce and consume events later!\n connInfoSecretTarget:\n name: kafka-crab-connection\n\n # the Aiven project the user is related to\n project: <your-project-name>\n\n # the name of our Kafka Service\n serviceName: kafka-sample\n\n---\n\napiVersion: aiven.io/v1alpha1\nkind: KafkaACL\nmetadata:\n name: crab\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n project: <your-project-name>\n serviceName: kafka-sample\n\n # the username from the ServiceUser above\n username: crab\n\n # the ACL allows to produce and consume on the topic\n permission: readwrite\n\n # specify the topic we created before\n topic: random-strings\n
To create the crab
user and its permissions, execute the following command:
kubectl apply -f kafka-acl-user-crab.yaml\n
"},{"location":"resources/kafka/index.html#producing-and-consuming-events","title":"Producing and consuming events","text":"Using the previously created KafkaTopic
, ServiceUser
, KafkaACL
, you can produce and consume events.
You can use kcat to produce a message into Kafka, with the -t random-strings
argument selecting the desired topic and the content of the /etc/issue
file used as the message's body.
1. Create a kafka-crab-produce.yaml
file with the content below:
apiVersion: v1\nkind: Pod\nmetadata:\n name: kafka-crab-produce\nspec:\n restartPolicy: Never\n containers:\n - image: edenhill/kcat:1.7.0\n name: kcat\n\n # the command below will produce a message with the /etc/issue file content\n command: [\n 'kcat', '-b', '$(HOST):$(PORT)',\n '-X', 'security.protocol=SSL',\n '-X', 'ssl.key.location=/crab-auth/ACCESS_KEY',\n '-X', 'ssl.key.password=$(PASSWORD)',\n '-X', 'ssl.certificate.location=/crab-auth/ACCESS_CERT',\n '-X', 'ssl.ca.location=/crab-auth/CA_CERT',\n '-P', '-t', 'random-strings', '/etc/issue',\n ]\n\n # loading the crab user data from the Secret as environment variables\n # useful to access the Kafka information, like hostname and port\n envFrom:\n - secretRef:\n name: kafka-crab-connection\n\n volumeMounts:\n - name: crab-auth\n mountPath: \"/crab-auth\"\n\n # loading the crab user information from the Secret as files in a volume\n # useful to access the Kafka certificates \n volumes:\n - name: crab-auth\n secret:\n secretName: kafka-crab-connection\n
2. Create the Pod with the following content:
kubectl apply -f kafka-crab-produce.yaml\n
Now your event is stored in Kafka.
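If you prefer the command line over a UI, the message can also be read back with a kcat consumer. The command below is a sketch that assumes the same mounts and environment variables as the producer Pod above:
kcat -b $(HOST):$(PORT) -X security.protocol=SSL -X ssl.key.location=/crab-auth/ACCESS_KEY -X ssl.key.password=$(PASSWORD) -X ssl.certificate.location=/crab-auth/ACCESS_CERT -X ssl.ca.location=/crab-auth/CA_CERT -C -t random-strings -e\n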
To consume a message, you can use a graphical interface called Kowl. It allows you to explore information about your Kafka cluster, such as brokers, topics, or consumer groups.
1. Create a Kubernetes Pod and service to deploy and access Kowl. Create a file named kafka-crab-consume.yaml
with the content below:
apiVersion: v1\nkind: Pod\nmetadata:\n name: kafka-crab-consume\n labels:\n app: kafka-crab-consume\nspec:\n containers:\n - image: quay.io/cloudhut/kowl:v1.4.0\n name: kowl\n\n # kowl configuration values\n env:\n - name: KAFKA_TLS_ENABLED\n value: 'true'\n\n - name: KAFKA_BROKERS\n value: $(HOST):$(PORT)\n - name: KAFKA_TLS_PASSPHRASE\n value: $(PASSWORD)\n\n - name: KAFKA_TLS_CAFILEPATH\n value: /crab-auth/CA_CERT\n - name: KAFKA_TLS_CERTFILEPATH\n value: /crab-auth/ACCESS_CERT\n - name: KAFKA_TLS_KEYFILEPATH\n value: /crab-auth/ACCESS_KEY\n\n # inject all connection information as environment variables\n envFrom:\n - secretRef:\n name: kafka-crab-connection\n\n volumeMounts:\n - name: crab-auth\n mountPath: /crab-auth\n\n # loading the crab user information from the Secret as files in a volume\n # useful to access the Kafka certificates \n volumes:\n - name: crab-auth\n secret:\n secretName: kafka-crab-connection\n\n---\n\n# we will be using a simple service to access Kowl on port 8080\napiVersion: v1\nkind: Service\nmetadata:\n name: kafka-crab-consume\nspec:\n selector:\n app: kafka-crab-consume\n ports:\n - port: 8080\n targetPort: 8080\n
2. Create the resources with:
kubectl apply -f kafka-crab-consume.yaml\n
3. In another terminal create a port-forward tunnel to your Pod:
kubectl port-forward kafka-crab-consume 8080:8080\n
4. In the browser of your choice, access the http://localhost:8080 address. You now see a page with the random-strings
topic listed:
5. Click the topic name to see the message.
You have now consumed the message.
"},{"location":"resources/kafka/connect.html","title":"Kafka Connect","text":"Aiven for Apache Kafka Connect is a framework and a runtime for integrating Kafka with other systems. Kafka connectors can either be a source (for pulling data from other systems into Kafka) or sink (for pushing data into other systems from Kafka).
This section involves a few different Kubernetes CRDs:
1. A Kafka
service with a KafkaTopic
2. A KafkaConnect
service
3. A ServiceIntegration
to integrate the Kafka
and KafkaConnect
services
4. A PostgreSQL
service used as a sink to receive messages from Kafka
5. A KafkaConnector
to finally connect the Kafka
with the PostgreSQL
Create a file named kafka-sample-connect.yaml
with the following content:
apiVersion: aiven.io/v1alpha1\nkind: Kafka\nmetadata:\n name: kafka-sample-connect\nspec:\n # gets the authentication token from the `aiven-token` Secret\n authSecretRef:\n name: aiven-token\n key: token\n\n # outputs the Kafka connection on the `kafka-connection` Secret\n connInfoSecretTarget:\n name: kafka-auth\n\n # add your Project name here\n project: <your-project-name>\n\n # cloud provider and plan of your choice\n # you can check all of the possibilities here https://aiven.io/pricing\n cloudName: google-europe-west1\n plan: business-4\n\n # general Aiven configuration\n maintenanceWindowDow: friday\n maintenanceWindowTime: 23:00:00\n\n # specific Kafka configuration\n userConfig:\n kafka_version: '2.7'\n kafka_connect: true\n\n---\n\napiVersion: aiven.io/v1alpha1\nkind: KafkaTopic\nmetadata:\n name: kafka-topic-connect\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n project: <your-project-name>\n serviceName: kafka-sample-connect\n\n replication: 2\n partitions: 1\n
Next, create a file named kafka-connect.yaml
and add the following KafkaConnect
resource:
apiVersion: aiven.io/v1alpha1\nkind: KafkaConnect\nmetadata:\n name: kafka-connect\nspec:\n # gets the authentication token from the `aiven-token` Secret\n authSecretRef:\n name: aiven-token\n key: token\n\n # add your Project name here\n project: <your-project-name>\n\n # cloud provider and plan of your choice\n # you can check all of the possibilities here https://aiven.io/pricing\n cloudName: google-europe-west1\n plan: startup-4\n\n # general Aiven configuration\n maintenanceWindowDow: friday\n maintenanceWindowTime: 23:00:00\n
Now let's create a ServiceIntegration
. It will use the fields sourceServiceName
and destinationServiceName
to integrate the previously created kafka-sample-connect
and kafka-connect
. Open a new file named service-integration-connect.yaml
and add the content below:
apiVersion: aiven.io/v1alpha1\nkind: ServiceIntegration\nmetadata:\n name: service-integration-kafka-connect\nspec:\n\n # gets the authentication token from the `aiven-token` Secret\n authSecretRef:\n name: aiven-token\n key: token\n\n project: <your-project-name>\n\n # indicates the type of the integration\n integrationType: kafka_connect\n\n # we will send messages from the `kafka-sample-connect` to `kafka-connect`\n sourceServiceName: kafka-sample-connect\n destinationServiceName: kafka-connect\n
Let's add an Aiven for PostgreSQL service. It will be the service used as a sink, receiving messages from the kafka-sample-connect
cluster. Create a file named pg-sample-connect.yaml
with the content below:
apiVersion: aiven.io/v1alpha1\nkind: PostgreSQL\nmetadata:\n name: pg-connect\nspec:\n\n # gets the authentication token from the `aiven-token` Secret\n authSecretRef:\n name: aiven-token\n key: token\n\n # outputs the PostgreSQL connection on the `pg-connection` Secret\n connInfoSecretTarget:\n name: pg-connection\n\n # add your Project name here\n project: <your-project-name>\n\n # cloud provider and plan of your choice\n # you can check all of the possibilities here https://aiven.io/pricing\n cloudName: google-europe-west1\n plan: startup-4\n\n # general Aiven configuration\n maintenanceWindowDow: friday\n maintenanceWindowTime: 23:00:00\n
Finally, let's add the glue of everything: a KafkaConnector
. As described in the specification, it will receive messages from the kafka-sample-connect
cluster and send them to the pg-connect
service. Check our official documentation for more connectors.
Create a file named kafka-connector-connect.yaml
with the content below:
apiVersion: aiven.io/v1alpha1\nkind: KafkaConnector\nmetadata:\n name: kafka-connector\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n project: <your-project-name>\n\n # the Kafka cluster name\n serviceName: kafka-sample-connect\n\n # the connector we will be using\n connectorClass: io.aiven.connect.jdbc.JdbcSinkConnector\n\n userConfig:\n auto.create: \"true\"\n\n # constructs the pg-connect connection information\n connection.url: 'jdbc:postgresql://{{ fromSecret \"pg-connection\" \"PGHOST\"}}:{{ fromSecret \"pg-connection\" \"PGPORT\" }}/{{ fromSecret \"pg-connection\" \"PGDATABASE\" }}'\n connection.user: '{{ fromSecret \"pg-connection\" \"PGUSER\" }}'\n connection.password: '{{ fromSecret \"pg-connection\" \"PGPASSWORD\" }}'\n\n # specify which topics it will watch\n topics: kafka-topic-connect\n\n key.converter: org.apache.kafka.connect.json.JsonConverter\n value.converter: org.apache.kafka.connect.json.JsonConverter\n value.converter.schemas.enable: \"true\"\n
With all the files created, apply the new Kubernetes resources:
kubectl apply \\\n -f kafka-sample-connect.yaml \\\n -f kafka-connect.yaml \\\n -f service-integration-connect.yaml \\\n -f pg-sample-connect.yaml \\\n -f kafka-connector-connect.yaml\n
It will take some time for all the services to be up and running. You can check their status with the following command:
kubectl get \\\n kafkas.aiven.io/kafka-sample-connect \\\n kafkaconnects.aiven.io/kafka-connect \\\n postgresqls.aiven.io/pg-connect \\\n kafkaconnectors.aiven.io/kafka-connector\n
The output is similar to the following:
NAME PROJECT REGION PLAN STATE\nkafka.aiven.io/kafka-sample-connect your-project google-europe-west1 business-4 RUNNING\n\nNAME STATE\nkafkaconnect.aiven.io/kafka-connect RUNNING\n\nNAME PROJECT REGION PLAN STATE\npostgresql.aiven.io/pg-connect your-project google-europe-west1 startup-4 RUNNING\n\nNAME SERVICE NAME PROJECT CONNECTOR CLASS STATE TASKS TOTAL TASKS RUNNING\nkafkaconnector.aiven.io/kafka-connector kafka-sample-connect your-project io.aiven.connect.jdbc.JdbcSinkConnector RUNNING 1 1\n
The deployment is finished when all services have the state RUNNING
."},{"location":"resources/kafka/connect.html#testing","title":"Testing","text":"To test the connection integration, let's produce a Kafka message using kcat from within the Kubernetes cluster. We will deploy a Pod responsible for crafting a message and sending to the Kafka cluster, using the kafka-auth
secret generate by the Kafka
CRD.
Create a new file named kcat-connect.yaml
and add the content below:
apiVersion: v1\nkind: Pod\nmetadata:\n name: kafka-message\nspec:\n restartPolicy: Never\n containers:\n - image: edenhill/kcat:1.7.0\n name: kcat\n\n command: ['/bin/sh']\n args: [\n '-c',\n 'echo {\\\"schema\\\":{\\\"type\\\":\\\"struct\\\",\\\"fields\\\":[{ \\\"field\\\": \\\"text\\\", \\\"type\\\": \\\"string\\\", \\\"optional\\\": false } ] }, \\\"payload\\\": { \\\"text\\\": \\\"Hello World\\\" } } > /tmp/msg;\n\n kcat\n -b $(HOST):$(PORT)\n -X security.protocol=SSL\n -X ssl.key.location=/kafka-auth/ACCESS_KEY\n -X ssl.key.password=$(PASSWORD)\n -X ssl.certificate.location=/kafka-auth/ACCESS_CERT\n -X ssl.ca.location=/kafka-auth/CA_CERT\n -P -t kafka-topic-connect /tmp/msg'\n ]\n\n envFrom:\n - secretRef:\n name: kafka-auth\n\n volumeMounts:\n - name: kafka-auth\n mountPath: \"/kafka-auth\"\n\n volumes:\n - name: kafka-auth\n secret:\n secretName: kafka-auth\n
Apply the file with:
kubectl apply -f kcat-connect.yaml\n
The Pod will execute the commands and finish. You can confirm its Completed
state with:
kubectl get pod kafka-message\n
The output is similar to the following:
NAME READY STATUS RESTARTS AGE\nkafka-message 0/1 Completed 0 5m35s\n
If everything went smoothly, we should have our produced message in the PostgreSQL service. Let's check that out.
Create a file named psql-connect.yaml
with the content below:
apiVersion: v1\nkind: Pod\nmetadata:\n name: psql-connect\nspec:\n restartPolicy: Never\n containers:\n - image: postgres:13\n name: postgres\n # \"kafka-topic-connect\" is the table automatically created by KafkaConnect\n command: ['psql', '$(DATABASE_URI)', '-c', 'SELECT * from \"kafka-topic-connect\";']\n\n envFrom:\n - secretRef:\n name: pg-connection\n
Apply the file with:
kubectl apply -f psql-connect.yaml\n
After a couple of seconds, inspect its log with this command:
kubectl logs psql-connect \n
The output is similar to the following:
text \n-------------\n Hello World\n(1 row)\n
"},{"location":"resources/kafka/connect.html#clean-up","title":"Clean up","text":"To clean up all the created resources, use the following command:
kubectl delete \\\n -f kafka-sample-connect.yaml \\\n -f kafka-connect.yaml \\\n -f service-integration-connect.yaml \\\n -f pg-sample-connect.yaml \\\n -f kafka-connector-connect.yaml \\\n -f kcat-connect.yaml \\\n -f psql-connect.yaml\n
"},{"location":"resources/kafka/schema.html","title":"Kafka Schema","text":""},{"location":"resources/kafka/schema.html#creating-a-kafkaschema","title":"Creating a KafkaSchema
","text":"Aiven develops and maintain Karapace, an open source implementation of Kafka REST and schema registry. Is available out of the box for our managed Kafka service.
The schema registry address and authentication are the same as for the Kafka broker; the only difference is the use of port 13044.
First, let's create an Aiven for Apache Kafka service.
1. Create a file named kafka-sample-schema.yaml
and add the content below:
apiVersion: aiven.io/v1alpha1\nkind: Kafka\nmetadata:\n name: kafka-sample-schema\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n connInfoSecretTarget:\n name: kafka-auth\n\n project: <your-project-name>\n cloudName: google-europe-west1\n plan: startup-2\n maintenanceWindowDow: friday\n maintenanceWindowTime: 23:00:00\n\n userConfig:\n kafka_version: '2.7'\n\n # this flag enables the Schema registry\n schema_registry: true\n
2. Apply the changes with the following command:
kubectl apply -f kafka-sample-schema.yaml\n
Now, let's create the schema itself.
1. Create a new file named kafka-schema.yaml
and add the YAML content below:
apiVersion: aiven.io/v1alpha1\nkind: KafkaSchema\nmetadata:\n name: kafka-schema\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n project: <your-project-name>\n serviceName: kafka-sample-schema\n\n # the name of the Schema\n subjectName: MySchema\n\n # the schema itself, in JSON format\n schema: |\n {\n \"type\": \"record\",\n \"name\": \"MySchema\",\n \"fields\": [\n {\n \"name\": \"field\",\n \"type\": \"string\"\n }\n ]\n }\n\n # sets the schema compatibility level \n compatibilityLevel: BACKWARD\n
2. Create the schema with the command:
kubectl apply -f kafka-schema.yaml\n
3. Review the resource you created with the following command:
kubectl get kafkaschemas.aiven.io kafka-schema\n
The output is similar to the following:
NAME SERVICE NAME PROJECT SUBJECT COMPATIBILITY LEVEL VERSION\nkafka-schema kafka-sample-schema <your-project> MySchema BACKWARD 1\n
Now you can follow the instructions on how to use a schema registry in Java to make use of the schema you created.
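To double-check the registry from the command line, you can list its subjects. This is a sketch: Karapace implements the standard schema registry HTTP API, the hostname is a placeholder, and 13044 is the registry port mentioned above:
curl -u "avnadmin:<secret-password>" "https://<kafka-host>:13044/subjects"\n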
"}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"index.html","title":"Welcome to Aiven Operator for Kubernetes","text":"Provision and manage Aiven services from your Kubernetes cluster.
"},{"location":"index.html#what-is-aiven","title":"What is Aiven?","text":"Aiven offers managed services for the best open source data technologies, on a cloud of your choice.
We offer multiple cloud options because we believe that everyone should have access to great data platforms wherever they host their applications. Our customers tell us they love it because they know that they aren\u2019t locked in to one particular cloud platform for all their data needs.
"},{"location":"index.html#contributing","title":"Contributing","text":"The contribution guide covers everything you need to know about how you can contribute to Aiven Operator for Kubernetes. The developer guide will help you onboard as a developer.
"},{"location":"authentication.html","title":"Authenticating","text":"To get authenticated and authorized, set up the communication between the Aiven Operator for Kubernetes and Aiven by using a token stored in a Kubernetes secret. You can then refer to the secret name on every custom resource in the authSecretRef
field.
If you don't have an Aiven account yet, sign up here for a free trial. \ud83e\udd80
1. Generate an authentication token
Refer to our documentation article to generate your authentication token.
2. Create the Kubernetes Secret
The following command creates a secret named aiven-token
with a token
field containing the authentication token:
kubectl create secret generic aiven-token --from-literal=token=\"<your-token-here>\"\n
When managing your Aiven resources, we will be using the created Secret in the authSecretRef
field. It will look like the example below:
apiVersion: aiven.io/v1alpha1\nkind: PostgreSQL\nmetadata:\n name: pg-sample\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n [ ... ]\n
Also, note that within Aiven, all resources are conceptually inside a Project. By default, a random project name is generated when you signup, but you can also create new projects.
The Project name is required in most of the resources. It will look like the example below:
apiVersion: aiven.io/v1alpha1\nkind: PostgreSQL\nmetadata:\n name: pg-sample\nspec:\n project: <your-project-name-here>\n [ ... ]\n
"},{"location":"changelog.html","title":"Changelog","text":""},{"location":"changelog.html#v0160-2023-12-07","title":"v0.16.0 - 2023-12-07","text":"Preconditions
, CreateOrUpdate
, Delete
. Thanks to @atarax
Kafka
field userConfig.kafka.transaction_partition_verification_enable
, type boolean
: Enable verification that checks that the partition has been added to the transaction before writing transactional records to the partition
Cassandra
field userConfig.service_log
, type boolean
: Store logs for the service so that they are available in the HTTP API and console
Clickhouse
field userConfig.service_log
, type boolean
: Store logs for the service so that they are available in the HTTP API and console
Grafana
field userConfig.service_log
, type boolean
: Store logs for the service so that they are available in the HTTP API and console
KafkaConnect
field userConfig.service_log
, type boolean
: Store logs for the service so that they are available in the HTTP API and console
Kafka
field userConfig.kafka_rest_config.name_strategy_validation
, type boolean
: If true, validate that given schema is registered under expected subject name by the used name strategy when producing messages
Kafka
field userConfig.service_log
, type boolean
: Store logs for the service so that they are available in the HTTP API and console
MySQL
field userConfig.service_log
, type boolean
: Store logs for the service so that they are available in the HTTP API and console
OpenSearch
field userConfig.service_log
, type boolean
: Store logs for the service so that they are available in the HTTP API and console
PostgreSQL
field userConfig.pg_qualstats
, type object
: System-wide settings for the pg_qualstats extension
PostgreSQL
field userConfig.service_log
, type boolean
: Store logs for the service so that they are available in the HTTP API and console
Redis
field userConfig.service_log
, type boolean
: Store logs for the service so that they are available in the HTTP API and console
ServiceIntegration
: do not send empty user config to the API
string
type fields to the documentation
Clickhouse
field userConfig.private_access.clickhouse_mysql
, type boolean
: Allow clients to connect to clickhouse_mysql with a DNS name that always resolves to the service's private IP addresses
Clickhouse
field userConfig.privatelink_access.clickhouse_mysql
, type boolean
: Enable clickhouse_mysql
Clickhouse
field userConfig.public_access.clickhouse_mysql
, type boolean
: Allow clients to connect to clickhouse_mysql from the public internet for service nodes that are in a project VPC or another type of private network
Grafana
field userConfig.unified_alerting_enabled
, type boolean
: Enable or disable Grafana unified alerting functionality
Kafka
field userConfig.aiven_kafka_topic_messages
, type boolean
: Allow access to read Kafka topic messages in the Aiven Console and REST API
Kafka
field userConfig.kafka.sasl_oauthbearer_expected_audience
, type string
: The (optional) comma-delimited setting for the broker to use to verify that the JWT was issued for one of the expected audiences
Kafka
field userConfig.kafka.sasl_oauthbearer_expected_issuer
, type string
: Optional setting for the broker to use to verify that the JWT was created by the expected issuer
Kafka
field userConfig.kafka.sasl_oauthbearer_jwks_endpoint_url
, type string
: OIDC JWKS endpoint URL. By setting this the SASL SSL OAuth2/OIDC authentication is enabled
Kafka
field userConfig.kafka.sasl_oauthbearer_sub_claim_name
, type string
: Name of the scope from which to extract the subject claim from the JWT. Defaults to sub
Kafka
field userConfig.kafka_version
: enum [3.1, 3.3, 3.4, 3.5]
\u2192 [3.1, 3.3, 3.4, 3.5, 3.6]
Kafka
field userConfig.tiered_storage.local_cache.size
: deprecated
OpenSearch
field userConfig.opensearch.indices_memory_max_index_buffer_size
, type integer
: Absolute value. Default is unbound. Doesn't work without indices.memory.index_buffer_size
OpenSearch
field userConfig.opensearch.indices_memory_min_index_buffer_size
, type integer
: Absolute value. Default is 48mb. Doesn't work without indices.memory.index_buffer_size
OpenSearch
field userConfig.opensearch.auth_failure_listeners.internal_authentication_backend_limiting.authentication_backend
: enum [internal]
OpenSearch
field userConfig.opensearch.auth_failure_listeners.internal_authentication_backend_limiting.type
: enum [username]
OpenSearch
field userConfig.opensearch.auth_failure_listeners.ip_rate_limiting.type
: enum [ip]
OpenSearch
field userConfig.opensearch.search_max_buckets
: maximum 65536
\u2192 1000000
ServiceIntegration
field kafkaMirrormaker.kafka_mirrormaker.producer_max_request_size
: maximum 67108864
\u2192 268435456
projectVpcId
and projectVPCRef
mutable
nil
user config conversion
Cassandra
kind option additional_backup_regions
Grafana
kind option auto_login
Kafka
kind properties log_local_retention_bytes
, log_local_retention_ms
Kafka
kind option remote_log_storage_system_enable
OpenSearch
kind option auth_failure_listeners
OpenSearch
kind Index State Management options
Kafka
Kafka
version 3.5
Kafka
spec property scheduled_rebalance_max_delay_ms
Kafka
spec property remote_log_storage_system_enable
KafkaConnect
spec property scheduled_rebalance_max_delay_ms
OpenSearch
spec property openid
KAFKA_SCHEMA_REGISTRY_HOST
and KAFKA_SCHEMA_REGISTRY_PORT
for Kafka
KAFKA_CONNECT_HOST
, KAFKA_CONNECT_PORT
, KAFKA_REST_HOST
and KAFKA_REST_PORT
for Kafka
. Thanks to @Dariusch
unclean_leader_election_enable
from KafkaTopic
kind config
KAFKA_SASL_PORT
for Kafka
kind if SASL
authentication method is enabled
redis
options to datadog ServiceIntegration
Cassandra
version 3
Kafka
versions 3.1
and 3.4
kafka_rest_config.producer_max_request_size
option
kafka_mirrormaker.producer_compression_type
option
clusterRole.create
option to Helm chart. Thanks to @ryaneorth
OpenSearch.spec.userConfig.idp_pemtrustedcas_content
option. Specifies the PEM-encoded root certificate authority (CA) content for the SAML identity provider (IdP) server verification.
ServiceIntegration
kind SourceProjectName
and DestinationProjectName
fields
ServiceIntegration
fields MaxLength
validation
ServiceIntegration
validation: multiple user configs cannot be set
ServiceIntegration
, should not require destinationServiceName
or sourceEndpointID
field
ServiceIntegration
, add missing external_aws_cloudwatch_metrics
type config serialization
ServiceIntegration
integration type list
annotations
and labels
fields to connInfoSecretTarget
OpenSearch.spec.userConfig.opensearch.search_max_buckets
maximum to 65536
plan
as a required fieldminumim
, maximum
validations for number
typeip_filter
backward compatabilityclickhouseKafka.tables.data_format-property
enum RawBLOB
valueuserConfig.opensearch.email_sender_username
validation patternlog_cleaner_min_cleanable_ratio
minimum and maximum validation rules3.2
, reached EOL10
, reached EOLProjectVPC
by ID
to avoid conflicts ProjectVPC
deletion by exiting on DELETING
statusClickhouseUser
controllerClickhouseUser.spec.project
and ClickhouseUser.spec.serviceName
as immutablesignalfx
AuthSecretRef
fields marked as requireddatadog
, kafka_connect
, kafka_logs
, metrics
clickhouse_postgresql
, clickhouse_kafka
, clickhouse_kafka
, logs
, external_aws_cloudwatch_metrics
KafkaTopic.Spec.topicName
field. Unlike the metadata.name
, supports additional characters and has a longer length. KafkaTopic.Spec.topicName
replaces metadata.name
in future releases and will be marked as required.
false
value for termination_protection
property
min_cleanable_dirty_ratio
. Thanks to @TV2rd
Important: This release brings breaking changes to the userConfig
property. After new charts are installed, update your existing instances manually using the kubectl edit
command according to the API reference.
Note: It is now recommended to disable webhooks for Kubernetes version 1.25 and higher, as native CRD validation rules are used.
ip_filter
field is now of object
type
serviceIntegrations
on service types. Only the read_replica
type is available.
min_cleanable_dirty_ratio
config field support
spec.disk_space
property
linux/amd64
build. Thanks to @christoffer-eide
never
from choices of maintenance dowdevelopment
flag to configure logger's behaviormake generate-user-configs
)genericServiceHandler
to generalize service management KafkaACL
deletionProjectVPCRef
property to Kafka
, OpenSearch
, Clickhouse
and Redis
kinds to get ProjectVPC
ID when resource is readyProjectVPC
deletion, deletes by ID first if possible, then tries by nameclient.Object
storage update data lossfeatures: * add Redis CRD
improvements: * watch CRDs to reconcile token secrets
fixes: * fix RBACs of KafkaACL CRD
"},{"location":"changelog.html#v011-2021-09-13","title":"v0.1.1 - 2021-09-13","text":"improvements: * update helm installation docs
fixes: * fix typo in a kafka-connector kuttl test
"},{"location":"changelog.html#v010-2021-09-10","title":"v0.1.0 - 2021-09-10","text":"features: * initial release
"},{"location":"troubleshooting.html","title":"Troubleshooting","text":""},{"location":"troubleshooting.html#verifying-operator-status","title":"Verifying operator status","text":"Use the following checks to help you troubleshoot the Aiven Operator for Kubernetes.
"},{"location":"troubleshooting.html#checking-the-pods","title":"Checking the Pods","text":"Verify that all the operator Pods are READY
, and the STATUS
is Running
.
kubectl get pod -n aiven-operator-system\n
The output is similar to the following:
NAME READY STATUS RESTARTS AGE\naiven-operator-controller-manager-576d944499-ggttj 1/1 Running 0 12m\n
Verify that the cert-manager
Pods are also running.
kubectl get pod -n cert-manager\n
The output has the status:
NAME READY STATUS RESTARTS AGE\ncert-manager-7dd5854bb4-85cpv 1/1 Running 0 76s\ncert-manager-cainjector-64c949654c-n2z8l 1/1 Running 0 77s\ncert-manager-webhook-6bdffc7c9d-47w6z 1/1 Running 0 76s\n
"},{"location":"troubleshooting.html#visualizing-the-operator-logs","title":"Visualizing the operator logs","text":"Use the following command to visualize all the logs from the operator.
kubectl logs -n aiven-operator-system -l control-plane=controller-manager\n
"},{"location":"troubleshooting.html#verifing-the-operator-version","title":"Verifing the operator version","text":"kubectl get pod -n aiven-operator-system -l control-plane=controller-manager -o jsonpath=\"{.items[0].spec.containers[0].image}\"\n
"},{"location":"troubleshooting.html#known-issues-and-limitations","title":"Known issues and limitations","text":"We're always working to resolve problems that pop up in Aiven products. If your problem is listed below, we know about it and are working to fix it. If your problem isn't listed below, report it as an issue.
"},{"location":"troubleshooting.html#cert-manager","title":"cert-manager","text":""},{"location":"troubleshooting.html#issue","title":"Issue","text":"The following event appears on the operator Pod:
MountVolume.SetUp failed for volume \"cert\" : secret \"webhook-server-cert\" not found\n
"},{"location":"troubleshooting.html#impact","title":"Impact","text":"You cannot run the operator.
"},{"location":"troubleshooting.html#solution","title":"Solution","text":"Make sure that cert-manager is up and running.
kubectl get pod -n cert-manager\n
The output shows the status of each cert-manager:
NAME READY STATUS RESTARTS AGE\ncert-manager-7dd5854bb4-85cpv 1/1 Running 0 76s\ncert-manager-cainjector-64c949654c-n2z8l 1/1 Running 0 77s\ncert-manager-webhook-6bdffc7c9d-47w6z 1/1 Running 0 76s\n
"},{"location":"api-reference/index.html","title":"aiven.io/v1alpha1","text":"Autogenerated from CRD files.
"},{"location":"api-reference/cassandra.html","title":"Cassandra","text":""},{"location":"api-reference/cassandra.html#usage-example","title":"Usage example","text":"apiVersion: aiven.io/v1alpha1\nkind: Cassandra\nmetadata:\n name: my-cassandra\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n connInfoSecretTarget:\n name: cassandra-secret\n prefix: MY_SECRET_PREFIX_\n annotations:\n foo: bar\n labels:\n baz: egg\n\n project: aiven-project-name\n cloudName: google-europe-west1\n plan: startup-4\n\n maintenanceWindowDow: sunday\n maintenanceWindowTime: 11:00:00\n\n userConfig:\n migrate_sstableloader: true\n public_access:\n prometheus: true\n ip_filter:\n - network: 0.0.0.0\n description: whatever\n - network: 10.20.0.0/16\n
"},{"location":"api-reference/cassandra.html#Cassandra","title":"Cassandra","text":"Cassandra is the Schema for the cassandras API.
Required
apiVersion
(string). Value aiven.io/v1alpha1
.kind
(string). Value Cassandra
.metadata
(object). Data that identifies the object, including a name
string and optional namespace
.spec
(object). CassandraSpec defines the desired state of Cassandra. See below for nested schema.Appears on Cassandra
.
CassandraSpec defines the desired state of Cassandra.
Required
plan
(string, MaxLength: 128). Subscription plan.project
(string, Immutable, MaxLength: 63, Format: ^[a-zA-Z0-9_-]*$
). Target project.Optional
authSecretRef
(object). Authentication reference to Aiven token in a secret. See below for nested schema.cloudName
(string, MaxLength: 256). Cloud the service runs in.connInfoSecretTarget
(object). Information regarding secret creation. Exposed keys: CASSANDRA_HOST
, CASSANDRA_PORT
, CASSANDRA_USER
, CASSANDRA_PASSWORD
, CASSANDRA_URI
, CASSANDRA_HOSTS
. See below for nested schema.disk_space
(string, Format: ^[1-9][0-9]*(GiB|G)*
). The disk space of the service, possible values depend on the service type, the cloud provider and the project. Reducing will result in the service re-balancing.maintenanceWindowDow
(string, Enum: monday
, tuesday
, wednesday
, thursday
, friday
, saturday
, sunday
). Day of week when maintenance operations should be performed. One monday, tuesday, wednesday, etc.maintenanceWindowTime
(string, MaxLength: 8). Time of day when maintenance operations should be performed. UTC time in HH:mm:ss format.projectVPCRef
(object). ProjectVPCRef reference to ProjectVPC resource to use its ID as ProjectVPCID automatically. See below for nested schema.projectVpcId
(string, MaxLength: 36). Identifier of the VPC the service should be in, if any.serviceIntegrations
(array of objects, Immutable, MaxItems: 1). Service integrations to specify when creating a service. Not applied after initial service creation. See below for nested schema.tags
(object, AdditionalProperties: string). Tags are key-value pairs that allow you to categorize services.terminationProtection
(boolean). Prevent service from being deleted. It is recommended to have this enabled for all services.userConfig
(object). Cassandra specific user configuration options. See below for nested schema.Appears on spec
.
Authentication reference to Aiven token in a secret.
Required
key
(string, MinLength: 1). name
(string, MinLength: 1). Appears on spec
.
Information regarding secret creation. Exposed keys: `CASSANDRA_HOST`, `CASSANDRA_PORT`, `CASSANDRA_USER`, `CASSANDRA_PASSWORD`, `CASSANDRA_URI`, `CASSANDRA_HOSTS`.

**Required**

- `name` (string). Name of the secret resource to be created. By default, it is equal to the resource name.

**Optional**

- `annotations` (object, AdditionalProperties: string). Annotations added to the secret.
- `labels` (object, AdditionalProperties: string). Labels added to the secret.
- `prefix` (string). Prefix for the secret's keys. Added "as is" without any transformations. By default, it is equal to the kind name in uppercase + underscore, e.g. `KAFKA_`, `REDIS_`, etc.
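Because the generated keys carry the configured prefix, a workload can consume them directly as environment variables. A sketch, assuming the `cassandra-secret` name and `MY_SECRET_PREFIX_` prefix from the usage example above (the Pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: app
      image: alpine:3   # hypothetical application image
      command: ["sh", "-c", "env | grep MY_SECRET_PREFIX_"]
      envFrom:
        # Exposes e.g. MY_SECRET_PREFIX_CASSANDRA_HOST as an environment variable.
        - secretRef:
            name: cassandra-secret
```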
*Appears on `spec`.*
ProjectVPCRef reference to ProjectVPC resource to use its ID as ProjectVPCID automatically.

**Required**

- `name` (string, MinLength: 1).

**Optional**

- `namespace` (string, MinLength: 1).
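As a sketch, referencing a hypothetical ProjectVPC resource named `my-project-vpc` from a service spec could look like this, instead of hard-coding `projectVpcId`:

```yaml
spec:
  # The operator resolves the ProjectVPC's ID and uses it as projectVpcId.
  projectVPCRef:
    name: my-project-vpc   # hypothetical ProjectVPC resource name
    namespace: default     # optional
```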
*Appears on `spec`.*
Service integrations to specify when creating a service. Not applied after initial service creation.

**Required**

- `integrationType` (string, Enum: `read_replica`).
- `sourceServiceName` (string, MinLength: 1, MaxLength: 64).
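A sketch of how this might look in a service spec at creation time; the source service name is hypothetical:

```yaml
spec:
  serviceIntegrations:
    - integrationType: read_replica
      sourceServiceName: my-source-service   # hypothetical source service
```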
*Appears on `spec`.*
Cassandra specific user configuration options.

**Optional**

- `additional_backup_regions` (array of strings, MaxItems: 1). Deprecated. Additional Cloud Regions for Backup Replication.
- `backup_hour` (integer, Minimum: 0, Maximum: 23). The hour of day (in UTC) when backup for the service is started. New backup is only started if previous backup has already completed.
- `backup_minute` (integer, Minimum: 0, Maximum: 59). The minute of an hour when backup for the service is started. New backup is only started if previous backup has already completed.
- `cassandra` (object). cassandra configuration values. See below for nested schema.
- `cassandra_version` (string, Enum: `4`, `3`). Cassandra major version.
- `ip_filter` (array of objects, MaxItems: 1024). Allow incoming connections from CIDR address block, e.g. `10.20.0.0/16`. See below for nested schema.
- `migrate_sstableloader` (boolean). Sets the service into migration mode enabling the sstableloader utility to be used to upload Cassandra data files. Available only on service create.
- `private_access` (object). Allow access to selected service ports from private networks. See below for nested schema.
- `project_to_fork_from` (string, Immutable, MaxLength: 63). Name of another project to fork a service from. This has effect only when a new service is being created.
- `public_access` (object). Allow access to selected service ports from the public Internet. See below for nested schema.
- `service_log` (boolean). Store logs for the service so that they are available in the HTTP API and console.
- `service_to_fork_from` (string, Immutable, MaxLength: 64). Name of another service to fork from. This has effect only when a new service is being created.
- `service_to_join_with` (string, MaxLength: 64). When bootstrapping, instead of creating a new Cassandra cluster try to join an existing one from another service. Can only be set on service creation.
- `static_ips` (boolean). Use static public IP addresses.
*Appears on `spec.userConfig`.*
cassandra configuration values.

**Optional**

- `batch_size_fail_threshold_in_kb` (integer, Minimum: 1, Maximum: 1000000). Fail any multiple-partition batch exceeding this value. 50kb (10x warn threshold) by default.
- `batch_size_warn_threshold_in_kb` (integer, Minimum: 1, Maximum: 1000000). Log a warning message on any multiple-partition batch size exceeding this value. 5kb per batch by default. Caution should be taken on increasing the size of this threshold as it can lead to node instability.
- `datacenter` (string, MaxLength: 128). Name of the datacenter to which nodes of this service belong. Can be set only when creating the service.
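For illustration, a hedged sketch of these options in a Cassandra spec; the values are illustrative (they keep the documented 10x warn-to-fail ratio and stay within bounds), and `my-dc` is a hypothetical datacenter name:

```yaml
spec:
  userConfig:
    cassandra:
      batch_size_warn_threshold_in_kb: 32
      batch_size_fail_threshold_in_kb: 320
      datacenter: my-dc   # can only be set when creating the service
```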
*Appears on `spec.userConfig`.*
Allow incoming connections from CIDR address block, e.g. `10.20.0.0/16`.

**Required**

- `network` (string, MaxLength: 43). CIDR address block.

**Optional**

- `description` (string, MaxLength: 1024). Description for IP filter list entry.
*Appears on `spec.userConfig`.*
Allow access to selected service ports from private networks.

**Required**

- `prometheus` (boolean). Allow clients to connect to prometheus with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.
*Appears on `spec.userConfig`.*
Allow access to selected service ports from the public Internet.

**Required**

- `prometheus` (boolean). Allow clients to connect to prometheus from the public internet for service nodes that are in a project VPC or another type of private network.

## Clickhouse

### Usage example

```yaml
apiVersion: aiven.io/v1alpha1
kind: Clickhouse
metadata:
  name: my-clickhouse
spec:
  authSecretRef:
    name: aiven-token
    key: token

  connInfoSecretTarget:
    name: clickhouse-secret
    prefix: MY_SECRET_PREFIX_
    annotations:
      foo: bar
    labels:
      baz: egg

  project: my-aiven-project
  cloudName: google-europe-west1
  plan: startup-16

  maintenanceWindowDow: friday
  maintenanceWindowTime: 23:00:00
```
"},{"location":"api-reference/clickhouse.html#Clickhouse","title":"Clickhouse","text":"Clickhouse is the Schema for the clickhouses API.
Required
apiVersion
(string). Value aiven.io/v1alpha1
.kind
(string). Value Clickhouse
.metadata
(object). Data that identifies the object, including a name
string and optional namespace
.spec
(object). ClickhouseSpec defines the desired state of Clickhouse. See below for nested schema.Appears on Clickhouse
.
ClickhouseSpec defines the desired state of Clickhouse.
**Required**

- `plan` (string, MaxLength: 128). Subscription plan.
- `project` (string, Immutable, MaxLength: 63, Format: `^[a-zA-Z0-9_-]*$`). Target project.

**Optional**

- `authSecretRef` (object). Authentication reference to Aiven token in a secret. See below for nested schema.
- `cloudName` (string, MaxLength: 256). Cloud the service runs in.
- `connInfoSecretTarget` (object). Information regarding secret creation. Exposed keys: `CLICKHOUSE_HOST`, `CLICKHOUSE_PORT`, `CLICKHOUSE_USER`, `CLICKHOUSE_PASSWORD`. See below for nested schema.
- `disk_space` (string, Format: `^[1-9][0-9]*(GiB|G)*`). The disk space of the service; possible values depend on the service type, the cloud provider and the project. Reducing will result in the service re-balancing.
- `maintenanceWindowDow` (string, Enum: `monday`, `tuesday`, `wednesday`, `thursday`, `friday`, `saturday`, `sunday`). Day of week when maintenance operations should be performed. One of monday, tuesday, wednesday, etc.
- `maintenanceWindowTime` (string, MaxLength: 8). Time of day when maintenance operations should be performed. UTC time in HH:mm:ss format.
- `projectVPCRef` (object). ProjectVPCRef reference to ProjectVPC resource to use its ID as ProjectVPCID automatically. See below for nested schema.
- `projectVpcId` (string, MaxLength: 36). Identifier of the VPC the service should be in, if any.
- `serviceIntegrations` (array of objects, Immutable, MaxItems: 1). Service integrations to specify when creating a service. Not applied after initial service creation. See below for nested schema.
- `tags` (object, AdditionalProperties: string). Tags are key-value pairs that allow you to categorize services.
- `terminationProtection` (boolean). Prevent service from being deleted. It is recommended to have this enabled for all services.
- `userConfig` (object). Clickhouse specific user configuration options. See below for nested schema.

*Appears on `spec`.*
Authentication reference to Aiven token in a secret.

**Required**

- `key` (string, MinLength: 1).
- `name` (string, MinLength: 1).

*Appears on `spec`.*
Information regarding secret creation. Exposed keys: `CLICKHOUSE_HOST`, `CLICKHOUSE_PORT`, `CLICKHOUSE_USER`, `CLICKHOUSE_PASSWORD`.

**Required**

- `name` (string). Name of the secret resource to be created. By default, it is equal to the resource name.

**Optional**

- `annotations` (object, AdditionalProperties: string). Annotations added to the secret.
- `labels` (object, AdditionalProperties: string). Labels added to the secret.
- `prefix` (string). Prefix for the secret's keys. Added "as is" without any transformations. By default, it is equal to the kind name in uppercase + underscore, e.g. `KAFKA_`, `REDIS_`, etc.

*Appears on `spec`.*
ProjectVPCRef reference to ProjectVPC resource to use its ID as ProjectVPCID automatically.

**Required**

- `name` (string, MinLength: 1).

**Optional**

- `namespace` (string, MinLength: 1).

*Appears on `spec`.*
Service integrations to specify when creating a service. Not applied after initial service creation.

**Required**

- `integrationType` (string, Enum: `read_replica`).
- `sourceServiceName` (string, MinLength: 1, MaxLength: 64).

*Appears on `spec`.*
Clickhouse specific user configuration options.

**Optional**

- `additional_backup_regions` (array of strings, MaxItems: 1). Additional Cloud Regions for Backup Replication.
- `ip_filter` (array of objects, MaxItems: 1024). Allow incoming connections from CIDR address block, e.g. `10.20.0.0/16`. See below for nested schema.
- `private_access` (object). Allow access to selected service ports from private networks. See below for nested schema.
- `privatelink_access` (object). Allow access to selected service components through Privatelink. See below for nested schema.
- `project_to_fork_from` (string, Immutable, MaxLength: 63). Name of another project to fork a service from. This has effect only when a new service is being created.
- `public_access` (object). Allow access to selected service ports from the public Internet. See below for nested schema.
- `service_log` (boolean). Store logs for the service so that they are available in the HTTP API and console.
- `service_to_fork_from` (string, Immutable, MaxLength: 64). Name of another service to fork from. This has effect only when a new service is being created.
- `static_ips` (boolean). Use static public IP addresses.

*Appears on `spec.userConfig`.*
Allow incoming connections from CIDR address block, e.g. `10.20.0.0/16`.

**Required**

- `network` (string, MaxLength: 43). CIDR address block.

**Optional**

- `description` (string, MaxLength: 1024). Description for IP filter list entry.

*Appears on `spec.userConfig`.*
Allow access to selected service ports from private networks.

**Optional**

- `clickhouse` (boolean). Allow clients to connect to clickhouse with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.
- `clickhouse_https` (boolean). Allow clients to connect to clickhouse_https with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.
- `clickhouse_mysql` (boolean). Allow clients to connect to clickhouse_mysql with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.
- `prometheus` (boolean). Allow clients to connect to prometheus with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.

*Appears on `spec.userConfig`.*
Allow access to selected service components through Privatelink.

**Optional**

- `clickhouse` (boolean). Enable clickhouse.
- `clickhouse_https` (boolean). Enable clickhouse_https.
- `clickhouse_mysql` (boolean). Enable clickhouse_mysql.
- `prometheus` (boolean). Enable prometheus.
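A sketch of what enabling some of these components could look like in a Clickhouse spec:

```yaml
spec:
  userConfig:
    privatelink_access:
      clickhouse: true
      clickhouse_https: true
      prometheus: true
```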
*Appears on `spec.userConfig`.*
Allow access to selected service ports from the public Internet.

**Optional**

- `clickhouse` (boolean). Allow clients to connect to clickhouse from the public internet for service nodes that are in a project VPC or another type of private network.
- `clickhouse_https` (boolean). Allow clients to connect to clickhouse_https from the public internet for service nodes that are in a project VPC or another type of private network.
- `clickhouse_mysql` (boolean). Allow clients to connect to clickhouse_mysql from the public internet for service nodes that are in a project VPC or another type of private network.
- `prometheus` (boolean). Allow clients to connect to prometheus from the public internet for service nodes that are in a project VPC or another type of private network.

## ClickhouseUser

### Usage example

```yaml
apiVersion: aiven.io/v1alpha1
kind: ClickhouseUser
metadata:
  name: my-clickhouse-user
spec:
  authSecretRef:
    name: aiven-token
    key: token

  connInfoSecretTarget:
    name: clickhouse-user-secret
    prefix: MY_SECRET_PREFIX_
    annotations:
      foo: bar
    labels:
      baz: egg

  project: my-aiven-project
  serviceName: my-clickhouse
```
"},{"location":"api-reference/clickhouseuser.html#ClickhouseUser","title":"ClickhouseUser","text":"ClickhouseUser is the Schema for the clickhouseusers API.
Required
apiVersion
(string). Value aiven.io/v1alpha1
.kind
(string). Value ClickhouseUser
.metadata
(object). Data that identifies the object, including a name
string and optional namespace
.spec
(object). ClickhouseUserSpec defines the desired state of ClickhouseUser. See below for nested schema.Appears on ClickhouseUser
.
ClickhouseUserSpec defines the desired state of ClickhouseUser.
**Required**

- `project` (string, Immutable, MaxLength: 63, Format: `^[a-zA-Z0-9_-]*$`). Project to link the user to.
- `serviceName` (string, Immutable, MaxLength: 63). Service to link the user to.

**Optional**

- `authSecretRef` (object). Authentication reference to Aiven token in a secret. See below for nested schema.
- `connInfoSecretTarget` (object). Information regarding secret creation. Exposed keys: `CLICKHOUSEUSER_HOST`, `CLICKHOUSEUSER_PORT`, `CLICKHOUSEUSER_USER`, `CLICKHOUSEUSER_PASSWORD`. See below for nested schema.

*Appears on `spec`.*
Authentication reference to Aiven token in a secret.

**Required**

- `key` (string, MinLength: 1).
- `name` (string, MinLength: 1).

*Appears on `spec`.*
Information regarding secret creation. Exposed keys: `CLICKHOUSEUSER_HOST`, `CLICKHOUSEUSER_PORT`, `CLICKHOUSEUSER_USER`, `CLICKHOUSEUSER_PASSWORD`.

**Required**

- `name` (string). Name of the secret resource to be created. By default, it is equal to the resource name.

**Optional**

- `annotations` (object, AdditionalProperties: string). Annotations added to the secret.
- `labels` (object, AdditionalProperties: string). Labels added to the secret.
- `prefix` (string). Prefix for the secret's keys. Added "as is" without any transformations. By default, it is equal to the kind name in uppercase + underscore, e.g. `KAFKA_`, `REDIS_`, etc.

## ConnectionPool

### Usage example

```yaml
apiVersion: aiven.io/v1alpha1
kind: ConnectionPool
metadata:
  name: my-connection-pool
spec:
  authSecretRef:
    name: aiven-token
    key: token

  connInfoSecretTarget:
    name: connection-pool-secret
    prefix: MY_SECRET_PREFIX_
    annotations:
      foo: bar
    labels:
      baz: egg

  project: aiven-project-name
  serviceName: my-postgresql
  databaseName: my-db
  username: my-user
  poolMode: transaction
  poolSize: 25
```
"},{"location":"api-reference/connectionpool.html#ConnectionPool","title":"ConnectionPool","text":"ConnectionPool is the Schema for the connectionpools API.
Required
apiVersion
(string). Value aiven.io/v1alpha1
.kind
(string). Value ConnectionPool
.metadata
(object). Data that identifies the object, including a name
string and optional namespace
.spec
(object). ConnectionPoolSpec defines the desired state of ConnectionPool. See below for nested schema.Appears on ConnectionPool
.
ConnectionPoolSpec defines the desired state of ConnectionPool.
**Required**

- `databaseName` (string, MaxLength: 40). Name of the database the pool connects to.
- `project` (string, MaxLength: 63, Format: `^[a-zA-Z0-9_-]*$`). Target project.
- `serviceName` (string, MaxLength: 63). Service name.
- `username` (string, MaxLength: 64). Name of the service user used to connect to the database.

**Optional**

- `authSecretRef` (object). Authentication reference to Aiven token in a secret. See below for nested schema.
- `connInfoSecretTarget` (object). Information regarding secret creation. Exposed keys: `CONNECTIONPOOL_HOST`, `CONNECTIONPOOL_PORT`, `CONNECTIONPOOL_DATABASE`, `CONNECTIONPOOL_USER`, `CONNECTIONPOOL_PASSWORD`, `CONNECTIONPOOL_SSLMODE`, `CONNECTIONPOOL_DATABASE_URI`. See below for nested schema.
- `poolMode` (string, Enum: `session`, `transaction`, `statement`). Mode the pool operates in (session, transaction, statement).
- `poolSize` (integer). Number of connections the pool may create towards the backend server.

*Appears on `spec`.*
Authentication reference to Aiven token in a secret.

**Required**

- `key` (string, MinLength: 1).
- `name` (string, MinLength: 1).

*Appears on `spec`.*
Information regarding secret creation. Exposed keys: `CONNECTIONPOOL_HOST`, `CONNECTIONPOOL_PORT`, `CONNECTIONPOOL_DATABASE`, `CONNECTIONPOOL_USER`, `CONNECTIONPOOL_PASSWORD`, `CONNECTIONPOOL_SSLMODE`, `CONNECTIONPOOL_DATABASE_URI`.

**Required**

- `name` (string). Name of the secret resource to be created. By default, it is equal to the resource name.

**Optional**

- `annotations` (object, AdditionalProperties: string). Annotations added to the secret.
- `labels` (object, AdditionalProperties: string). Labels added to the secret.
- `prefix` (string). Prefix for the secret's keys. Added "as is" without any transformations. By default, it is equal to the kind name in uppercase + underscore, e.g. `KAFKA_`, `REDIS_`, etc.

## Database

### Usage example

```yaml
apiVersion: aiven.io/v1alpha1
kind: Database
metadata:
  name: my-db
spec:
  authSecretRef:
    name: aiven-token
    key: token

  project: aiven-project-name
  serviceName: my-postgresql

  lcCtype: en_US.UTF-8
  lcCollate: en_US.UTF-8
```
"},{"location":"api-reference/database.html#Database","title":"Database","text":"Database is the Schema for the databases API.
Required
apiVersion
(string). Value aiven.io/v1alpha1
.kind
(string). Value Database
.metadata
(object). Data that identifies the object, including a name
string and optional namespace
.spec
(object). DatabaseSpec defines the desired state of Database. See below for nested schema.Appears on Database
.
DatabaseSpec defines the desired state of Database.
**Required**

- `project` (string, MaxLength: 63, Format: `^[a-zA-Z0-9_-]*$`). Project to link the database to.
- `serviceName` (string, MaxLength: 63). PostgreSQL service to link the database to.

**Optional**

- `authSecretRef` (object). Authentication reference to Aiven token in a secret. See below for nested schema.
- `lcCollate` (string, MaxLength: 128). Default string sort order (LC_COLLATE) of the database. Default value: en_US.UTF-8.
- `lcCtype` (string, MaxLength: 128). Default character classification (LC_CTYPE) of the database. Default value: en_US.UTF-8.
- `terminationProtection` (boolean). Kubernetes-side deletion protection, which prevents the database from being deleted by Kubernetes. It is recommended to enable this for any production databases containing critical data.

*Appears on `spec`.*
Authentication reference to Aiven token in a secret.

**Required**

- `key` (string, MinLength: 1).
- `name` (string, MinLength: 1).

## Grafana

### Usage example

```yaml
apiVersion: aiven.io/v1alpha1
kind: Grafana
metadata:
  name: my-grafana
spec:
  authSecretRef:
    name: aiven-token
    key: token

  connInfoSecretTarget:
    name: grafana-secret
    prefix: MY_SECRET_PREFIX_
    annotations:
      foo: bar
    labels:
      baz: egg

  project: my-aiven-project
  cloudName: google-europe-west1
  plan: startup-1

  maintenanceWindowDow: sunday
  maintenanceWindowTime: 11:00:00

  userConfig:
    public_access:
      grafana: true
    ip_filter:
      - network: 0.0.0.0
        description: whatever
      - network: 10.20.0.0/16
```
"},{"location":"api-reference/grafana.html#Grafana","title":"Grafana","text":"Grafana is the Schema for the grafanas API.
Required
apiVersion
(string). Value aiven.io/v1alpha1
.kind
(string). Value Grafana
.metadata
(object). Data that identifies the object, including a name
string and optional namespace
.spec
(object). GrafanaSpec defines the desired state of Grafana. See below for nested schema.Appears on Grafana
.
GrafanaSpec defines the desired state of Grafana.
**Required**

- `plan` (string, MaxLength: 128). Subscription plan.
- `project` (string, Immutable, MaxLength: 63, Format: `^[a-zA-Z0-9_-]*$`). Target project.

**Optional**

- `authSecretRef` (object). Authentication reference to Aiven token in a secret. See below for nested schema.
- `cloudName` (string, MaxLength: 256). Cloud the service runs in.
- `connInfoSecretTarget` (object). Information regarding secret creation. Exposed keys: `GRAFANA_HOST`, `GRAFANA_PORT`, `GRAFANA_USER`, `GRAFANA_PASSWORD`, `GRAFANA_URI`, `GRAFANA_HOSTS`. See below for nested schema.
- `disk_space` (string, Format: `^[1-9][0-9]*(GiB|G)*`). The disk space of the service; possible values depend on the service type, the cloud provider and the project. Reducing will result in the service re-balancing.
- `maintenanceWindowDow` (string, Enum: `monday`, `tuesday`, `wednesday`, `thursday`, `friday`, `saturday`, `sunday`). Day of week when maintenance operations should be performed. One of monday, tuesday, wednesday, etc.
- `maintenanceWindowTime` (string, MaxLength: 8). Time of day when maintenance operations should be performed. UTC time in HH:mm:ss format.
- `projectVPCRef` (object). ProjectVPCRef reference to ProjectVPC resource to use its ID as ProjectVPCID automatically. See below for nested schema.
- `projectVpcId` (string, MaxLength: 36). Identifier of the VPC the service should be in, if any.
- `serviceIntegrations` (array of objects, Immutable, MaxItems: 1). Service integrations to specify when creating a service. Not applied after initial service creation. See below for nested schema.
- `tags` (object, AdditionalProperties: string). Tags are key-value pairs that allow you to categorize services.
- `terminationProtection` (boolean). Prevent service from being deleted. It is recommended to have this enabled for all services.
- `userConfig` (object). Grafana specific user configuration options. See below for nested schema.

*Appears on `spec`.*
Authentication reference to Aiven token in a secret.

**Required**

- `key` (string, MinLength: 1).
- `name` (string, MinLength: 1).

*Appears on `spec`.*
Information regarding secret creation. Exposed keys: `GRAFANA_HOST`, `GRAFANA_PORT`, `GRAFANA_USER`, `GRAFANA_PASSWORD`, `GRAFANA_URI`, `GRAFANA_HOSTS`.

**Required**

- `name` (string). Name of the secret resource to be created. By default, it is equal to the resource name.

**Optional**

- `annotations` (object, AdditionalProperties: string). Annotations added to the secret.
- `labels` (object, AdditionalProperties: string). Labels added to the secret.
- `prefix` (string). Prefix for the secret's keys. Added "as is" without any transformations. By default, it is equal to the kind name in uppercase + underscore, e.g. `KAFKA_`, `REDIS_`, etc.

*Appears on `spec`.*
ProjectVPCRef reference to ProjectVPC resource to use its ID as ProjectVPCID automatically.

**Required**

- `name` (string, MinLength: 1).

**Optional**

- `namespace` (string, MinLength: 1).

*Appears on `spec`.*
Service integrations to specify when creating a service. Not applied after initial service creation.

**Required**

- `integrationType` (string, Enum: `read_replica`).
- `sourceServiceName` (string, MinLength: 1, MaxLength: 64).

*Appears on `spec`.*
Grafana specific user configuration options.

**Optional**

- `additional_backup_regions` (array of strings, MaxItems: 1). Additional Cloud Regions for Backup Replication.
- `alerting_enabled` (boolean). Enable or disable Grafana legacy alerting functionality. This should not be enabled with unified_alerting_enabled.
- `alerting_error_or_timeout` (string, Enum: `alerting`, `keep_state`). Default error or timeout setting for new alerting rules.
- `alerting_max_annotations_to_keep` (integer, Minimum: 0, Maximum: 1000000). Max number of alert annotations that Grafana stores. 0 (default) keeps all alert annotations.
- `alerting_nodata_or_nullvalues` (string, Enum: `alerting`, `no_data`, `keep_state`, `ok`). Default value for 'no data or null values' for new alerting rules.
- `allow_embedding` (boolean). Allow embedding Grafana dashboards with iframe/frame/object/embed tags. Disabled by default to limit impact of clickjacking.
- `auth_azuread` (object). Azure AD OAuth integration. See below for nested schema.
- `auth_basic_enabled` (boolean). Enable or disable basic authentication form, used by Grafana built-in login.
- `auth_generic_oauth` (object). Generic OAuth integration. See below for nested schema.
- `auth_github` (object). Github Auth integration. See below for nested schema.
- `auth_gitlab` (object). GitLab Auth integration. See below for nested schema.
- `auth_google` (object). Google Auth integration. See below for nested schema.
- `cookie_samesite` (string, Enum: `lax`, `strict`, `none`). Cookie SameSite attribute: `strict` prevents sending cookie for cross-site requests, effectively disabling direct linking from other sites to Grafana. `lax` is the default value.
- `custom_domain` (string, MaxLength: 255). Serve the web frontend using a custom CNAME pointing to the Aiven DNS name.
- `dashboard_previews_enabled` (boolean). This feature is new in Grafana 9 and is quite resource intensive. It may cause low-end plans to work more slowly while the dashboard previews are rendering.
- `dashboards_min_refresh_interval` (string, Pattern: `^[0-9]+(ms|s|m|h|d)$`, MaxLength: 16). Signed sequence of decimal numbers, followed by a unit suffix (ms, s, m, h, d), e.g. 30s, 1h.
- `dashboards_versions_to_keep` (integer, Minimum: 1, Maximum: 100). Dashboard versions to keep per dashboard.
- `dataproxy_send_user_header` (boolean). Send `X-Grafana-User` header to data source.
- `dataproxy_timeout` (integer, Minimum: 15, Maximum: 90). Timeout for data proxy requests in seconds.
- `date_formats` (object). Grafana date format specifications. See below for nested schema.
- `disable_gravatar` (boolean). Set to true to disable gravatar. Defaults to false (gravatar is enabled).
- `editors_can_admin` (boolean). Editors can manage folders, teams and dashboards created by them.
- `external_image_storage` (object). External image store settings. See below for nested schema.
- `google_analytics_ua_id` (string, Pattern: `^(G|UA|YT|MO)-[a-zA-Z0-9-]+$`, MaxLength: 64). Google Analytics ID.
- `ip_filter` (array of objects, MaxItems: 1024). Allow incoming connections from CIDR address block, e.g. `10.20.0.0/16`. See below for nested schema.
- `metrics_enabled` (boolean). Enable Grafana /metrics endpoint.
- `oauth_allow_insecure_email_lookup` (boolean). Enforce user lookup based on email instead of the unique ID provided by the IdP.
- `private_access` (object). Allow access to selected service ports from private networks. See below for nested schema.
- `privatelink_access` (object). Allow access to selected service components through Privatelink. See below for nested schema.
- `project_to_fork_from` (string, Immutable, MaxLength: 63). Name of another project to fork a service from. This has effect only when a new service is being created.
- `public_access` (object). Allow access to selected service ports from the public Internet. See below for nested schema.
- `recovery_basebackup_name` (string, Pattern: `^[a-zA-Z0-9-_:.]+$`, MaxLength: 128). Name of the basebackup to restore in forked service.
- `service_log` (boolean). Store logs for the service so that they are available in the HTTP API and console.
- `service_to_fork_from` (string, Immutable, MaxLength: 64). Name of another service to fork from. This has effect only when a new service is being created.
- `smtp_server` (object). SMTP server settings. See below for nested schema.
- `static_ips` (boolean). Use static public IP addresses.
- `unified_alerting_enabled` (boolean). Enable or disable Grafana unified alerting functionality. By default this is enabled and any legacy alerts will be migrated on upgrade to Grafana 9+. To stay on legacy alerting, set unified_alerting_enabled to false and alerting_enabled to true. See https://grafana.com/docs/grafana/latest/alerting/set-up/migrating-alerts/ for more details.
- `user_auto_assign_org` (boolean). Auto-assign new users on signup to main organization. Defaults to false.
- `user_auto_assign_org_role` (string, Enum: `Viewer`, `Admin`, `Editor`). Set role for new signups. Defaults to Viewer.
- `viewers_can_edit` (boolean). Users with view-only permission can edit but not save dashboards.

*Appears on `spec.userConfig`.*
Azure AD OAuth integration.

**Required**

- `auth_url` (string, MaxLength: 2048). Authorization URL.
- `client_id` (string, Pattern: `^[\040-\176]+$`, MaxLength: 1024). Client ID from provider.
- `client_secret` (string, Pattern: `^[\040-\176]+$`, MaxLength: 1024). Client secret from provider.
- `token_url` (string, MaxLength: 2048). Token URL.

**Optional**

- `allow_sign_up` (boolean). Automatically sign-up users on successful sign-in.
- `allowed_domains` (array of strings, MaxItems: 50). Allowed domains.
- `allowed_groups` (array of strings, MaxItems: 50). Require users to belong to one of given groups.

*Appears on `spec.userConfig`.*
Generic OAuth integration.

**Required**

- `api_url` (string, MaxLength: 2048). API URL.
- `auth_url` (string, MaxLength: 2048). Authorization URL.
- `client_id` (string, Pattern: `^[\040-\176]+$`, MaxLength: 1024). Client ID from provider.
- `client_secret` (string, Pattern: `^[\040-\176]+$`, MaxLength: 1024). Client secret from provider.
- `token_url` (string, MaxLength: 2048). Token URL.

**Optional**

- `allow_sign_up` (boolean). Automatically sign-up users on successful sign-in.
- `allowed_domains` (array of strings, MaxItems: 50). Allowed domains.
- `allowed_organizations` (array of strings, MaxItems: 50). Require user to be member of one of the listed organizations.
- `auto_login` (boolean). Allow users to bypass the login screen and automatically log in.
- `name` (string, Pattern: `^[a-zA-Z0-9_\- ]+$`, MaxLength: 128). Name of the OAuth integration.
- `scopes` (array of strings, MaxItems: 50). OAuth scopes.
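A hedged sketch of a generic OAuth block; all endpoints and credentials below are placeholders for a hypothetical identity provider:

```yaml
spec:
  userConfig:
    auth_generic_oauth:
      name: my-idp                                  # hypothetical IdP name
      client_id: my-client-id                       # placeholder
      client_secret: my-client-secret               # placeholder
      auth_url: https://idp.example.com/authorize
      token_url: https://idp.example.com/token
      api_url: https://idp.example.com/userinfo
      allow_sign_up: true
      scopes:
        - openid
        - email
```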
*Appears on `spec.userConfig`.*
Github Auth integration.

**Required**

- `client_id` (string, Pattern: `^[\040-\176]+$`, MaxLength: 1024). Client ID from provider.
- `client_secret` (string, Pattern: `^[\040-\176]+$`, MaxLength: 1024). Client secret from provider.

**Optional**

- `allow_sign_up` (boolean). Automatically sign-up users on successful sign-in.
- `allowed_organizations` (array of strings, MaxItems: 50). Require users to belong to one of given organizations.
- `team_ids` (array of integers, MaxItems: 50). Require users to belong to one of given team IDs.

*Appears on `spec.userConfig`.*
GitLab Auth integration.

**Required**

- `allowed_groups` (array of strings, MaxItems: 50). Require users to belong to one of given groups.
- `client_id` (string, Pattern: `^[\040-\176]+$`, MaxLength: 1024). Client ID from provider.
- `client_secret` (string, Pattern: `^[\040-\176]+$`, MaxLength: 1024). Client secret from provider.

**Optional**

- `allow_sign_up` (boolean). Automatically sign-up users on successful sign-in.
- `api_url` (string, MaxLength: 2048). API URL. This only needs to be set when using self hosted GitLab.
- `auth_url` (string, MaxLength: 2048). Authorization URL. This only needs to be set when using self hosted GitLab.
- `token_url` (string, MaxLength: 2048). Token URL. This only needs to be set when using self hosted GitLab.

*Appears on `spec.userConfig`.*
Google Auth integration.

**Required**

- `allowed_domains` (array of strings, MaxItems: 64). Domains allowed to sign-in to this Grafana.
- `client_id` (string, Pattern: `^[\040-\176]+$`, MaxLength: 1024). Client ID from provider.
- `client_secret` (string, Pattern: `^[\040-\176]+$`, MaxLength: 1024). Client secret from provider.

**Optional**

- `allow_sign_up` (boolean). Automatically sign-up users on successful sign-in.

*Appears on `spec.userConfig`.*
Grafana date format specifications.

**Optional**

- `default_timezone` (string, MaxLength: 64). Default time zone for user preferences. Value `browser` uses browser local time zone.
- `full_date` (string, MaxLength: 128). Moment.js style format string for cases where full date is shown.
- `interval_day` (string, MaxLength: 128). Moment.js style format string used when a time requiring day accuracy is shown.
- `interval_hour` (string, MaxLength: 128). Moment.js style format string used when a time requiring hour accuracy is shown.
- `interval_minute` (string, MaxLength: 128). Moment.js style format string used when a time requiring minute accuracy is shown.
- `interval_month` (string, MaxLength: 128). Moment.js style format string used when a time requiring month accuracy is shown.
- `interval_second` (string, MaxLength: 128). Moment.js style format string used when a time requiring second accuracy is shown.
- `interval_year` (string, MaxLength: 128). Moment.js style format string used when a time requiring year accuracy is shown.

*Appears on `spec.userConfig`.*
External image store settings.

**Required**

- `access_key` (string, Pattern: `^[A-Z0-9]+$`, MaxLength: 4096). S3 access key. Requires permissions to the S3 bucket for the s3:PutObject and s3:PutObjectAcl actions.
- `bucket_url` (string, MaxLength: 2048). Bucket URL for S3.
- `provider` (string, Enum: `s3`). Provider type.
- `secret_key` (string, Pattern: `^[A-Za-z0-9/+=]+$`, MaxLength: 4096). S3 secret key.
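For illustration, a sketch with placeholder S3 details; the bucket URL and keys are not real and merely satisfy the documented patterns:

```yaml
spec:
  userConfig:
    external_image_storage:
      provider: s3
      bucket_url: https://my-bucket.s3.amazonaws.com/   # hypothetical bucket
      access_key: AKIAEXAMPLEKEY123                     # placeholder
      secret_key: examplesecretkey/123+ABC              # placeholder
```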
*Appears on `spec.userConfig`.*
Allow incoming connections from CIDR address block, e.g. `10.20.0.0/16`.

**Required**

- `network` (string, MaxLength: 43). CIDR address block.

**Optional**

- `description` (string, MaxLength: 1024). Description for IP filter list entry.

*Appears on `spec.userConfig`.*
Allow access to selected service ports from private networks.

**Required**

- `grafana` (boolean). Allow clients to connect to grafana with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.

*Appears on `spec.userConfig`.*
Allow access to selected service components through Privatelink.

**Required**

- `grafana` (boolean). Enable grafana.

*Appears on `spec.userConfig`.*
Allow access to selected service ports from the public Internet.

**Required**

- `grafana` (boolean). Allow clients to connect to grafana from the public internet for service nodes that are in a project VPC or another type of private network.

*Appears on `spec.userConfig`.*
SMTP server settings.

**Required**

- `from_address` (string, MaxLength: 319). Address used for sending emails.
- `host` (string, MaxLength: 255). Server hostname or IP.
- `port` (integer, Minimum: 1, Maximum: 65535). SMTP server port.

**Optional**

- `from_name` (string, Pattern: `^[^\x00-\x1F]+$`, MaxLength: 128). Name used in outgoing emails, defaults to Grafana.
- `password` (string, Pattern: `^[^\x00-\x1F]+$`, MaxLength: 255). Password for SMTP authentication.
- `skip_verify` (boolean). Skip verifying server certificate. Defaults to false.
- `starttls_policy` (string, Enum: `OpportunisticStartTLS`, `MandatoryStartTLS`, `NoStartTLS`). Either OpportunisticStartTLS, MandatoryStartTLS or NoStartTLS. Default is OpportunisticStartTLS.
- `username` (string, Pattern: `^[^\x00-\x1F]+$`, MaxLength: 255). Username for SMTP authentication.
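A sketch of an SMTP block with placeholder server details and credentials:

```yaml
spec:
  userConfig:
    smtp_server:
      host: smtp.example.com            # placeholder server
      port: 587
      from_address: grafana@example.com
      username: smtp-user               # placeholder credentials
      password: smtp-password
      starttls_policy: MandatoryStartTLS
```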
"},{"location":"api-reference/kafka.html#Kafka","title":"Kafka","text":"Kafka is the Schema for the kafkas API.
Required
apiVersion
(string). Value aiven.io/v1alpha1
.kind
(string). Value Kafka
.metadata
(object). Data that identifies the object, including a name
string and optional namespace
.spec
(object). KafkaSpec defines the desired state of Kafka. See below for nested schema.Appears on Kafka
.
KafkaSpec defines the desired state of Kafka.
**Required**

- `plan` (string, MaxLength: 128). Subscription plan.
- `project` (string, Immutable, MaxLength: 63, Format: `^[a-zA-Z0-9_-]*$`). Target project.

**Optional**

- `authSecretRef` (object). Authentication reference to Aiven token in a secret. See below for nested schema.
- `cloudName` (string, MaxLength: 256). Cloud the service runs in.
- `connInfoSecretTarget` (object). Information regarding secret creation. Exposed keys: `KAFKA_HOST`, `KAFKA_PORT`, `KAFKA_USERNAME`, `KAFKA_PASSWORD`, `KAFKA_ACCESS_CERT`, `KAFKA_ACCESS_KEY`, `KAFKA_SASL_HOST`, `KAFKA_SASL_PORT`, `KAFKA_SCHEMA_REGISTRY_HOST`, `KAFKA_SCHEMA_REGISTRY_PORT`, `KAFKA_CONNECT_HOST`, `KAFKA_CONNECT_PORT`, `KAFKA_REST_HOST`, `KAFKA_REST_PORT`. See below for nested schema.
- `disk_space` (string, Format: `^[1-9][0-9]*(GiB|G)*`). The disk space of the service; possible values depend on the service type, the cloud provider and the project. Reducing will result in the service re-balancing.
- `karapace` (boolean). Switch the service to use Karapace for schema registry and REST proxy.
- `maintenanceWindowDow` (string, Enum: `monday`, `tuesday`, `wednesday`, `thursday`, `friday`, `saturday`, `sunday`). Day of week when maintenance operations should be performed. One of monday, tuesday, wednesday, etc.
- `maintenanceWindowTime` (string, MaxLength: 8). Time of day when maintenance operations should be performed. UTC time in HH:mm:ss format.
- `projectVPCRef` (object). ProjectVPCRef reference to ProjectVPC resource to use its ID as ProjectVPCID automatically. See below for nested schema.
- `projectVpcId` (string, MaxLength: 36). Identifier of the VPC the service should be in, if any.
- `serviceIntegrations` (array of objects, Immutable, MaxItems: 1). Service integrations to specify when creating a service. Not applied after initial service creation. See below for nested schema.
- `tags` (object, AdditionalProperties: string). Tags are key-value pairs that allow you to categorize services.
- `terminationProtection` (boolean). Prevent service from being deleted. It is recommended to have this enabled for all services.
- `userConfig` (object). Kafka specific user configuration options. See below for nested schema.

*Appears on `spec`.*
Authentication reference to Aiven token in a secret.

**Required**

- `key` (string, MinLength: 1).
- `name` (string, MinLength: 1).

*Appears on `spec`.*
Information regarding secret creation. Exposed keys: `KAFKA_HOST`, `KAFKA_PORT`, `KAFKA_USERNAME`, `KAFKA_PASSWORD`, `KAFKA_ACCESS_CERT`, `KAFKA_ACCESS_KEY`, `KAFKA_SASL_HOST`, `KAFKA_SASL_PORT`, `KAFKA_SCHEMA_REGISTRY_HOST`, `KAFKA_SCHEMA_REGISTRY_PORT`, `KAFKA_CONNECT_HOST`, `KAFKA_CONNECT_PORT`, `KAFKA_REST_HOST`, `KAFKA_REST_PORT`.

**Required**

- `name` (string). Name of the secret resource to be created. By default, it is equal to the resource name.

**Optional**

- `annotations` (object, AdditionalProperties: string). Annotations added to the secret.
- `labels` (object, AdditionalProperties: string). Labels added to the secret.
- `prefix` (string). Prefix for the secret's keys. Added "as is" without any transformations. By default, it is equal to the kind name in uppercase + underscore, e.g. `KAFKA_`, `REDIS_`, etc.

*Appears on `spec`.*
ProjectVPCRef reference to ProjectVPC resource to use its ID as ProjectVPCID automatically.

**Required**

- `name` (string, MinLength: 1).

**Optional**

- `namespace` (string, MinLength: 1).

*Appears on `spec`.*
Service integrations to specify when creating a service. Not applied after initial service creation.

**Required**

- `integrationType` (string, Enum: `read_replica`).
- `sourceServiceName` (string, MinLength: 1, MaxLength: 64).

*Appears on `spec`.*
Kafka specific user configuration options.

**Optional**

- `additional_backup_regions` (array of strings, MaxItems: 1). Additional Cloud Regions for Backup Replication.
- `aiven_kafka_topic_messages` (boolean). Allow access to read Kafka topic messages in the Aiven Console and REST API.
- `custom_domain` (string, MaxLength: 255). Serve the web frontend using a custom CNAME pointing to the Aiven DNS name.
- `ip_filter` (array of objects, MaxItems: 1024). Allow incoming connections from CIDR address block, e.g. `10.20.0.0/16`. See below for nested schema.
- `kafka` (object). Kafka broker configuration values. See below for nested schema.
- `kafka_authentication_methods` (object). Kafka authentication methods. See below for nested schema.
- `kafka_connect` (boolean). Enable Kafka Connect service.
- `kafka_connect_config` (object). Kafka Connect configuration values. See below for nested schema.
- `kafka_rest` (boolean). Enable Kafka-REST service.
- `kafka_rest_authorization` (boolean). Enable authorization in Kafka-REST service.
- `kafka_rest_config` (object). Kafka REST configuration. See below for nested schema.
- `kafka_version` (string, Enum: `3.3`, `3.1`, `3.4`, `3.5`, `3.6`). Kafka major version.
- `private_access` (object). Allow access to selected service ports from private networks. See below for nested schema.
- `privatelink_access` (object). Allow access to selected service components through Privatelink. See below for nested schema.
- `public_access` (object). Allow access to selected service ports from the public Internet. See below for nested schema.
- `schema_registry` (boolean). Enable Schema-Registry service.
- `schema_registry_config` (object). Schema Registry configuration. See below for nested schema.
- `service_log` (boolean). Store logs for the service so that they are available in the HTTP API and console.
- `static_ips` (boolean). Use static public IP addresses.
- `tiered_storage` (object). Tiered storage configuration. See below for nested schema.

*Appears on `spec.userConfig`.*
Allow incoming connections from CIDR address block, e.g. `10.20.0.0/16`.

**Required**

- `network` (string, MaxLength: 43). CIDR address block.

**Optional**

- `description` (string, MaxLength: 1024). Description for IP filter list entry.

*Appears on `spec.userConfig`.*
Kafka broker configuration values.

**Optional**

- `auto_create_topics_enable` (boolean). Enable auto creation of topics.
- `compression_type` (string, Enum: `gzip`, `snappy`, `lz4`, `zstd`, `uncompressed`, `producer`). Specify the final compression type for a given topic. This configuration accepts the standard compression codecs (`gzip`, `snappy`, `lz4`, `zstd`). It additionally accepts `uncompressed`, which is equivalent to no compression, and `producer`, which means retain the original compression codec set by the producer.
- `connections_max_idle_ms` (integer, Minimum: 1000, Maximum: 3600000). Idle connections timeout: the server socket processor threads close the connections that idle for longer than this.
- `default_replication_factor` (integer, Minimum: 1, Maximum: 10). Replication factor for autocreated topics.
- `group_initial_rebalance_delay_ms` (integer, Minimum: 0, Maximum: 300000). The amount of time, in milliseconds, the group coordinator will wait for more consumers to join a new group before performing the first rebalance. A longer delay means potentially fewer rebalances, but increases the time until processing begins. The default value for this is 3 seconds. During development and testing it might be desirable to set this to 0 in order to not delay test execution time.
- `group_max_session_timeout_ms` (integer, Minimum: 0, Maximum: 1800000). The maximum allowed session timeout for registered consumers. Longer timeouts give consumers more time to process messages in between heartbeats at the cost of a longer time to detect failures.
- `group_min_session_timeout_ms` (integer, Minimum: 0, Maximum: 60000). The minimum allowed session timeout for registered consumers. Longer timeouts give consumers more time to process messages in between heartbeats at the cost of a longer time to detect failures.
- `log_cleaner_delete_retention_ms` (integer, Minimum: 0, Maximum: 315569260000). How long delete records are retained.
- `log_cleaner_max_compaction_lag_ms` (integer, Minimum: 30000). The maximum amount of time a message will remain uncompacted. Only applicable for logs that are being compacted.
- `log_cleaner_min_cleanable_ratio` (number, Minimum: 0.2, Maximum: 0.9). Controls log compactor frequency. Larger value means more frequent compactions but also more space wasted for logs. Consider setting log.cleaner.max.compaction.lag.ms to enforce compactions sooner, instead of setting a very high value for this option.
- `log_cleaner_min_compaction_lag_ms` (integer, Minimum: 0). The minimum time a message will remain uncompacted in the log. Only applicable for logs that are being compacted.
- `log_cleanup_policy` (string, Enum: `delete`, `compact`, `compact,delete`). The default cleanup policy for segments beyond the retention window.
- `log_flush_interval_messages` (integer, Minimum: 1). The number of messages accumulated on a log partition before messages are flushed to disk.
- `log_flush_interval_ms` (integer, Minimum: 0). The maximum time in ms that a message in any topic is kept in memory before flushed to disk. If not set, the value in log.flush.scheduler.interval.ms is used.
- `log_index_interval_bytes` (integer, Minimum: 0, Maximum: 104857600). The interval with which Kafka adds an entry to the offset index.
- `log_index_size_max_bytes` (integer, Minimum: 1048576, Maximum: 104857600). The maximum size in bytes of the offset index.
- `log_local_retention_bytes` (integer, Minimum: -2). The maximum size of local log segments that can grow for a partition before it gets eligible for deletion. If set to -2, the value of log.retention.bytes is used. The effective value should always be less than or equal to the log.retention.bytes value.
- `log_local_retention_ms` (integer, Minimum: -2). The number of milliseconds to keep the local log segments before they get eligible for deletion. If set to -2, the value of log.retention.ms is used. The effective value should always be less than or equal to the log.retention.ms value.
- `log_message_downconversion_enable` (boolean). This configuration controls whether down-conversion of message formats is enabled to satisfy consume requests.
- `log_message_timestamp_difference_max_ms` (integer, Minimum: 0). The maximum difference allowed between the timestamp when a broker receives a message and the timestamp specified in the message.
- `log_message_timestamp_type` (string, Enum: `CreateTime`, `LogAppendTime`). Define whether the timestamp in the message is message create time or log append time.
- `log_preallocate` (boolean). Whether to preallocate the file when creating a new segment.
- `log_retention_bytes` (integer, Minimum: -1). The maximum size of the log before deleting messages.
- `log_retention_hours` (integer, Minimum: -1, Maximum: 2147483647). The number of hours to keep a log file before deleting it.
- `log_retention_ms` (integer, Minimum: -1). The number of milliseconds to keep a log file before deleting it. If not set, the value in log.retention.minutes is used. If set to -1, no time limit is applied.
- `log_roll_jitter_ms` (integer, Minimum: 0). The maximum jitter to subtract from logRollTimeMillis (in milliseconds). If not set, the value in log.roll.jitter.hours is used.
- `log_roll_ms` (integer, Minimum: 1). The maximum time before a new log segment is rolled out (in milliseconds).
- `log_segment_bytes` (integer, Minimum: 10485760, Maximum: 1073741824). The maximum size of a single log file.
- `log_segment_delete_delay_ms` (integer, Minimum: 0, Maximum: 3600000). The amount of time to wait before deleting a file from the filesystem.
- `max_connections_per_ip` (integer, Minimum: 256, Maximum: 2147483647). The maximum number of connections allowed from each ip address (defaults to 2147483647).
- `max_incremental_fetch_session_cache_slots` (integer, Minimum: 1000, Maximum: 10000). The maximum number of incremental fetch sessions that the broker will maintain.
- `message_max_bytes` (integer, Minimum: 0, Maximum: 100001200). The maximum size of message that the server can receive.
- `min_insync_replicas` (integer, Minimum: 1, Maximum: 7). When a producer sets acks to `all` (or `-1`), min.insync.replicas specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful.
- `num_partitions` (integer, Minimum: 1, Maximum: 1000). Number of partitions for autocreated topics.
- `offsets_retention_minutes` (integer, Minimum: 1, Maximum: 2147483647). Log retention window in minutes for offsets topic.
- `producer_purgatory_purge_interval_requests` (integer, Minimum: 10, Maximum: 10000). The purge interval (in number of requests) of the producer request purgatory (defaults to 1000).
- `replica_fetch_max_bytes` (integer, Minimum: 1048576, Maximum: 104857600). The number of bytes of messages to attempt to fetch for each partition (defaults to 1048576). This is not an absolute maximum; if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that progress can be made.
- `replica_fetch_response_max_bytes` (integer, Minimum: 10485760, Maximum: 1048576000). Maximum bytes expected for the entire fetch response (defaults to 10485760). Records are fetched in batches, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that progress can be made. As such, this is not an absolute maximum.
- `sasl_oauthbearer_expected_audience` (string, MaxLength: 128). The (optional) comma-delimited setting for the broker to use to verify that the JWT was issued for one of the expected audiences.
- `sasl_oauthbearer_expected_issuer` (string, MaxLength: 128). Optional setting for the broker to use to verify that the JWT was created by the expected issuer.
- `sasl_oauthbearer_jwks_endpoint_url` (string, MaxLength: 2048). OIDC JWKS endpoint URL. By setting this, the SASL SSL OAuth2/OIDC authentication is enabled. See also other options for SASL OAuth2/OIDC.
- `sasl_oauthbearer_sub_claim_name` (string, MaxLength: 128). Name of the scope from which to extract the subject claim from the JWT. Defaults to sub.
- `socket_request_max_bytes` (integer, Minimum: 10485760, Maximum: 209715200). The maximum number of bytes in a socket request (defaults to 104857600).
- `transaction_partition_verification_enable` (boolean). Enable verification that checks that the partition has been added to the transaction before writing transactional records to the partition.
- `transaction_remove_expired_transaction_cleanup_interval_ms` (integer, Minimum: 600000, Maximum: 3600000). The interval at which to remove transactions that have expired due to transactional.id.expiration.ms passing (defaults to 3600000 (1 hour)).
- `transaction_state_log_segment_bytes` (integer, Minimum: 1048576, Maximum: 2147483647). The transaction topic segment bytes should be kept relatively small in order to facilitate faster log compaction and cache loads (defaults to 104857600 (100 mebibytes)).
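For illustration, a hedged sketch of a few of these broker options in a Kafka spec; the values are illustrative and stay within the documented bounds:

```yaml
spec:
  userConfig:
    kafka:
      auto_create_topics_enable: false
      default_replication_factor: 3
      min_insync_replicas: 2
      log_retention_hours: 168     # keep logs for one week
      compression_type: producer   # retain the producer's compression codec
```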
*Appears on `spec.userConfig`.*
Kafka authentication methods.

**Optional**

- `certificate` (boolean). Enable certificate/SSL authentication.
- `sasl` (boolean). Enable SASL authentication.

*Appears on `spec.userConfig`.*
Kafka Connect configuration values.

**Optional**

- `connector_client_config_override_policy` (string, Enum: `None`, `All`). Defines what client configurations can be overridden by the connector. Default is None.
- `consumer_auto_offset_reset` (string, Enum: `earliest`, `latest`). What to do when there is no initial offset in Kafka or if the current offset does not exist any more on the server. Default is earliest.
- `consumer_fetch_max_bytes` (integer, Minimum: 1048576, Maximum: 104857600). Records are fetched in batches by the consumer, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that the consumer can make progress. As such, this is not an absolute maximum.
- `consumer_isolation_level` (string, Enum: `read_uncommitted`, `read_committed`). Transaction read isolation level. read_uncommitted is the default, but read_committed can be used if consume-exactly-once behavior is desired.
- `consumer_max_partition_fetch_bytes` (integer, Minimum: 1048576, Maximum: 104857600). Records are fetched in batches by the consumer. If the first record batch in the first non-empty partition of the fetch is larger than this limit, the batch will still be returned to ensure that the consumer can make progress.
- `consumer_max_poll_interval_ms` (integer, Minimum: 1, Maximum: 2147483647). The maximum delay in milliseconds between invocations of poll() when using consumer group management (defaults to 300000).
- `consumer_max_poll_records` (integer, Minimum: 1, Maximum: 10000). The maximum number of records returned in a single call to poll() (defaults to 500).
- `offset_flush_interval_ms` (integer, Minimum: 1, Maximum: 100000000). The interval at which to try committing offsets for tasks (defaults to 60000).
- `offset_flush_timeout_ms` (integer, Minimum: 1, Maximum: 2147483647). Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data to be committed in a future attempt (defaults to 5000).
- `producer_batch_size` (integer, Minimum: 0, Maximum: 5242880). This setting gives the upper bound of the batch size to be sent. If there are fewer than this many bytes accumulated for this partition, the producer will linger for the linger.ms time waiting for more records to show up. A batch size of zero will disable batching entirely (defaults to 16384).
- `producer_buffer_memory` (integer, Minimum: 5242880, Maximum: 134217728). The total bytes of memory the producer can use to buffer records waiting to be sent to the broker (defaults to 33554432).
- `producer_compression_type` (string, Enum: `gzip`, `snappy`, `lz4`, `zstd`, `none`). Specify the default compression type for producers. This configuration accepts the standard compression codecs (`gzip`, `snappy`, `lz4`, `zstd`). It additionally accepts `none`, which is the default and equivalent to no compression.
- `producer_linger_ms` (integer, Minimum: 0, Maximum: 5000). This setting gives the upper bound on the delay for batching: once there is batch.size worth of records for a partition it will be sent immediately regardless of this setting; however, if there are fewer than this many bytes accumulated for this partition, the producer will linger for the specified time waiting for more records to show up. Defaults to 0.
- `producer_max_request_size` (integer, Minimum: 131072, Maximum: 67108864). This setting will limit the number of record batches the producer will send in a single request to avoid sending huge requests.
- `scheduled_rebalance_max_delay_ms` (integer, Minimum: 0, Maximum: 600000). The maximum delay that is scheduled in order to wait for the return of one or more departed workers before rebalancing and reassigning their connectors and tasks to the group. During this period the connectors and tasks of the departed workers remain unassigned. Defaults to 5 minutes.
- `session_timeout_ms` (integer, Minimum: 1, Maximum: 2147483647). The timeout in milliseconds used to detect failures when using Kafka's group management facilities (defaults to 10000).

*Appears on `spec.userConfig`.*
Kafka REST configuration.

**Optional**

- `consumer_enable_auto_commit` (boolean). If true, the consumer's offset will be periodically committed to Kafka in the background.
- `consumer_request_max_bytes` (integer, Minimum: 0, Maximum: 671088640). Maximum number of bytes in unencoded message keys and values by a single request.
- `consumer_request_timeout_ms` (integer, Enum: `1000`, `15000`, `30000`, Minimum: 1000, Maximum: 30000). The maximum total time to wait for messages for a request if the maximum number of messages has not yet been reached.
- `name_strategy_validation` (boolean). If true, validate that the given schema is registered under the expected subject name by the used name strategy when producing messages.
- `producer_acks` (string, Enum: `all`, `-1`, `0`, `1`). The number of acknowledgments the producer requires the leader to have received before considering a request complete. If set to `all` or `-1`, the leader will wait for the full set of in-sync replicas to acknowledge the record.
- `producer_compression_type` (string, Enum: `gzip`, `snappy`, `lz4`, `zstd`, `none`). Specify the default compression type for producers. This configuration accepts the standard compression codecs (`gzip`, `snappy`, `lz4`, `zstd`). It additionally accepts `none`, which is the default and equivalent to no compression.
- `producer_linger_ms` (integer, Minimum: 0, Maximum: 5000). Wait for up to the given delay to allow batching records together.
- `producer_max_request_size` (integer, Minimum: 0, Maximum: 2147483647). The maximum size of a request in bytes. Note that the Kafka broker can also cap the record batch size.
- `simpleconsumer_pool_size_max` (integer, Minimum: 10, Maximum: 250). Maximum number of SimpleConsumers that can be instantiated per broker.

*Appears on `spec.userConfig`.*
Allow access to selected service ports from private networks.

**Optional**

- `kafka` (boolean). Allow clients to connect to kafka with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.
- `kafka_connect` (boolean). Allow clients to connect to kafka_connect with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.
- `kafka_rest` (boolean). Allow clients to connect to kafka_rest with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.
- `prometheus` (boolean). Allow clients to connect to prometheus with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.
- `schema_registry` (boolean). Allow clients to connect to schema_registry with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.

*Appears on `spec.userConfig`.*
Allow access to selected service components through Privatelink.

**Optional**

- `jolokia` (boolean). Enable jolokia.
- `kafka` (boolean). Enable kafka.
- `kafka_connect` (boolean). Enable kafka_connect.
- `kafka_rest` (boolean). Enable kafka_rest.
- `prometheus` (boolean). Enable prometheus.
- `schema_registry` (boolean). Enable schema_registry.

*Appears on `spec.userConfig`.*
Allow access to selected service ports from the public Internet.

**Optional**

- `kafka` (boolean). Allow clients to connect to kafka from the public internet for service nodes that are in a project VPC or another type of private network.
- `kafka_connect` (boolean). Allow clients to connect to kafka_connect from the public internet for service nodes that are in a project VPC or another type of private network.
- `kafka_rest` (boolean). Allow clients to connect to kafka_rest from the public internet for service nodes that are in a project VPC or another type of private network.
- `prometheus` (boolean). Allow clients to connect to prometheus from the public internet for service nodes that are in a project VPC or another type of private network.
- `schema_registry` (boolean). Allow clients to connect to schema_registry from the public internet for service nodes that are in a project VPC or another type of private network.

*Appears on `spec.userConfig`.*
Appears on `spec.userConfig`.

Schema Registry configuration.

Optional

- `leader_eligibility` (boolean). If true, Karapace / Schema Registry on the service nodes can participate in leader election. It might be needed to disable this when the schemas topic is replicated to a secondary cluster and Karapace / Schema Registry there must not participate in leader election. Defaults to `true`.
- `topic_name` (string, MinLength: 1, MaxLength: 249). The durable single partition topic that acts as the durable log for the data. This topic must be compacted to avoid losing data due to retention policy. Please note that changing this configuration in an existing Schema Registry / Karapace setup leads to previous schemas being inaccessible, data encoded with them potentially unreadable and schema ID sequence put out of order. It's only possible to do the switch while Schema Registry / Karapace is disabled. Defaults to `_schemas`.
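On a Kafka resource these options sit under `userConfig.schema_registry_config`. A minimal sketch; the `schema_registry` toggle shown here is an assumption about the enclosing Kafka user config, and the values are illustrative:

```yaml
spec:
  userConfig:
    schema_registry: true        # assumed toggle enabling Karapace Schema Registry
    schema_registry_config:
      leader_eligibility: true
      topic_name: _schemas       # the default schemas topic
```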
Appears on `spec.userConfig`.

Tiered storage configuration.

Optional

- `enabled` (boolean). Whether to enable the tiered storage functionality.
- `local_cache` (object). Deprecated. Local cache configuration. See below for nested schema.
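Enabling tiered storage is then a single flag on the Kafka resource; a minimal illustrative sketch:

```yaml
spec:
  userConfig:
    tiered_storage:
      enabled: true  # offload older log segments to tiered storage
```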
Appears on `spec.userConfig.tiered_storage`.

Deprecated. Local cache configuration.

Required

- `size` (integer, Minimum: 1, Maximum: 107374182400). Deprecated. Local cache size in bytes.

```yaml
apiVersion: aiven.io/v1alpha1
kind: KafkaACL
metadata:
  name: my-kafka-acl
spec:
  authSecretRef:
    name: aiven-token
    key: token

  project: my-aiven-project
  serviceName: my-kafka
  topic: my-topic
  username: my-user
  permission: admin
```

KafkaACL is the Schema for the kafkaacls API.

Required

- `apiVersion` (string). Value `aiven.io/v1alpha1`.
- `kind` (string). Value `KafkaACL`.
- `metadata` (object). Data that identifies the object, including a `name` string and optional `namespace`.
- `spec` (object). KafkaACLSpec defines the desired state of KafkaACL. See below for nested schema.

Appears on `KafkaACL`.

KafkaACLSpec defines the desired state of KafkaACL.

Required

- `permission` (string, Enum: `admin`, `read`, `readwrite`, `write`). Kafka permission to grant (admin, read, readwrite, write).
- `project` (string, MaxLength: 63, Format: `^[a-zA-Z0-9_-]*$`). Project to link the Kafka ACL to.
- `serviceName` (string, MaxLength: 63). Service to link the Kafka ACL to.
- `topic` (string). Topic name pattern for the ACL entry.
- `username` (string). Username pattern for the ACL entry.

Optional

- `authSecretRef` (object). Authentication reference to Aiven token in a secret. See below for nested schema.
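For comparison, a hedged read-only variant (names are illustrative; the glob-style patterns are an assumption about how the `topic` and `username` pattern fields are matched):

```yaml
apiVersion: aiven.io/v1alpha1
kind: KafkaACL
metadata:
  name: analyst-read-logs
spec:
  authSecretRef:
    name: aiven-token
    key: token
  project: my-aiven-project
  serviceName: my-kafka
  topic: logs-*        # topic name pattern (assumed glob)
  username: analyst-*  # username pattern (assumed glob)
  permission: read
```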
Appears on `spec`.

Authentication reference to Aiven token in a secret.

Required

- `key` (string, MinLength: 1).
- `name` (string, MinLength: 1).

```yaml
apiVersion: aiven.io/v1alpha1
kind: KafkaConnect
metadata:
  name: my-kafka-connect
spec:
  authSecretRef:
    name: aiven-token
    key: token

  project: my-aiven-project
  cloudName: google-europe-west1
  plan: business-4

  userConfig:
    kafka_connect:
      consumer_isolation_level: read_committed
    public_access:
      kafka_connect: true
```

KafkaConnect is the Schema for the kafkaconnects API.

Required

- `apiVersion` (string). Value `aiven.io/v1alpha1`.
- `kind` (string). Value `KafkaConnect`.
- `metadata` (object). Data that identifies the object, including a `name` string and optional `namespace`.
- `spec` (object). KafkaConnectSpec defines the desired state of KafkaConnect. See below for nested schema.

Appears on `KafkaConnect`.

KafkaConnectSpec defines the desired state of KafkaConnect.

Required

- `plan` (string, MaxLength: 128). Subscription plan.
- `project` (string, Immutable, MaxLength: 63, Format: `^[a-zA-Z0-9_-]*$`). Target project.

Optional

- `authSecretRef` (object). Authentication reference to Aiven token in a secret. See below for nested schema.
- `cloudName` (string, MaxLength: 256). Cloud the service runs in.
- `maintenanceWindowDow` (string, Enum: `monday`, `tuesday`, `wednesday`, `thursday`, `friday`, `saturday`, `sunday`). Day of week when maintenance operations should be performed. One of: monday, tuesday, wednesday, etc.
- `maintenanceWindowTime` (string, MaxLength: 8). Time of day when maintenance operations should be performed. UTC time in HH:mm:ss format.
- `projectVPCRef` (object). ProjectVPCRef reference to ProjectVPC resource to use its ID as ProjectVPCID automatically. See below for nested schema.
- `projectVpcId` (string, MaxLength: 36). Identifier of the VPC the service should be in, if any.
- `serviceIntegrations` (array of objects, Immutable, MaxItems: 1). Service integrations to specify when creating a service. Not applied after initial service creation. See below for nested schema.
- `tags` (object, AdditionalProperties: string). Tags are key-value pairs that allow you to categorize services.
- `terminationProtection` (boolean). Prevent service from being deleted. It is recommended to have this enabled for all services.
- `userConfig` (object). KafkaConnect specific user configuration options. See below for nested schema.

Appears on `spec`.

Authentication reference to Aiven token in a secret.

Required

- `key` (string, MinLength: 1).
- `name` (string, MinLength: 1).

Appears on `spec`.

ProjectVPCRef reference to ProjectVPC resource to use its ID as ProjectVPCID automatically.

Required

- `name` (string, MinLength: 1).

Optional

- `namespace` (string, MinLength: 1).

Appears on `spec`.

Service integrations to specify when creating a service. Not applied after initial service creation.

Required

- `integrationType` (string, Enum: `read_replica`).
- `sourceServiceName` (string, MinLength: 1, MaxLength: 64).

Appears on `spec`.

KafkaConnect specific user configuration options.

Optional

- `additional_backup_regions` (array of strings, MaxItems: 1). Additional Cloud Regions for Backup Replication.
- `ip_filter` (array of objects, MaxItems: 1024). Allow incoming connections from CIDR address block, e.g. `10.20.0.0/16`. See below for nested schema.
- `kafka_connect` (object). Kafka Connect configuration values. See below for nested schema.
- `private_access` (object). Allow access to selected service ports from private networks. See below for nested schema.
- `privatelink_access` (object). Allow access to selected service components through Privatelink. See below for nested schema.
- `public_access` (object). Allow access to selected service ports from the public Internet. See below for nested schema.
- `service_log` (boolean). Store logs for the service so that they are available in the HTTP API and console.
- `static_ips` (boolean). Use static public IP addresses.

Appears on `spec.userConfig`.

Allow incoming connections from CIDR address block, e.g. `10.20.0.0/16`.

Required

- `network` (string, MaxLength: 43). CIDR address block.

Optional

- `description` (string, MaxLength: 1024). Description for IP filter list entry.
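As a usage sketch, an `ip_filter` list under `userConfig` might look like the following (addresses and descriptions are illustrative):

```yaml
spec:
  userConfig:
    ip_filter:
      - network: 10.20.0.0/16
        description: office VPN
      - network: 192.168.1.10/32
        description: bastion host
```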
Appears on `spec.userConfig`.

Kafka Connect configuration values.

Optional

- `connector_client_config_override_policy` (string, Enum: `None`, `All`). Defines what client configurations can be overridden by the connector. Default is None.
- `consumer_auto_offset_reset` (string, Enum: `earliest`, `latest`). What to do when there is no initial offset in Kafka or if the current offset does not exist any more on the server. Default is earliest.
- `consumer_fetch_max_bytes` (integer, Minimum: 1048576, Maximum: 104857600). Records are fetched in batches by the consumer, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that the consumer can make progress. As such, this is not an absolute maximum.
- `consumer_isolation_level` (string, Enum: `read_uncommitted`, `read_committed`). Transaction read isolation level. read_uncommitted is the default, but read_committed can be used if consume-exactly-once behavior is desired.
- `consumer_max_partition_fetch_bytes` (integer, Minimum: 1048576, Maximum: 104857600). Records are fetched in batches by the consumer. If the first record batch in the first non-empty partition of the fetch is larger than this limit, the batch will still be returned to ensure that the consumer can make progress.
- `consumer_max_poll_interval_ms` (integer, Minimum: 1, Maximum: 2147483647). The maximum delay in milliseconds between invocations of poll() when using consumer group management (defaults to 300000).
- `consumer_max_poll_records` (integer, Minimum: 1, Maximum: 10000). The maximum number of records returned in a single call to poll() (defaults to 500).
- `offset_flush_interval_ms` (integer, Minimum: 1, Maximum: 100000000). The interval at which to try committing offsets for tasks (defaults to 60000).
- `offset_flush_timeout_ms` (integer, Minimum: 1, Maximum: 2147483647). Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data to be committed in a future attempt (defaults to 5000).
- `producer_batch_size` (integer, Minimum: 0, Maximum: 5242880). This setting gives the upper bound of the batch size to be sent. If there are fewer than this many bytes accumulated for this partition, the producer will linger for the linger.ms time waiting for more records to show up. A batch size of zero will disable batching entirely (defaults to 16384).
- `producer_buffer_memory` (integer, Minimum: 5242880, Maximum: 134217728). The total bytes of memory the producer can use to buffer records waiting to be sent to the broker (defaults to 33554432).
- `producer_compression_type` (string, Enum: `gzip`, `snappy`, `lz4`, `zstd`, `none`). Specify the default compression type for producers. This configuration accepts the standard compression codecs (`gzip`, `snappy`, `lz4`, `zstd`). It additionally accepts `none`, which is the default and equivalent to no compression.
- `producer_linger_ms` (integer, Minimum: 0, Maximum: 5000). This setting gives the upper bound on the delay for batching: once there is batch.size worth of records for a partition it will be sent immediately regardless of this setting; however, if there are fewer than this many bytes accumulated for this partition the producer will linger for the specified time waiting for more records to show up. Defaults to 0.
- `producer_max_request_size` (integer, Minimum: 131072, Maximum: 67108864). This setting will limit the number of record batches the producer will send in a single request to avoid sending huge requests.
- `scheduled_rebalance_max_delay_ms` (integer, Minimum: 0, Maximum: 600000). The maximum delay that is scheduled in order to wait for the return of one or more departed workers before rebalancing and reassigning their connectors and tasks to the group. During this period the connectors and tasks of the departed workers remain unassigned. Defaults to 5 minutes.
- `session_timeout_ms` (integer, Minimum: 1, Maximum: 2147483647). The timeout in milliseconds used to detect failures when using Kafka's group management facilities (defaults to 10000).

Appears on `spec.userConfig`.

Allow access to selected service ports from private networks.

Optional

- `kafka_connect` (boolean). Allow clients to connect to kafka_connect with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.
- `prometheus` (boolean). Allow clients to connect to prometheus with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.

Appears on `spec.userConfig`.

Allow access to selected service components through Privatelink.

Optional

- `jolokia` (boolean). Enable jolokia.
- `kafka_connect` (boolean). Enable kafka_connect.
- `prometheus` (boolean). Enable prometheus.

Appears on `spec.userConfig`.

Allow access to selected service ports from the public Internet.

Optional

- `kafka_connect` (boolean). Allow clients to connect to kafka_connect from the public internet for service nodes that are in a project VPC or another type of private network.
- `prometheus` (boolean). Allow clients to connect to prometheus from the public internet for service nodes that are in a project VPC or another type of private network.

KafkaConnector is the Schema for the kafkaconnectors API.

Required

- `apiVersion` (string). Value `aiven.io/v1alpha1`.
- `kind` (string). Value `KafkaConnector`.
- `metadata` (object). Data that identifies the object, including a `name` string and optional `namespace`.
- `spec` (object). KafkaConnectorSpec defines the desired state of KafkaConnector. See below for nested schema.

Appears on `KafkaConnector`.

KafkaConnectorSpec defines the desired state of KafkaConnector.

Required

- `connectorClass` (string, MaxLength: 1024). The Java class of the connector.
- `project` (string, MaxLength: 63, Format: `^[a-zA-Z0-9_-]*$`). Target project.
- `serviceName` (string, MaxLength: 63). Service name.
- `userConfig` (object, AdditionalProperties: string). The connector-specific configuration. To build config values from a secret, the template function `{{ fromSecret "name" "key" }}` is provided when interpreting the keys.

Optional

- `authSecretRef` (object). Authentication reference to Aiven token in a secret. See below for nested schema.
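As an illustrative, hedged example (the connector class and its configuration keys are placeholders for whatever connector plugin the service provides), the `{{ fromSecret "name" "key" }}` template function from the `userConfig` description above can inject a secret value into the connector configuration:

```yaml
apiVersion: aiven.io/v1alpha1
kind: KafkaConnector
metadata:
  name: my-kafka-connector
spec:
  authSecretRef:
    name: aiven-token
    key: token
  project: my-aiven-project
  serviceName: my-kafka-connect
  connectorClass: com.example.sink.ExampleSinkConnector  # placeholder class
  userConfig:
    topics: my-topic
    connection.url: https://sink.example.com
    # Resolved from the Kubernetes secret when the keys are interpreted:
    connection.password: '{{ fromSecret "sink-credentials" "password" }}'
```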
Appears on `spec`.

Authentication reference to Aiven token in a secret.

Required

- `key` (string, MinLength: 1).
- `name` (string, MinLength: 1).

```yaml
apiVersion: aiven.io/v1alpha1
kind: KafkaSchema
metadata:
  name: my-schema
spec:
  authSecretRef:
    name: aiven-token
    key: token

  project: my-aiven-project
  serviceName: my-kafka
  subjectName: my-subject
  compatibilityLevel: BACKWARD
  schema: |
    {
      "doc": "example_doc",
      "fields": [{
        "default": 5,
        "doc": "field_doc",
        "name": "field_name",
        "namespace": "field_namespace",
        "type": "int"
      }],
      "name": "example_name",
      "namespace": "example_namespace",
      "type": "record"
    }
```

KafkaSchema is the Schema for the kafkaschemas API.

Required

- `apiVersion` (string). Value `aiven.io/v1alpha1`.
- `kind` (string). Value `KafkaSchema`.
- `metadata` (object). Data that identifies the object, including a `name` string and optional `namespace`.
- `spec` (object). KafkaSchemaSpec defines the desired state of KafkaSchema. See below for nested schema.

Appears on `KafkaSchema`.

KafkaSchemaSpec defines the desired state of KafkaSchema.

Required

- `project` (string, MaxLength: 63, Format: `^[a-zA-Z0-9_-]*$`). Project to link the Kafka Schema to.
- `schema` (string). Kafka Schema configuration. Should be a valid Avro Schema in JSON format.
- `serviceName` (string, MaxLength: 63). Service to link the Kafka Schema to.
- `subjectName` (string, MaxLength: 63). Kafka Schema Subject name.

Optional

- `authSecretRef` (object). Authentication reference to Aiven token in a secret. See below for nested schema.
- `compatibilityLevel` (string, Enum: `BACKWARD`, `BACKWARD_TRANSITIVE`, `FORWARD`, `FORWARD_TRANSITIVE`, `FULL`, `FULL_TRANSITIVE`, `NONE`). Kafka Schemas compatibility level.

Appears on `spec`.

Authentication reference to Aiven token in a secret.

Required

- `key` (string, MinLength: 1).
- `name` (string, MinLength: 1).

```yaml
apiVersion: aiven.io/v1alpha1
kind: KafkaTopic
metadata:
  name: kafka-topic
spec:
  authSecretRef:
    name: aiven-token
    key: token

  project: my-aiven-project
  serviceName: my-kafka
  topicName: my-kafka-topic

  replication: 2
  partitions: 1

  config:
    min_cleanable_dirty_ratio: 0.2
```

KafkaTopic is the Schema for the kafkatopics API.
Required

- `apiVersion` (string). Value `aiven.io/v1alpha1`.
- `kind` (string). Value `KafkaTopic`.
- `metadata` (object). Data that identifies the object, including a `name` string and optional `namespace`.
- `spec` (object). KafkaTopicSpec defines the desired state of KafkaTopic. See below for nested schema.

Appears on `KafkaTopic`.

KafkaTopicSpec defines the desired state of KafkaTopic.

Required

- `partitions` (integer, Minimum: 1, Maximum: 1000000). Number of partitions to create in the topic.
- `project` (string, MaxLength: 63, Format: `^[a-zA-Z0-9_-]*$`). Target project.
- `replication` (integer, Minimum: 2). Replication factor for the topic.
- `serviceName` (string, MaxLength: 63). Service name.

Optional

- `authSecretRef` (object). Authentication reference to Aiven token in a secret. See below for nested schema.
- `config` (object). Kafka topic configuration. See below for nested schema.
- `tags` (array of objects). Kafka topic tags. See below for nested schema.
- `termination_protection` (boolean). Kubernetes-side deletion protection, which prevents the Kafka topic from being deleted by Kubernetes. It is recommended to enable this for any production databases containing critical data.
- `topicName` (string, Immutable, MinLength: 1, MaxLength: 249). Topic name. If provided, is used instead of metadata.name. This field supports additional characters, has a longer length, and will replace metadata.name in future releases.

Appears on `spec`.

Authentication reference to Aiven token in a secret.

Required

- `key` (string, MinLength: 1).
- `name` (string, MinLength: 1).

Appears on `spec`.

Kafka topic configuration.

Optional

- `cleanup_policy` (string). cleanup.policy value.
- `compression_type` (string). compression.type value.
- `delete_retention_ms` (integer). delete.retention.ms value.
- `file_delete_delay_ms` (integer). file.delete.delay.ms value.
- `flush_messages` (integer). flush.messages value.
- `flush_ms` (integer). flush.ms value.
- `index_interval_bytes` (integer). index.interval.bytes value.
- `max_compaction_lag_ms` (integer). max.compaction.lag.ms value.
- `max_message_bytes` (integer). max.message.bytes value.
- `message_downconversion_enable` (boolean). message.downconversion.enable value.
- `message_format_version` (string). message.format.version value.
- `message_timestamp_difference_max_ms` (integer). message.timestamp.difference.max.ms value.
- `message_timestamp_type` (string). message.timestamp.type value.
- `min_cleanable_dirty_ratio` (number). min.cleanable.dirty.ratio value.
- `min_compaction_lag_ms` (integer). min.compaction.lag.ms value.
- `min_insync_replicas` (integer). min.insync.replicas value.
- `preallocate` (boolean). preallocate value.
- `retention_bytes` (integer). retention.bytes value.
- `retention_ms` (integer). retention.ms value.
- `segment_bytes` (integer). segment.bytes value.
- `segment_index_bytes` (integer). segment.index.bytes value.
- `segment_jitter_ms` (integer). segment.jitter.ms value.
- `segment_ms` (integer). segment.ms value.
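For instance, a topic with time-based retention could be configured like this on a KafkaTopic resource (values are illustrative, not recommendations):

```yaml
spec:
  config:
    cleanup_policy: delete
    retention_ms: 604800000   # keep records for 7 days
    min_insync_replicas: 2
```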
Appears on `spec`.

Kafka topic tags.

Required

- `key` (string, MinLength: 1, MaxLength: 64, Format: `^[a-zA-Z0-9_-]*$`).

Optional

- `value` (string, MaxLength: 256, Format: `^[a-zA-Z0-9_-]*$`).

```yaml
apiVersion: aiven.io/v1alpha1
kind: MySQL
metadata:
  name: my-mysql
spec:
  authSecretRef:
    name: aiven-token
    key: token

  connInfoSecretTarget:
    name: mysql-secret
    prefix: MY_SECRET_PREFIX_
    annotations:
      foo: bar
    labels:
      baz: egg

  project: my-aiven-project
  cloudName: google-europe-west1
  plan: business-4

  maintenanceWindowDow: sunday
  maintenanceWindowTime: 11:00:00

  userConfig:
    backup_hour: 17
    backup_minute: 11
    ip_filter:
      - network: 0.0.0.0
        description: whatever
      - network: 10.20.0.0/16
```

MySQL is the Schema for the mysqls API.
Required

- `apiVersion` (string). Value `aiven.io/v1alpha1`.
- `kind` (string). Value `MySQL`.
- `metadata` (object). Data that identifies the object, including a `name` string and optional `namespace`.
- `spec` (object). MySQLSpec defines the desired state of MySQL. See below for nested schema.

Appears on `MySQL`.

MySQLSpec defines the desired state of MySQL.

Required

- `plan` (string, MaxLength: 128). Subscription plan.
- `project` (string, Immutable, MaxLength: 63, Format: `^[a-zA-Z0-9_-]*$`). Target project.

Optional

- `authSecretRef` (object). Authentication reference to Aiven token in a secret. See below for nested schema.
- `cloudName` (string, MaxLength: 256). Cloud the service runs in.
- `connInfoSecretTarget` (object). Information regarding secret creation. Exposed keys: `MYSQL_HOST`, `MYSQL_PORT`, `MYSQL_DATABASE`, `MYSQL_USER`, `MYSQL_PASSWORD`, `MYSQL_SSL_MODE`, `MYSQL_URI`, `MYSQL_REPLICA_URI`. See below for nested schema.
- `disk_space` (string, Format: `^[1-9][0-9]*(GiB|G)*`). The disk space of the service; possible values depend on the service type, the cloud provider and the project. Reducing will result in the service re-balancing.
- `maintenanceWindowDow` (string, Enum: `monday`, `tuesday`, `wednesday`, `thursday`, `friday`, `saturday`, `sunday`). Day of week when maintenance operations should be performed. One of: monday, tuesday, wednesday, etc.
- `maintenanceWindowTime` (string, MaxLength: 8). Time of day when maintenance operations should be performed. UTC time in HH:mm:ss format.
- `projectVPCRef` (object). ProjectVPCRef reference to ProjectVPC resource to use its ID as ProjectVPCID automatically. See below for nested schema.
- `projectVpcId` (string, MaxLength: 36). Identifier of the VPC the service should be in, if any.
- `serviceIntegrations` (array of objects, Immutable, MaxItems: 1). Service integrations to specify when creating a service. Not applied after initial service creation. See below for nested schema.
- `tags` (object, AdditionalProperties: string). Tags are key-value pairs that allow you to categorize services.
- `terminationProtection` (boolean). Prevent service from being deleted. It is recommended to have this enabled for all services.
- `userConfig` (object). MySQL specific user configuration options. See below for nested schema.
Appears on `spec`.

Authentication reference to Aiven token in a secret.

Required

- `key` (string, MinLength: 1).
- `name` (string, MinLength: 1).

Appears on `spec`.

Information regarding secret creation. Exposed keys: `MYSQL_HOST`, `MYSQL_PORT`, `MYSQL_DATABASE`, `MYSQL_USER`, `MYSQL_PASSWORD`, `MYSQL_SSL_MODE`, `MYSQL_URI`, `MYSQL_REPLICA_URI`.

Required

- `name` (string). Name of the secret resource to be created. By default, is equal to the resource name.

Optional

- `annotations` (object, AdditionalProperties: string). Annotations added to the secret.
- `labels` (object, AdditionalProperties: string). Labels added to the secret.
- `prefix` (string). Prefix for the secret's keys. Added "as is" without any transformations. By default, is equal to the kind name in uppercase + underscore, e.g. `KAFKA_`, `REDIS_`, etc.

Appears on `spec`.

ProjectVPCRef reference to ProjectVPC resource to use its ID as ProjectVPCID automatically.

Required

- `name` (string, MinLength: 1).

Optional

- `namespace` (string, MinLength: 1).

Appears on `spec`.

Service integrations to specify when creating a service. Not applied after initial service creation.

Required

- `integrationType` (string, Enum: `read_replica`).
- `sourceServiceName` (string, MinLength: 1, MaxLength: 64).
Appears on `spec`.

MySQL specific user configuration options.

Optional

- `additional_backup_regions` (array of strings, MaxItems: 1). Additional Cloud Regions for Backup Replication.
- `admin_password` (string, Immutable, Pattern: `^[a-zA-Z0-9-_]+$`, MinLength: 8, MaxLength: 256). Custom password for admin user. Defaults to random string. This must be set only when a new service is being created.
- `admin_username` (string, Immutable, Pattern: `^[_A-Za-z0-9][-._A-Za-z0-9]{0,63}$`, MaxLength: 64). Custom username for admin user. This must be set only when a new service is being created.
- `backup_hour` (integer, Minimum: 0, Maximum: 23). The hour of day (in UTC) when backup for the service is started. New backup is only started if previous backup has already completed.
- `backup_minute` (integer, Minimum: 0, Maximum: 59). The minute of an hour when backup for the service is started. New backup is only started if previous backup has already completed.
- `binlog_retention_period` (integer, Minimum: 600, Maximum: 86400). The minimum amount of time in seconds to keep binlog entries before deletion. This may be extended for services that require binlog entries for longer than the default, for example when using the MySQL Debezium Kafka connector.
- `ip_filter` (array of objects, MaxItems: 1024). Allow incoming connections from CIDR address block, e.g. `10.20.0.0/16`. See below for nested schema.
- `migration` (object). Migrate data from existing server. See below for nested schema.
- `mysql` (object). mysql.conf configuration values. See below for nested schema.
- `mysql_version` (string, Enum: `8`). MySQL major version.
- `private_access` (object). Allow access to selected service ports from private networks. See below for nested schema.
- `privatelink_access` (object). Allow access to selected service components through Privatelink. See below for nested schema.
- `project_to_fork_from` (string, Immutable, MaxLength: 63). Name of another project to fork a service from. This has effect only when a new service is being created.
- `public_access` (object). Allow access to selected service ports from the public Internet. See below for nested schema.
- `recovery_target_time` (string, Immutable, MaxLength: 32). Recovery target time when forking a service. This has effect only when a new service is being created.
- `service_log` (boolean). Store logs for the service so that they are available in the HTTP API and console.
- `service_to_fork_from` (string, Immutable, MaxLength: 64). Name of another service to fork from. This has effect only when a new service is being created.
- `static_ips` (boolean). Use static public IP addresses.
Appears on `spec.userConfig`.

Allow incoming connections from CIDR address block, e.g. `10.20.0.0/16`.

Required

- `network` (string, MaxLength: 43). CIDR address block.

Optional

- `description` (string, MaxLength: 1024). Description for IP filter list entry.

Appears on `spec.userConfig`.

Migrate data from existing server.

Required

- `host` (string, MaxLength: 255). Hostname or IP address of the server to migrate data from.
- `port` (integer, Minimum: 1, Maximum: 65535). Port number of the server to migrate data from.

Optional

- `dbname` (string, MaxLength: 63). Database name for bootstrapping the initial connection.
- `ignore_dbs` (string, MaxLength: 2048). Comma-separated list of databases, which should be ignored during migration (supported by MySQL and PostgreSQL only at the moment).
- `method` (string, Enum: `dump`, `replication`). The migration method to be used (currently supported only by Redis, Dragonfly, MySQL and PostgreSQL service types).
- `password` (string, MaxLength: 256). Password for authentication with the server to migrate data from.
- `ssl` (boolean). The server to migrate data from is secured with SSL.
- `username` (string, MaxLength: 256). User name for authentication with the server to migrate data from.
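A hedged sketch of a migration block under `userConfig` (hostnames and credentials are illustrative):

```yaml
spec:
  userConfig:
    migration:
      host: mysql.example.com   # source server to migrate from
      port: 3306
      dbname: defaultdb
      username: migrator
      password: change-me
      ssl: true
      method: replication
```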
Appears on `spec.userConfig`.

mysql.conf configuration values.

Optional

- `connect_timeout` (integer, Minimum: 2, Maximum: 3600). The number of seconds that the mysqld server waits for a connect packet before responding with Bad handshake.
- `default_time_zone` (string, MinLength: 2, MaxLength: 100). Default server time zone as an offset from UTC (from -12:00 to +12:00), a time zone name, or `SYSTEM` to use the MySQL server default.
- `group_concat_max_len` (integer, Minimum: 4). The maximum permitted result length in bytes for the GROUP_CONCAT() function.
- `information_schema_stats_expiry` (integer, Minimum: 900, Maximum: 31536000). The time, in seconds, before cached statistics expire.
- `innodb_change_buffer_max_size` (integer, Minimum: 0, Maximum: 50). Maximum size for the InnoDB change buffer, as a percentage of the total size of the buffer pool. Default is 25.
- `innodb_flush_neighbors` (integer, Minimum: 0, Maximum: 2). Specifies whether flushing a page from the InnoDB buffer pool also flushes other dirty pages in the same extent (default is 1): 0 - dirty pages in the same extent are not flushed, 1 - flush contiguous dirty pages in the same extent, 2 - flush dirty pages in the same extent.
- `innodb_ft_min_token_size` (integer, Minimum: 0, Maximum: 16). Minimum length of words that are stored in an InnoDB FULLTEXT index. Changing this parameter will lead to a restart of the MySQL service.
- `innodb_ft_server_stopword_table` (string, Pattern: `^.+/.+$`, MaxLength: 1024). This option is used to specify your own InnoDB FULLTEXT index stopword list for all InnoDB tables.
- `innodb_lock_wait_timeout` (integer, Minimum: 1, Maximum: 3600). The length of time in seconds an InnoDB transaction waits for a row lock before giving up. Default is 120.
- `innodb_log_buffer_size` (integer, Minimum: 1048576, Maximum: 4294967295). The size in bytes of the buffer that InnoDB uses to write to the log files on disk.
- `innodb_online_alter_log_max_size` (integer, Minimum: 65536, Maximum: 1099511627776). The upper limit in bytes on the size of the temporary log files used during online DDL operations for InnoDB tables.
- `innodb_print_all_deadlocks` (boolean). When enabled, information about all deadlocks in InnoDB user transactions is recorded in the error log. Disabled by default.
- `innodb_read_io_threads` (integer, Minimum: 1, Maximum: 64). The number of I/O threads for read operations in InnoDB. Default is 4. Changing this parameter will lead to a restart of the MySQL service.
- `innodb_rollback_on_timeout` (boolean). When enabled, a transaction timeout causes InnoDB to abort and roll back the entire transaction. Changing this parameter will lead to a restart of the MySQL service.
- `innodb_thread_concurrency` (integer, Minimum: 0, Maximum: 1000). Defines the maximum number of threads permitted inside of InnoDB. Default is 0 (infinite concurrency - no limit).
- `innodb_write_io_threads` (integer, Minimum: 1, Maximum: 64). The number of I/O threads for write operations in InnoDB. Default is 4. Changing this parameter will lead to a restart of the MySQL service.
- `interactive_timeout` (integer, Minimum: 30, Maximum: 604800). The number of seconds the server waits for activity on an interactive connection before closing it.
- `internal_tmp_mem_storage_engine` (string, Enum: `TempTable`, `MEMORY`). The storage engine for in-memory internal temporary tables.
- `long_query_time` (number, Minimum: 0, Maximum: 3600). The slow query log records SQL statements that take more than long_query_time seconds to execute. Default is 10s.
- `max_allowed_packet` (integer, Minimum: 102400, Maximum: 1073741824). Size of the largest message in bytes that can be received by the server. Default is 67108864 (64M).
- `max_heap_table_size` (integer, Minimum: 1048576, Maximum: 1073741824). Limits the size of internal in-memory tables. Also set tmp_table_size. Default is 16777216 (16M).
- `net_buffer_length` (integer, Minimum: 1024, Maximum: 1048576). Start sizes of connection buffer and result buffer. Default is 16384 (16K). Changing this parameter will lead to a restart of the MySQL service.
- `net_read_timeout` (integer, Minimum: 1, Maximum: 3600). The number of seconds to wait for more data from a connection before aborting the read.
- `net_write_timeout` (integer, Minimum: 1, Maximum: 3600). The number of seconds to wait for a block to be written to a connection before aborting the write.
- `slow_query_log` (boolean). Slow query log enables capturing of slow queries. Setting slow_query_log to false also truncates the mysql.slow_log table. Default is off.
- `sort_buffer_size` (integer, Minimum: 32768, Maximum: 1073741824). Sort buffer size in bytes for ORDER BY optimization. Default is 262144 (256K).
- `sql_mode` (string, Pattern: `^[A-Z_]*(,[A-Z_]+)*$`, MaxLength: 1024). Global SQL mode. Set to empty to use MySQL server defaults. When creating a new service and not setting this field, the Aiven default SQL mode (strict, SQL standard compliant) will be assigned.
- `sql_require_primary_key` (boolean). Require primary key to be defined for new tables or old tables modified with ALTER TABLE and fail if missing. It is recommended to always have primary keys because various functionality may break if any large table is missing them.
- `tmp_table_size` (integer, Minimum: 1048576, Maximum: 1073741824). Limits the size of internal in-memory tables. Also set max_heap_table_size. Default is 16777216 (16M).
- `wait_timeout` (integer, Minimum: 1, Maximum: 2147483). The number of seconds the server waits for activity on a noninteractive connection before closing it.
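For example, a few of these tunables on a MySQL resource (values are illustrative, not recommendations):

```yaml
spec:
  userConfig:
    mysql:
      slow_query_log: true
      long_query_time: 2          # log statements slower than 2 seconds
      sql_require_primary_key: true
      sql_mode: ANSI,TRADITIONAL  # must match ^[A-Z_]*(,[A-Z_]+)*$
```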
Appears on `spec.userConfig`.

Allow access to selected service ports from private networks.

Optional

- `mysql` (boolean). Allow clients to connect to mysql with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.
- `mysqlx` (boolean). Allow clients to connect to mysqlx with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.
- `prometheus` (boolean). Allow clients to connect to prometheus with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.

Appears on `spec.userConfig`.

Allow access to selected service components through Privatelink.

Optional

- `mysql` (boolean). Enable mysql.
- `mysqlx` (boolean). Enable mysqlx.
- `prometheus` (boolean). Enable prometheus.

Appears on `spec.userConfig`.

Allow access to selected service ports from the public Internet.

Optional

- `mysql` (boolean). Allow clients to connect to mysql from the public internet for service nodes that are in a project VPC or another type of private network.
- `mysqlx` (boolean). Allow clients to connect to mysqlx from the public internet for service nodes that are in a project VPC or another type of private network.
- `prometheus` (boolean). Allow clients to connect to prometheus from the public internet for service nodes that are in a project VPC or another type of private network.

```yaml
apiVersion: aiven.io/v1alpha1
kind: OpenSearch
metadata:
  name: my-os
spec:
  authSecretRef:
    name: aiven-token
    key: token

  connInfoSecretTarget:
    name: os-secret
    prefix: MY_SECRET_PREFIX_
    annotations:
      foo: bar
    labels:
      baz: egg

  project: my-aiven-project
  cloudName: google-europe-west1
  plan: startup-4
  disk_space: 80GiB

  maintenanceWindowDow: friday
  maintenanceWindowTime: 23:00:00
```

OpenSearch is the Schema for the opensearches API.
Required

- `apiVersion` (string). Value `aiven.io/v1alpha1`.
- `kind` (string). Value `OpenSearch`.
- `metadata` (object). Data that identifies the object, including a `name` string and optional `namespace`.
- `spec` (object). OpenSearchSpec defines the desired state of OpenSearch. See below for nested schema.

Appears on `OpenSearch`.

OpenSearchSpec defines the desired state of OpenSearch.

Required

- `plan` (string, MaxLength: 128). Subscription plan.
- `project` (string, Immutable, MaxLength: 63, Format: `^[a-zA-Z0-9_-]*$`). Target project.

Optional

- `authSecretRef` (object). Authentication reference to Aiven token in a secret. See below for nested schema.
- `cloudName` (string, MaxLength: 256). Cloud the service runs in.
- `connInfoSecretTarget` (object). Information regarding secret creation. Exposed keys: `OPENSEARCH_HOST`, `OPENSEARCH_PORT`, `OPENSEARCH_USER`, `OPENSEARCH_PASSWORD`. See below for nested schema.
- `disk_space` (string, Format: `^[1-9][0-9]*(GiB|G)*`). The disk space of the service; possible values depend on the service type, the cloud provider and the project. Reducing will result in the service re-balancing.
- `maintenanceWindowDow` (string, Enum: `monday`, `tuesday`, `wednesday`, `thursday`, `friday`, `saturday`, `sunday`). Day of week when maintenance operations should be performed. One of: monday, tuesday, wednesday, etc.
- `maintenanceWindowTime` (string, MaxLength: 8). Time of day when maintenance operations should be performed. UTC time in HH:mm:ss format.
- `projectVPCRef` (object). ProjectVPCRef reference to ProjectVPC resource to use its ID as ProjectVPCID automatically. See below for nested schema.
- `projectVpcId` (string, MaxLength: 36). Identifier of the VPC the service should be in, if any.
- `serviceIntegrations` (array of objects, Immutable, MaxItems: 1). Service integrations to specify when creating a service. Not applied after initial service creation. See below for nested schema.
- `tags` (object, AdditionalProperties: string). Tags are key-value pairs that allow you to categorize services.
- `terminationProtection` (boolean). Prevent service from being deleted. It is recommended to have this enabled for all services.
- `userConfig` (object). OpenSearch specific user configuration options. See below for nested schema.
Appears on `spec`.

Authentication reference to Aiven token in a secret.

Required

- `key` (string, MinLength: 1).
- `name` (string, MinLength: 1).

Appears on `spec`.

Information regarding secret creation. Exposed keys: `OPENSEARCH_HOST`, `OPENSEARCH_PORT`, `OPENSEARCH_USER`, `OPENSEARCH_PASSWORD`.

Required

- `name` (string). Name of the secret resource to be created. By default, is equal to the resource name.

Optional

- `annotations` (object, AdditionalProperties: string). Annotations added to the secret.
- `labels` (object, AdditionalProperties: string). Labels added to the secret.
- `prefix` (string). Prefix for the secret's keys. Added "as is" without any transformations. By default, is equal to the kind name in uppercase + underscore, e.g. `KAFKA_`, `REDIS_`, etc.

Appears on `spec`.

ProjectVPCRef reference to ProjectVPC resource to use its ID as ProjectVPCID automatically.

Required

- `name` (string, MinLength: 1).

Optional

- `namespace` (string, MinLength: 1).

Appears on `spec`.

Service integrations to specify when creating a service. Not applied after initial service creation.

Required

- `integrationType` (string, Enum: `read_replica`).
- `sourceServiceName` (string, MinLength: 1, MaxLength: 64).
Appears on `spec`.

OpenSearch specific user configuration options.

Optional

- `additional_backup_regions` (array of strings, MaxItems: 1). Additional Cloud Regions for Backup Replication.
- `custom_domain` (string, MaxLength: 255). Serve the web frontend using a custom CNAME pointing to the Aiven DNS name.
- `disable_replication_factor_adjustment` (boolean). DEPRECATED: Disable automatic replication factor adjustment for multi-node services. By default, Aiven ensures all indexes are replicated at least to two nodes. Note: Due to potential data loss in case of losing a service node, this setting can no longer be activated.
- `index_patterns` (array of objects, MaxItems: 512). Index patterns. See below for nested schema.
- `index_template` (object). Template settings for all new indexes. See below for nested schema.
- `ip_filter` (array of objects, MaxItems: 1024). Allow incoming connections from CIDR address block, e.g. `10.20.0.0/16`. See below for nested schema.
- `keep_index_refresh_interval` (boolean). Aiven automation resets index.refresh_interval to the default value for every index to be sure that indices are always visible to search. If this doesn't fit your case, you can disable it by setting this flag to true.
- `max_index_count` (integer, Minimum: 0). DEPRECATED: use index_patterns instead.
- `openid` (object). OpenSearch OpenID Connect Configuration. See below for nested schema.
- `opensearch` (object). OpenSearch settings. See below for nested schema.
- `opensearch_dashboards` (object). OpenSearch Dashboards settings. See below for nested schema.
- `opensearch_version` (string, Enum: `1`, `2`). OpenSearch major version.
- `private_access` (object). Allow access to selected service ports from private networks. See below for nested schema.
- `privatelink_access` (object). Allow access to selected service components through Privatelink. See below for nested schema.
- `project_to_fork_from` (string, Immutable, MaxLength: 63). Name of another project to fork a service from. This has effect only when a new service is being created.
- `public_access` (object). Allow access to selected service ports from the public Internet. See below for nested schema.
- `recovery_basebackup_name` (string, Pattern: `^[a-zA-Z0-9-_:.]+$`, MaxLength: 128). Name of the basebackup to restore in forked service.
- `saml` (object). OpenSearch SAML configuration. See below for nested schema.
- `service_log` (boolean). Store logs for the service so that they are available in the HTTP API and console.
- `service_to_fork_from` (string, Immutable, MaxLength: 64). Name of another service to fork from. This has effect only when a new service is being created.
- `static_ips` (boolean). Use static public IP addresses.
Appears on `spec.userConfig`.

Index patterns.

Required

- `max_index_count` (integer, Minimum: 0). Maximum number of indexes to keep.
- `pattern` (string, Pattern: `^[A-Za-z0-9-_.*?]+$`, MaxLength: 1024). fnmatch pattern.

Optional

- `sorting_algorithm` (string, Enum: `alphabetical`, `creation_date`). Deletion sorting algorithm.
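For example, retention for time-series indexes could be capped like this (pattern and count are illustrative):

```yaml
spec:
  userConfig:
    index_patterns:
      - pattern: logs-*            # fnmatch pattern
        max_index_count: 30        # keep at most 30 matching indexes
        sorting_algorithm: creation_date
```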
Appears on `spec.userConfig`.

Template settings for all new indexes.

Optional

- `mapping_nested_objects_limit` (integer, Minimum: 0, Maximum: 100000). The maximum number of nested JSON objects that a single document can contain across all nested types. This limit helps to prevent out of memory errors when a document contains too many nested objects. Default is 10000.
- `number_of_replicas` (integer, Minimum: 0, Maximum: 29). The number of replicas each primary shard has.
- `number_of_shards` (integer, Minimum: 1, Maximum: 1024). The number of primary shards that an index should have.
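A short illustrative sketch:

```yaml
spec:
  userConfig:
    index_template:
      number_of_shards: 3
      number_of_replicas: 1
```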
Appears on `spec.userConfig`.

Allow incoming connections from CIDR address block, e.g. `10.20.0.0/16`.

Required

- `network` (string, MaxLength: 43). CIDR address block.

Optional

- `description` (string, MaxLength: 1024). Description for IP filter list entry.

Appears on `spec.userConfig`.

OpenSearch OpenID Connect Configuration.

Required

- `client_id` (string, MinLength: 1, MaxLength: 1024). The ID of the OpenID Connect client configured in your IdP. Required.
- `client_secret` (string, MinLength: 1, MaxLength: 1024). The client secret of the OpenID Connect client configured in your IdP. Required.
- `connect_url` (string, MaxLength: 2048). The URL of your IdP where the Security plugin can find the OpenID Connect metadata/configuration settings.

Optional

- `enabled` (boolean). Enables or disables OpenID Connect authentication for OpenSearch. When enabled, users can authenticate using OpenID Connect with an Identity Provider.
- `header` (string, MinLength: 1, MaxLength: 1024). HTTP header name of the JWT token. Optional. Default is Authorization.
- `jwt_header` (string, MinLength: 1, MaxLength: 1024). The HTTP header that stores the token. Typically the Authorization header with the Bearer schema: `Authorization: Bearer <token>`. Optional. Default is Authorization.
- `jwt_url_parameter` (string, MinLength: 1, MaxLength: 1024). If the token is not transmitted in the HTTP header, but as a URL parameter, define the name of the parameter here. Optional.
- `refresh_rate_limit_count` (integer, Minimum: 10). The maximum number of unknown key IDs in the time frame. Default is 10. Optional.
- `refresh_rate_limit_time_window_ms` (integer, Minimum: 10000). The time frame to use when checking the maximum number of unknown key IDs, in milliseconds. Optional. Default is 10000 (10 seconds).
- `roles_key` (string, MinLength: 1, MaxLength: 1024). The key in the JSON payload that stores the user's roles. The value of this key must be a comma-separated list of roles. Required only if you want to use roles in the JWT.
- `scope` (string, MinLength: 1, MaxLength: 1024). The scope of the identity token issued by the IdP. Optional. Default is openid profile email address phone.
- `subject_key` (string, MinLength: 1, MaxLength: 1024). The key in the JSON payload that stores the user's name. If not defined, the subject registered claim is used. Most IdP providers use the preferred_username claim. Optional.
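A hedged OpenID Connect sketch (IdP URL and client credentials are illustrative):

```yaml
spec:
  userConfig:
    openid:
      enabled: true
      connect_url: https://idp.example.com/.well-known/openid-configuration
      client_id: opensearch
      client_secret: change-me
      scope: openid profile email
```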
Appears on `spec.userConfig`.

OpenSearch settings.

Optional

- `action_auto_create_index_enabled` (boolean). Explicitly allow or block automatic creation of indices. Defaults to true.
- `action_destructive_requires_name` (boolean). Require explicit index names when deleting.
- `auth_failure_listeners` (object). Opensearch Security Plugin Settings. See below for nested schema.
- `cluster_max_shards_per_node` (integer, Minimum: 100, Maximum: 10000). Controls the number of shards allowed in the cluster per data node.
- `cluster_routing_allocation_node_concurrent_recoveries` (integer, Minimum: 2, Maximum: 16). How many concurrent incoming/outgoing shard recoveries (normally replicas) are allowed to happen on a node. Defaults to 2.
- `email_sender_name` (string, Pattern: `^[a-zA-Z0-9-_]+$`, MaxLength: 40). Sender name placeholder to be used in Opensearch Dashboards and Opensearch keystore.
- `email_sender_password` (string, Pattern: `^[^\x00-\x1F]+$`, MaxLength: 1024). Sender password for Opensearch alerts to authenticate with SMTP server.
- `email_sender_username` (string, Pattern: `^[^\x00-\x1F]+$`, MaxLength: 320). Sender username for Opensearch alerts.
- `http_max_content_length` (integer, Minimum: 1, Maximum: 2147483647). Maximum content length for HTTP requests to the OpenSearch HTTP API, in bytes.
- `http_max_header_size` (integer, Minimum: 1024, Maximum: 262144). The max size of allowed headers, in bytes.
- `http_max_initial_line_length` (integer, Minimum: 1024, Maximum: 65536). The max length of an HTTP URL, in bytes.
- `indices_fielddata_cache_size` (integer, Minimum: 3, Maximum: 100). Relative amount. Maximum amount of heap memory used for field data cache. This is an expert setting; decreasing the value too much will increase overhead of loading field data; too much memory used for field data cache will decrease amount of heap available for other operations.
- `indices_memory_index_buffer_size` (integer, Minimum: 3, Maximum: 40). Percentage value. Default is 10%. Total amount of heap used for indexing buffer, before writing segments to disk. This is an expert setting. Too low a value will slow down indexing; too high a value will increase indexing performance but can cause issues for query performance.
- `indices_memory_max_index_buffer_size` (integer, Minimum: 3, Maximum: 2048). Absolute value. Default is unbound. Doesn't work without indices.memory.index_buffer_size. Maximum amount of heap used for query cache, an absolute indices.memory.index_buffer_size maximum hard limit.
- `indices_memory_min_index_buffer_size` (integer, Minimum: 3, Maximum: 2048). Absolute value. Default is 48mb. Doesn't work without indices.memory.index_buffer_size. Minimum amount of heap used for query cache, an absolute indices.memory.index_buffer_size minimal hard limit.
- `indices_queries_cache_size` (integer, Minimum: 3, Maximum: 40). Percentage value. Default is 10%. Maximum amount of heap used for query cache. This is an expert setting. Too low a value will decrease query performance and increase performance for other operations; too high a value will cause issues with other OpenSearch functionality.
- `indices_query_bool_max_clause_count` (integer, Minimum: 64, Maximum: 4096). Maximum number of clauses Lucene BooleanQuery can have. The default value (1024) is relatively high, and increasing it may cause performance issues. Investigate other approaches first before increasing this value.
- `indices_recovery_max_bytes_per_sec` (integer, Minimum: 40, Maximum: 400). Limits total inbound and outbound recovery traffic for each node. Applies to both peer recoveries as well as snapshot recoveries (i.e., restores from a snapshot). Defaults to 40mb.
- `indices_recovery_max_concurrent_file_chunks` (integer, Minimum: 2, Maximum: 5). Number of file chunks sent in parallel for each recovery. Defaults to 2.
- `ism_enabled` (boolean). Specifies whether ISM is enabled or not.
- `ism_history_enabled` (boolean). Specifies whether audit history is enabled or not. The logs from ISM are automatically indexed to a logs document.
- `ism_history_max_age` (integer, Minimum: 1, Maximum: 2147483647). The maximum age before rolling over the audit history index, in hours.
- `ism_history_max_docs` (integer, Minimum: 1). The maximum number of documents before rolling over the audit history index.
- `ism_history_rollover_check_period` (integer, Minimum: 1, Maximum: 2147483647). The time between rollover checks for the audit history index, in hours.
- `ism_history_rollover_retention_period` (integer, Minimum: 1, Maximum: 2147483647). How long audit history indices are kept, in days.
- `override_main_response_version` (boolean). Compatibility mode sets OpenSearch to report its version as 7.10 so clients continue to work. Default is false.
- `reindex_remote_whitelist` (array of strings, MaxItems: 32). Whitelisted addresses for reindexing. Changing this value will cause all OpenSearch instances to restart.
- `script_max_compilations_rate` (string, MaxLength: 1024). Script compilation circuit breaker limits the number of inline script compilations within a period of time. Default is use-context.
- `search_max_buckets` (integer, Minimum: 1, Maximum: 1000000). Maximum number of aggregation buckets allowed in a single response. OpenSearch default value is used when this is not defined.
- `thread_pool_analyze_queue_size` (integer, Minimum: 10, Maximum: 2000). Size for the thread pool queue. See documentation for exact details.
- `thread_pool_analyze_size` (integer, Minimum: 1, Maximum: 128). Size for the thread pool. See documentation for exact details. Note that this may have a maximum value depending on CPU count; the value is automatically lowered if set higher than the maximum.
- `thread_pool_force_merge_size` (integer, Minimum: 1, Maximum: 128). Size for the thread pool. See documentation for exact details. Note that this may have a maximum value depending on CPU count; the value is automatically lowered if set higher than the maximum.
- `thread_pool_get_queue_size` (integer, Minimum: 10, Maximum: 2000). Size for the thread pool queue. See documentation for exact details.
- `thread_pool_get_size` (integer, Minimum: 1, Maximum: 128). Size for the thread pool. See documentation for exact details. Note that this may have a maximum value depending on CPU count; the value is automatically lowered if set higher than the maximum.
- `thread_pool_search_queue_size` (integer, Minimum: 10, Maximum: 2000). Size for the thread pool queue. See documentation for exact details.
- `thread_pool_search_size` (integer, Minimum: 1, Maximum: 128). Size for the thread pool. See documentation for exact details. Note that this may have a maximum value depending on CPU count; the value is automatically lowered if set higher than the maximum.
- `thread_pool_search_throttled_queue_size` (integer, Minimum: 10, Maximum: 2000). Size for the thread pool queue. See documentation for exact details.
- `thread_pool_search_throttled_size` (integer, Minimum: 1, Maximum: 128). Size for the thread pool. See documentation for exact details. Note that this may have a maximum value depending on CPU count; the value is automatically lowered if set higher than the maximum.
- `thread_pool_write_queue_size` (integer, Minimum: 10, Maximum: 2000). Size for the thread pool queue. See documentation for exact details.
- `thread_pool_write_size` (integer, Minimum: 1, Maximum: 128). Size for the thread pool. See documentation for exact details. Note that this may have a maximum value depending on CPU count; the value is automatically lowered if set higher than the maximum.
Appears on `spec.userConfig.opensearch`.

Opensearch Security Plugin Settings.

Optional

- `internal_authentication_backend_limiting` (object). See below for nested schema.
- `ip_rate_limiting` (object). IP address rate limiting settings. See below for nested schema.

Appears on `spec.userConfig.opensearch.auth_failure_listeners`.

Optional

- `allowed_tries` (integer, Minimum: 0, Maximum: 2147483647). The number of login attempts allowed before login is blocked.
- `authentication_backend` (string, Enum: `internal`, MaxLength: 1024). internal_authentication_backend_limiting.authentication_backend.
- `block_expiry_seconds` (integer, Minimum: 0, Maximum: 2147483647). The duration of time that login remains blocked after a failed login.
- `max_blocked_clients` (integer, Minimum: 0, Maximum: 2147483647). internal_authentication_backend_limiting.max_blocked_clients.
- `max_tracked_clients` (integer, Minimum: 0, Maximum: 2147483647). The maximum number of tracked IP addresses that have failed login.
- `time_window_seconds` (integer, Minimum: 0, Maximum: 2147483647). The window of time in which the value for `allowed_tries` is enforced.
- `type` (string, Enum: `username`, MaxLength: 1024). internal_authentication_backend_limiting.type.

Appears on `spec.userConfig.opensearch.auth_failure_listeners`.

IP address rate limiting settings.

Optional

- `allowed_tries` (integer, Minimum: 1, Maximum: 2147483647). The number of login attempts allowed before login is blocked.
- `block_expiry_seconds` (integer, Minimum: 1, Maximum: 36000). The duration of time that login remains blocked after a failed login.
- `max_blocked_clients` (integer, Minimum: 0, Maximum: 2147483647). The maximum number of blocked IP addresses.
- `max_tracked_clients` (integer, Minimum: 0, Maximum: 2147483647). The maximum number of tracked IP addresses that have failed login.
- `time_window_seconds` (integer, Minimum: 1, Maximum: 36000). The window of time in which the value for `allowed_tries` is enforced.
- `type` (string, Enum: `ip`, MaxLength: 1024). The type of rate limiting.
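For example, IP-based login throttling could be configured as follows (thresholds are illustrative):

```yaml
spec:
  userConfig:
    opensearch:
      auth_failure_listeners:
        ip_rate_limiting:
          type: ip
          allowed_tries: 5            # block after 5 failed attempts
          time_window_seconds: 3600   # counted within a 1-hour window
          block_expiry_seconds: 600   # unblock after 10 minutes
          max_blocked_clients: 500
          max_tracked_clients: 500
```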
Appears on `spec.userConfig`.

OpenSearch Dashboards settings.

Optional

- `enabled` (boolean). Enable or disable OpenSearch Dashboards.
- `max_old_space_size` (integer, Minimum: 64, Maximum: 2048). Limits the maximum amount of memory (in MiB) the OpenSearch Dashboards process can use. This sets the max_old_space_size option of the nodejs running the OpenSearch Dashboards. Note: the memory reserved by OpenSearch Dashboards is not available for OpenSearch.
- `opensearch_request_timeout` (integer, Minimum: 5000, Maximum: 120000). Timeout in milliseconds for requests made by OpenSearch Dashboards towards OpenSearch.
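An illustrative sketch:

```yaml
spec:
  userConfig:
    opensearch_dashboards:
      enabled: true
      max_old_space_size: 256         # MiB reserved for the Dashboards process
      opensearch_request_timeout: 30000
```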
Appears on `spec.userConfig`.

Allow access to selected service ports from private networks.

Optional

- `opensearch` (boolean). Allow clients to connect to opensearch with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.
- `opensearch_dashboards` (boolean). Allow clients to connect to opensearch_dashboards with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.
- `prometheus` (boolean). Allow clients to connect to prometheus with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.

Appears on `spec.userConfig`.

Allow access to selected service components through Privatelink.

Optional

- `opensearch` (boolean). Enable opensearch.
- `opensearch_dashboards` (boolean). Enable opensearch_dashboards.
- `prometheus` (boolean). Enable prometheus.

Appears on `spec.userConfig`.

Allow access to selected service ports from the public Internet.

Optional

- `opensearch` (boolean). Allow clients to connect to opensearch from the public internet for service nodes that are in a project VPC or another type of private network.
- `opensearch_dashboards` (boolean). Allow clients to connect to opensearch_dashboards from the public internet for service nodes that are in a project VPC or another type of private network.
- `prometheus` (boolean). Allow clients to connect to prometheus from the public internet for service nodes that are in a project VPC or another type of private network.
Appears on `spec.userConfig`.

OpenSearch SAML configuration.

Required

- `enabled` (boolean). Enables or disables SAML-based authentication for OpenSearch. When enabled, users can authenticate using SAML with an Identity Provider.
- `idp_entity_id` (string, MinLength: 1, MaxLength: 1024). The unique identifier for the Identity Provider (IdP) entity that is used for SAML authentication. This value is typically provided by the IdP.
- `idp_metadata_url` (string, MinLength: 1, MaxLength: 2048). The URL of the SAML metadata for the Identity Provider (IdP). This is used to configure SAML-based authentication with the IdP.
- `sp_entity_id` (string, MinLength: 1, MaxLength: 1024). The unique identifier for the Service Provider (SP) entity that is used for SAML authentication. This value is typically provided by the SP.

Optional

- `idp_pemtrustedcas_content` (string, MaxLength: 16384). This parameter specifies the PEM-encoded root certificate authority (CA) content for the SAML identity provider (IdP) server verification. The root CA content is used to verify the SSL/TLS certificate presented by the server.
- `roles_key` (string, MinLength: 1, MaxLength: 256). Optional. Specifies the attribute in the SAML response where role information is stored, if available. Role attributes are not required for SAML authentication, but can be included in SAML assertions by most Identity Providers (IdPs) to determine user access levels or permissions.
- `subject_key` (string, MinLength: 1, MaxLength: 256). Optional. Specifies the attribute in the SAML response where the subject identifier is stored. If not configured, the NameID attribute is used by default.

```yaml
apiVersion: aiven.io/v1alpha1
kind: PostgreSQL
metadata:
  name: my-postgresql
spec:
  authSecretRef:
    name: aiven-token
    key: token

  connInfoSecretTarget:
    name: postgresql-secret
    prefix: MY_SECRET_PREFIX_
    annotations:
      foo: bar
    labels:
      baz: egg

  project: aiven-project-name
  cloudName: google-europe-west1
  plan: startup-4

  maintenanceWindowDow: sunday
  maintenanceWindowTime: 11:00:00

  userConfig:
    pg_version: "15"
```

PostgreSQL is the Schema for the postgresql API.
Required

- `apiVersion` (string). Value `aiven.io/v1alpha1`.
- `kind` (string). Value `PostgreSQL`.
- `metadata` (object). Data that identifies the object, including a `name` string and optional `namespace`.
- `spec` (object). PostgreSQLSpec defines the desired state of the postgres instance. See below for nested schema.

Appears on `PostgreSQL`.

PostgreSQLSpec defines the desired state of the postgres instance.

Required

- `plan` (string, MaxLength: 128). Subscription plan.
- `project` (string, Immutable, MaxLength: 63, Format: `^[a-zA-Z0-9_-]*$`). Target project.

Optional

- `authSecretRef` (object). Authentication reference to Aiven token in a secret. See below for nested schema.
- `cloudName` (string, MaxLength: 256). Cloud the service runs in.
- `connInfoSecretTarget` (object). Information regarding secret creation. Exposed keys: `POSTGRESQL_HOST`, `POSTGRESQL_PORT`, `POSTGRESQL_DATABASE`, `POSTGRESQL_USER`, `POSTGRESQL_PASSWORD`, `POSTGRESQL_SSLMODE`, `POSTGRESQL_DATABASE_URI`. See below for nested schema.
- `disk_space` (string, Format: `^[1-9][0-9]*(GiB|G)*`). The disk space of the service; possible values depend on the service type, the cloud provider and the project. Reducing will result in the service re-balancing.
- `maintenanceWindowDow` (string, Enum: `monday`, `tuesday`, `wednesday`, `thursday`, `friday`, `saturday`, `sunday`). Day of week when maintenance operations should be performed. One of: monday, tuesday, wednesday, etc.
- `maintenanceWindowTime` (string, MaxLength: 8). Time of day when maintenance operations should be performed. UTC time in HH:mm:ss format.
- `projectVPCRef` (object). ProjectVPCRef reference to ProjectVPC resource to use its ID as ProjectVPCID automatically. See below for nested schema.
- `projectVpcId` (string, MaxLength: 36). Identifier of the VPC the service should be in, if any.
- `serviceIntegrations` (array of objects, Immutable, MaxItems: 1). Service integrations to specify when creating a service. Not applied after initial service creation. See below for nested schema.
- `tags` (object, AdditionalProperties: string). Tags are key-value pairs that allow you to categorize services.
- `terminationProtection` (boolean). Prevent service from being deleted. It is recommended to have this enabled for all services.
- `userConfig` (object). PostgreSQL specific user configuration options. See below for nested schema.
Authentication reference to Aiven token in a secret.

Required

key (string, MinLength: 1).
name (string, MinLength: 1).

Appears on spec.
Information regarding secret creation. Exposed keys: POSTGRESQL_HOST, POSTGRESQL_PORT, POSTGRESQL_DATABASE, POSTGRESQL_USER, POSTGRESQL_PASSWORD, POSTGRESQL_SSLMODE, POSTGRESQL_DATABASE_URI.

Required

name (string). Name of the secret resource to be created. By default, is equal to the resource name.

Optional

annotations (object, AdditionalProperties: string). Annotations added to the secret.
labels (object, AdditionalProperties: string). Labels added to the secret.
prefix (string). Prefix for the secret's keys. Added "as is" without any transformations. By default, is equal to the kind name in uppercase + underscore, e.g. KAFKA_, REDIS_, etc.

Appears on spec.
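To illustrate how the prefix interacts with the exposed keys above, a minimal sketch (the secret name and MY_PREFIX_ value are placeholders; with that prefix, the created Secret would hold keys such as MY_PREFIX_POSTGRESQL_HOST):

```yaml
# Sketch only: connInfoSecretTarget on a PostgreSQL resource.
connInfoSecretTarget:
  name: postgresql-secret   # placeholder name of the Secret to create
  prefix: MY_PREFIX_        # added as-is, e.g. MY_PREFIX_POSTGRESQL_HOST
```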
ProjectVPCRef reference to ProjectVPC resource to use its ID as ProjectVPCID automatically.

Required

name (string, MinLength: 1).

Optional

namespace (string, MinLength: 1).

Appears on spec.
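For illustration, a hedged sketch of pointing a service at a ProjectVPC resource (the names are placeholders):

```yaml
spec:
  projectVPCRef:
    name: my-project-vpc     # ProjectVPC resource whose ID becomes projectVpcId
    namespace: my-namespace  # optional
```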
Service integrations to specify when creating a service. Not applied after initial service creation.

Required

integrationType (string, Enum: read_replica).
sourceServiceName (string, MinLength: 1, MaxLength: 64).

Appears on spec.
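As a sketch, a read_replica integration declared at creation time (the source service name is a placeholder):

```yaml
spec:
  serviceIntegrations:
    - integrationType: read_replica
      sourceServiceName: my-primary-postgresql
```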
PostgreSQL specific user configuration options.

Optional

additional_backup_regions (array of strings, MaxItems: 1). Additional Cloud Regions for Backup Replication.
admin_password (string, Immutable, Pattern: ^[a-zA-Z0-9-_]+$, MinLength: 8, MaxLength: 256). Custom password for admin user. Defaults to random string. This must be set only when a new service is being created.
admin_username (string, Immutable, Pattern: ^[_A-Za-z0-9][-._A-Za-z0-9]{0,63}$, MaxLength: 64). Custom username for admin user. This must be set only when a new service is being created.
backup_hour (integer, Minimum: 0, Maximum: 23). The hour of day (in UTC) when backup for the service is started. New backup is only started if previous backup has already completed.
backup_minute (integer, Minimum: 0, Maximum: 59). The minute of an hour when backup for the service is started. New backup is only started if previous backup has already completed.
enable_ipv6 (boolean). Register AAAA DNS records for the service, and allow IPv6 packets to service ports.
ip_filter (array of objects, MaxItems: 1024). Allow incoming connections from CIDR address block, e.g. 10.20.0.0/16. See below for nested schema.
migration (object). Migrate data from existing server. See below for nested schema.
pg (object). postgresql.conf configuration values. See below for nested schema.
pg_qualstats (object). System-wide settings for the pg_qualstats extension. See below for nested schema.
pg_read_replica (boolean). Should the service which is being forked be a read replica (deprecated, use read_replica service integration instead).
pg_service_to_fork_from (string, Immutable, MaxLength: 64). Name of the PG Service from which to fork (deprecated, use service_to_fork_from). This has effect only when a new service is being created.
pg_stat_monitor_enable (boolean). Enable the pg_stat_monitor extension. Enabling this extension will cause the cluster to be restarted. When this extension is enabled, pg_stat_statements results for utility commands are unreliable.
pg_version (string, Enum: 11, 12, 13, 14, 15). PostgreSQL major version.
pgbouncer (object). PGBouncer connection pooling settings. See below for nested schema.
pglookout (object). System-wide settings for pglookout. See below for nested schema.
private_access (object). Allow access to selected service ports from private networks. See below for nested schema.
privatelink_access (object). Allow access to selected service components through Privatelink. See below for nested schema.
project_to_fork_from (string, Immutable, MaxLength: 63). Name of another project to fork a service from. This has effect only when a new service is being created.
public_access (object). Allow access to selected service ports from the public Internet. See below for nested schema.
recovery_target_time (string, Immutable, MaxLength: 32). Recovery target time when forking a service. This has effect only when a new service is being created.
service_log (boolean). Store logs for the service so that they are available in the HTTP API and console.
service_to_fork_from (string, Immutable, MaxLength: 64). Name of another service to fork from. This has effect only when a new service is being created.
shared_buffers_percentage (number, Minimum: 20, Maximum: 60). Percentage of total RAM that the database server uses for shared memory buffers. Valid range is 20-60 (float), which corresponds to 20% - 60%. This setting adjusts the shared_buffers configuration value.
static_ips (boolean). Use static public IP addresses.
synchronous_replication (string, Enum: quorum, off). Synchronous replication type. Note that the service plan also needs to support synchronous replication.
timescaledb (object). System-wide settings for the timescaledb extension. See below for nested schema.
variant (string, Enum: aiven, timescale). Variant of the PostgreSQL service, may affect the features that are exposed by default.
work_mem (integer, Minimum: 1, Maximum: 1024). Sets the maximum amount of memory to be used by a query operation (such as a sort or hash table) before writing to temporary disk files, in MB. Default is 1MB + 0.075% of total RAM (up to 32MB).

Appears on spec.userConfig.
Allow incoming connections from CIDR address block, e.g. 10.20.0.0/16.

Required

network (string, MaxLength: 43). CIDR address block.

Optional

description (string, MaxLength: 1024). Description for IP filter list entry.

Appears on spec.userConfig.
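For example, a hedged ip_filter list under userConfig (the CIDR blocks and description are placeholders):

```yaml
userConfig:
  ip_filter:
    - network: 10.20.0.0/16
      description: office network  # optional
    - network: 192.168.1.0/24
```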
Migrate data from existing server.

Required

host (string, MaxLength: 255). Hostname or IP address of the server where to migrate data from.
port (integer, Minimum: 1, Maximum: 65535). Port number of the server where to migrate data from.

Optional

dbname (string, MaxLength: 63). Database name for bootstrapping the initial connection.
ignore_dbs (string, MaxLength: 2048). Comma-separated list of databases, which should be ignored during migration (supported by MySQL and PostgreSQL only at the moment).
method (string, Enum: dump, replication). The migration method to be used (currently supported only by Redis, Dragonfly, MySQL and PostgreSQL service types).
password (string, MaxLength: 256). Password for authentication with the server where to migrate data from.
ssl (boolean). The server where to migrate data from is secured with SSL.
username (string, MaxLength: 256). User name for authentication with the server where to migrate data from.

Appears on spec.userConfig.
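A minimal sketch of a migration block under userConfig (host, credentials, and database names are placeholders):

```yaml
userConfig:
  migration:
    host: old-db.example.com
    port: 5432
    username: migrator
    password: change-me
    ssl: true
    method: dump  # or replication, where supported
```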
postgresql.conf configuration values.

Optional

autovacuum_analyze_scale_factor (number, Minimum: 0, Maximum: 1). Specifies a fraction of the table size to add to autovacuum_analyze_threshold when deciding whether to trigger an ANALYZE. The default is 0.2 (20% of table size).
autovacuum_analyze_threshold (integer, Minimum: 0, Maximum: 2147483647). Specifies the minimum number of inserted, updated or deleted tuples needed to trigger an ANALYZE in any one table. The default is 50 tuples.
autovacuum_freeze_max_age (integer, Minimum: 200000000, Maximum: 1500000000). Specifies the maximum age (in transactions) that a table's pg_class.relfrozenxid field can attain before a VACUUM operation is forced to prevent transaction ID wraparound within the table. Note that the system will launch autovacuum processes to prevent wraparound even when autovacuum is otherwise disabled. This parameter will cause the server to be restarted.
autovacuum_max_workers (integer, Minimum: 1, Maximum: 20). Specifies the maximum number of autovacuum processes (other than the autovacuum launcher) that may be running at any one time. The default is three. This parameter can only be set at server start.
autovacuum_naptime (integer, Minimum: 1, Maximum: 86400). Specifies the minimum delay between autovacuum runs on any given database. The delay is measured in seconds, and the default is one minute.
autovacuum_vacuum_cost_delay (integer, Minimum: -1, Maximum: 100). Specifies the cost delay value that will be used in automatic VACUUM operations. If -1 is specified, the regular vacuum_cost_delay value will be used. The default value is 20 milliseconds.
autovacuum_vacuum_cost_limit (integer, Minimum: -1, Maximum: 10000). Specifies the cost limit value that will be used in automatic VACUUM operations. If -1 is specified (which is the default), the regular vacuum_cost_limit value will be used.
autovacuum_vacuum_scale_factor (number, Minimum: 0, Maximum: 1). Specifies a fraction of the table size to add to autovacuum_vacuum_threshold when deciding whether to trigger a VACUUM. The default is 0.2 (20% of table size).
autovacuum_vacuum_threshold (integer, Minimum: 0, Maximum: 2147483647). Specifies the minimum number of updated or deleted tuples needed to trigger a VACUUM in any one table. The default is 50 tuples.
bgwriter_delay (integer, Minimum: 10, Maximum: 10000). Specifies the delay between activity rounds for the background writer in milliseconds. Default is 200.
bgwriter_flush_after (integer, Minimum: 0, Maximum: 2048). Whenever more than bgwriter_flush_after bytes have been written by the background writer, attempt to force the OS to issue these writes to the underlying storage. Specified in kilobytes, default is 512. Setting of 0 disables forced writeback.
bgwriter_lru_maxpages (integer, Minimum: 0, Maximum: 1073741823). In each round, no more than this many buffers will be written by the background writer. Setting this to zero disables background writing. Default is 100.
bgwriter_lru_multiplier (number, Minimum: 0, Maximum: 10). The average recent need for new buffers is multiplied by bgwriter_lru_multiplier to arrive at an estimate of the number that will be needed during the next round (up to bgwriter_lru_maxpages). 1.0 represents a "just in time" policy of writing exactly the number of buffers predicted to be needed. Larger values provide some cushion against spikes in demand, while smaller values intentionally leave writes to be done by server processes. The default is 2.0.
deadlock_timeout (integer, Minimum: 500, Maximum: 1800000). This is the amount of time, in milliseconds, to wait on a lock before checking to see if there is a deadlock condition.
default_toast_compression (string, Enum: lz4, pglz). Specifies the default TOAST compression method for values of compressible columns (the default is lz4).
idle_in_transaction_session_timeout (integer, Minimum: 0, Maximum: 604800000). Time out sessions with open transactions after this number of milliseconds.
jit (boolean). Controls system-wide use of Just-in-Time Compilation (JIT).
log_autovacuum_min_duration (integer, Minimum: -1, Maximum: 2147483647). Causes each action executed by autovacuum to be logged if it ran for at least the specified number of milliseconds. Setting this to zero logs all autovacuum actions. Minus-one (the default) disables logging autovacuum actions.
log_error_verbosity (string, Enum: TERSE, DEFAULT, VERBOSE). Controls the amount of detail written in the server log for each message that is logged.
log_line_prefix (string, Enum: 'pid=%p,user=%u,db=%d,app=%a,client=%h ', '%t [%p]: [%l-1] user=%u,db=%d,app=%a,client=%h ', '%m [%p] %q[user=%u,db=%d,app=%a] '). Choose from one of the available log formats. These can support popular log analyzers like pgbadger, pganalyze, etc.
log_min_duration_statement (integer, Minimum: -1, Maximum: 86400000). Log statements that take more than this number of milliseconds to run; -1 disables.
log_temp_files (integer, Minimum: -1, Maximum: 2147483647). Log statements for each temporary file created larger than this number of kilobytes; -1 disables.
max_files_per_process (integer, Minimum: 1000, Maximum: 4096). PostgreSQL maximum number of files that can be open per process.
max_locks_per_transaction (integer, Minimum: 64, Maximum: 6400). PostgreSQL maximum locks per transaction.
max_logical_replication_workers (integer, Minimum: 4, Maximum: 64). PostgreSQL maximum logical replication workers (taken from the pool of max_parallel_workers).
max_parallel_workers (integer, Minimum: 0, Maximum: 96). Sets the maximum number of workers that the system can support for parallel queries.
max_parallel_workers_per_gather (integer, Minimum: 0, Maximum: 96). Sets the maximum number of workers that can be started by a single Gather or Gather Merge node.
max_pred_locks_per_transaction (integer, Minimum: 64, Maximum: 5120). PostgreSQL maximum predicate locks per transaction.
max_prepared_transactions (integer, Minimum: 0, Maximum: 10000). PostgreSQL maximum prepared transactions.
max_replication_slots (integer, Minimum: 8, Maximum: 64). PostgreSQL maximum replication slots.
max_slot_wal_keep_size (integer, Minimum: -1, Maximum: 2147483647). PostgreSQL maximum WAL size (MB) reserved for replication slots. Default is -1 (unlimited). wal_keep_size minimum WAL size setting takes precedence over this.
max_stack_depth (integer, Minimum: 2097152, Maximum: 6291456). Maximum depth of the stack in bytes.
max_standby_archive_delay (integer, Minimum: 1, Maximum: 43200000). Max standby archive delay in milliseconds.
max_standby_streaming_delay (integer, Minimum: 1, Maximum: 43200000). Max standby streaming delay in milliseconds.
max_wal_senders (integer, Minimum: 20, Maximum: 64). PostgreSQL maximum WAL senders.
max_worker_processes (integer, Minimum: 8, Maximum: 96). Sets the maximum number of background processes that the system can support.
pg_partman_bgw.interval (integer, Minimum: 3600, Maximum: 604800). Sets the time interval to run pg_partman's scheduled tasks.
pg_partman_bgw.role (string, Pattern: ^[_A-Za-z0-9][-._A-Za-z0-9]{0,63}$, MaxLength: 64). Controls which role to use for pg_partman's scheduled background tasks.
pg_stat_monitor.pgsm_enable_query_plan (boolean). Enables or disables query plan monitoring.
pg_stat_monitor.pgsm_max_buckets (integer, Minimum: 1, Maximum: 10). Sets the maximum number of buckets.
pg_stat_statements.track (string, Enum: all, top, none). Controls which statements are counted. Specify top to track top-level statements (those issued directly by clients), all to also track nested statements (such as statements invoked within functions), or none to disable statement statistics collection. The default value is top.
temp_file_limit (integer, Minimum: -1, Maximum: 2147483647). PostgreSQL temporary file limit in KiB; -1 for unlimited.
timezone (string, MaxLength: 64). PostgreSQL service timezone.
track_activity_query_size (integer, Minimum: 1024, Maximum: 10240). Specifies the number of bytes reserved to track the currently executing command for each active session.
track_commit_timestamp (string, Enum: off, on). Record commit time of transactions.
track_functions (string, Enum: all, pl, none). Enables tracking of function call counts and time used.
track_io_timing (string, Enum: off, on). Enables timing of database I/O calls. This parameter is off by default, because it will repeatedly query the operating system for the current time, which may cause significant overhead on some platforms.
wal_sender_timeout (integer). Terminate replication connections that are inactive for longer than this amount of time, in milliseconds. Setting this value to zero disables the timeout.
wal_writer_delay (integer, Minimum: 10, Maximum: 200). WAL flush interval in milliseconds. Note that setting this value to lower than the default 200ms may negatively impact performance.

Appears on spec.userConfig.
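As a sketch, a few of these postgresql.conf values set under userConfig.pg (the values are arbitrary examples):

```yaml
userConfig:
  pg:
    autovacuum_naptime: 60             # seconds between autovacuum runs
    log_min_duration_statement: 1000   # log statements slower than 1 second
    pg_stat_statements.track: top
```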
System-wide settings for the pg_qualstats extension.

Optional

enabled (boolean). Enable / Disable pg_qualstats.
min_err_estimate_num (integer, Minimum: 0). Error estimation num threshold to save quals.
min_err_estimate_ratio (integer, Minimum: 0). Error estimation ratio threshold to save quals.
track_constants (boolean). Enable / Disable pg_qualstats constants tracking.
track_pg_catalog (boolean). Track quals on system catalogs too.

Appears on spec.userConfig.
PGBouncer connection pooling settings.

Optional

autodb_idle_timeout (integer, Minimum: 0, Maximum: 86400). If the automatically created database pools have been unused this many seconds, they are freed. If 0 then timeout is disabled. [seconds].
autodb_max_db_connections (integer, Minimum: 0, Maximum: 2147483647). Do not allow more than this many server connections per database (regardless of user). Setting it to 0 means unlimited.
autodb_pool_mode (string, Enum: session, transaction, statement). PGBouncer pool mode.
autodb_pool_size (integer, Minimum: 0, Maximum: 10000). If non-zero then create automatically a pool of that size per user when a pool doesn't exist.
ignore_startup_parameters (array of strings, MaxItems: 32). List of parameters to ignore when given in startup packet.
min_pool_size (integer, Minimum: 0, Maximum: 10000). Add more server connections to pool if below this number. Improves behavior when usual load comes suddenly back after period of total inactivity. The value is effectively capped at the pool size.
server_idle_timeout (integer, Minimum: 0, Maximum: 86400). If a server connection has been idle more than this many seconds it will be dropped. If 0 then timeout is disabled. [seconds].
server_lifetime (integer, Minimum: 60, Maximum: 86400). The pooler will close an unused server connection that has been connected longer than this. [seconds].
server_reset_query_always (boolean). Run server_reset_query (DISCARD ALL) in all pooling modes.

Appears on spec.userConfig.
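For illustration, a hedged pgbouncer block under userConfig (the values are arbitrary):

```yaml
userConfig:
  pgbouncer:
    autodb_pool_mode: transaction
    autodb_pool_size: 20
    server_idle_timeout: 600  # seconds
```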
System-wide settings for pglookout.

Required

max_failover_replication_time_lag (integer, Minimum: 10). Number of seconds of master unavailability before triggering database failover to standby.

Appears on spec.userConfig.
Allow access to selected service ports from private networks.

Optional

pg (boolean). Allow clients to connect to pg with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.
pgbouncer (boolean). Allow clients to connect to pgbouncer with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.
prometheus (boolean). Allow clients to connect to prometheus with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.

Appears on spec.userConfig.
Allow access to selected service components through Privatelink.

Optional

pg (boolean). Enable pg.
pgbouncer (boolean). Enable pgbouncer.
prometheus (boolean). Enable prometheus.

Appears on spec.userConfig.
Allow access to selected service ports from the public Internet.

Optional

pg (boolean). Allow clients to connect to pg from the public internet for service nodes that are in a project VPC or another type of private network.
pgbouncer (boolean). Allow clients to connect to pgbouncer from the public internet for service nodes that are in a project VPC or another type of private network.
prometheus (boolean). Allow clients to connect to prometheus from the public internet for service nodes that are in a project VPC or another type of private network.

Appears on spec.userConfig.
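To tie the three access blocks together, a sketch of toggling them in userConfig (the boolean choices are illustrative):

```yaml
userConfig:
  private_access:
    pg: true           # private DNS name resolving to private IPs
  privatelink_access:
    pgbouncer: true    # expose pgbouncer through Privatelink
  public_access:
    prometheus: false  # keep prometheus off the public internet
```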
System-wide settings for the timescaledb extension.

Required

max_background_workers (integer, Minimum: 1, Maximum: 4096). The number of background workers for timescaledb operations. You should configure this setting to the sum of your number of databases and the total number of concurrent background workers you want running at any given point in time.

```yaml
apiVersion: aiven.io/v1alpha1
kind: Project
metadata:
  name: my-project
spec:
  authSecretRef:
    name: aiven-token
    key: token

  connInfoSecretTarget:
    name: project-secret
    prefix: MY_SECRET_PREFIX_
    annotations:
      foo: bar
    labels:
      baz: egg

  tags:
    env: prod

  billingAddress: NYC
  cloud: aws-eu-west-1
```
"},{"location":"api-reference/project.html#Project","title":"Project","text":"Project is the Schema for the projects API.
Required
apiVersion
(string). Value aiven.io/v1alpha1
.kind
(string). Value Project
.metadata
(object). Data that identifies the object, including a name
string and optional namespace
.spec
(object). ProjectSpec defines the desired state of Project. See below for nested schema.Appears on Project
.
ProjectSpec defines the desired state of Project.

Optional

accountId (string, MaxLength: 32). Account ID.
authSecretRef (object). Authentication reference to Aiven token in a secret. See below for nested schema.
billingAddress (string, MaxLength: 1000). Billing name and address of the project.
billingCurrency (string, Enum: AUD, CAD, CHF, DKK, EUR, GBP, NOK, SEK, USD). Billing currency.
billingEmails (array of strings, MaxItems: 10). Billing contact emails of the project.
billingExtraText (string, MaxLength: 1000). Extra text to be included in all project invoices, e.g. purchase order or cost center number.
billingGroupId (string, MinLength: 36, MaxLength: 36). BillingGroup ID.
cardId (string, MaxLength: 64). Credit card ID; the ID may be either the last 4 digits of the card or the actual ID.
cloud (string, MaxLength: 256). Target cloud, example: aws-eu-central-1.
connInfoSecretTarget (object). Information regarding secret creation. Exposed keys: PROJECT_CA_CERT. See below for nested schema.
copyFromProject (string, MaxLength: 63). Project name from which to copy settings to the new project.
countryCode (string, MinLength: 2, MaxLength: 2). Billing country code of the project.
tags (object, AdditionalProperties: string). Tags are key-value pairs that allow you to categorize projects.
technicalEmails (array of strings, MaxItems: 10). Technical contact emails of the project.

Appears on spec.
Authentication reference to Aiven token in a secret.

Required

key (string, MinLength: 1).
name (string, MinLength: 1).

Appears on spec.
Information regarding secret creation. Exposed keys: PROJECT_CA_CERT.

Required

name (string). Name of the secret resource to be created. By default, is equal to the resource name.

Optional

annotations (object, AdditionalProperties: string). Annotations added to the secret.
labels (object, AdditionalProperties: string). Labels added to the secret.
prefix (string). Prefix for the secret's keys. Added "as is" without any transformations. By default, is equal to the kind name in uppercase + underscore, e.g. KAFKA_, REDIS_, etc.

```yaml
apiVersion: aiven.io/v1alpha1
kind: ProjectVPC
metadata:
  name: my-project-vpc
spec:
  authSecretRef:
    name: aiven-token
    key: token

  project: aiven-project-name
  cloudName: google-europe-west1
  networkCidr: 10.0.0.0/24
```
"},{"location":"api-reference/projectvpc.html#ProjectVPC","title":"ProjectVPC","text":"ProjectVPC is the Schema for the projectvpcs API.
Required
apiVersion
(string). Value aiven.io/v1alpha1
.kind
(string). Value ProjectVPC
.metadata
(object). Data that identifies the object, including a name
string and optional namespace
.spec
(object). ProjectVPCSpec defines the desired state of ProjectVPC. See below for nested schema.Appears on ProjectVPC
.
ProjectVPCSpec defines the desired state of ProjectVPC.

Required

cloudName (string, Immutable, MaxLength: 256). Cloud the VPC is in.
networkCidr (string, Immutable, MaxLength: 36). Network address range used by the VPC, like 192.168.0.0/24.
project (string, Immutable, MaxLength: 63, Format: ^[a-zA-Z0-9_-]*$). The project the VPC belongs to.

Optional

authSecretRef (object). Authentication reference to Aiven token in a secret. See below for nested schema.

Appears on spec.
Authentication reference to Aiven token in a secret.

Required

key (string, MinLength: 1).
name (string, MinLength: 1).

```yaml
apiVersion: aiven.io/v1alpha1
kind: Redis
metadata:
  name: k8s-redis
spec:
  authSecretRef:
    name: aiven-token
    key: token

  connInfoSecretTarget:
    name: redis-token
    prefix: MY_SECRET_PREFIX_
    annotations:
      foo: bar
    labels:
      baz: egg

  project: my-aiven-project
  cloudName: google-europe-west1
  plan: startup-4

  maintenanceWindowDow: friday
  maintenanceWindowTime: 23:00:00

  userConfig:
    redis_maxmemory_policy: "allkeys-random"
```
"},{"location":"api-reference/redis.html#Redis","title":"Redis","text":"Redis is the Schema for the redis API.
Required
apiVersion
(string). Value aiven.io/v1alpha1
.kind
(string). Value Redis
.metadata
(object). Data that identifies the object, including a name
string and optional namespace
.spec
(object). RedisSpec defines the desired state of Redis. See below for nested schema.Appears on Redis
.
RedisSpec defines the desired state of Redis.

Required

plan (string, MaxLength: 128). Subscription plan.
project (string, Immutable, MaxLength: 63, Format: ^[a-zA-Z0-9_-]*$). Target project.

Optional

authSecretRef (object). Authentication reference to Aiven token in a secret. See below for nested schema.
cloudName (string, MaxLength: 256). Cloud the service runs in.
connInfoSecretTarget (object). Information regarding secret creation. Exposed keys: REDIS_HOST, REDIS_PORT, REDIS_USER, REDIS_PASSWORD. See below for nested schema.
disk_space (string, Format: ^[1-9][0-9]*(GiB|G)*). The disk space of the service; possible values depend on the service type, the cloud provider and the project. Reducing will result in the service re-balancing.
maintenanceWindowDow (string, Enum: monday, tuesday, wednesday, thursday, friday, saturday, sunday). Day of week when maintenance operations should be performed. One of monday, tuesday, wednesday, etc.
maintenanceWindowTime (string, MaxLength: 8). Time of day when maintenance operations should be performed. UTC time in HH:mm:ss format.
projectVPCRef (object). ProjectVPCRef reference to ProjectVPC resource to use its ID as ProjectVPCID automatically. See below for nested schema.
projectVpcId (string, MaxLength: 36). Identifier of the VPC the service should be in, if any.
serviceIntegrations (array of objects, Immutable, MaxItems: 1). Service integrations to specify when creating a service. Not applied after initial service creation. See below for nested schema.
tags (object, AdditionalProperties: string). Tags are key-value pairs that allow you to categorize services.
terminationProtection (boolean). Prevent service from being deleted. It is recommended to have this enabled for all services.
userConfig (object). Redis specific user configuration options. See below for nested schema.

Appears on spec.
Authentication reference to Aiven token in a secret.

Required

key (string, MinLength: 1).
name (string, MinLength: 1).

Appears on spec.
Information regarding secret creation. Exposed keys: REDIS_HOST, REDIS_PORT, REDIS_USER, REDIS_PASSWORD.

Required

name (string). Name of the secret resource to be created. By default, is equal to the resource name.

Optional

annotations (object, AdditionalProperties: string). Annotations added to the secret.
labels (object, AdditionalProperties: string). Labels added to the secret.
prefix (string). Prefix for the secret's keys. Added "as is" without any transformations. By default, is equal to the kind name in uppercase + underscore, e.g. KAFKA_, REDIS_, etc.

Appears on spec.
ProjectVPCRef reference to ProjectVPC resource to use its ID as ProjectVPCID automatically.

Required

name (string, MinLength: 1).

Optional

namespace (string, MinLength: 1).

Appears on spec.
Service integrations to specify when creating a service. Not applied after initial service creation.

Required

integrationType (string, Enum: read_replica).
sourceServiceName (string, MinLength: 1, MaxLength: 64).

Appears on spec.
Redis specific user configuration options.

Optional

additional_backup_regions (array of strings, MaxItems: 1). Additional Cloud Regions for Backup Replication.
ip_filter (array of objects, MaxItems: 1024). Allow incoming connections from CIDR address block, e.g. 10.20.0.0/16. See below for nested schema.
migration (object). Migrate data from existing server. See below for nested schema.
private_access (object). Allow access to selected service ports from private networks. See below for nested schema.
privatelink_access (object). Allow access to selected service components through Privatelink. See below for nested schema.
project_to_fork_from (string, Immutable, MaxLength: 63). Name of another project to fork a service from. This has effect only when a new service is being created.
public_access (object). Allow access to selected service ports from the public Internet. See below for nested schema.
recovery_basebackup_name (string, Pattern: ^[a-zA-Z0-9-_:.]+$, MaxLength: 128). Name of the basebackup to restore in forked service.
redis_acl_channels_default (string, Enum: allchannels, resetchannels). Determines the default pub/sub channels' ACL for new users if an ACL is not supplied. When this option is not defined, all_channels is assumed to keep backward compatibility. This option doesn't affect the Redis configuration acl-pubsub-default.
redis_io_threads (integer, Minimum: 1, Maximum: 32). Set Redis IO thread count. Changing this will cause a restart of the Redis service.
redis_lfu_decay_time (integer, Minimum: 1, Maximum: 120). LFU maxmemory-policy counter decay time in minutes.
redis_lfu_log_factor (integer, Minimum: 0, Maximum: 100). Counter logarithm factor for volatile-lfu and allkeys-lfu maxmemory-policies.
redis_maxmemory_policy (string, Enum: noeviction, allkeys-lru, volatile-lru, allkeys-random, volatile-random, volatile-ttl, volatile-lfu, allkeys-lfu). Redis maxmemory-policy.
redis_notify_keyspace_events (string, Pattern: ^[KEg\$lshzxeA]*$, MaxLength: 32). Set notify-keyspace-events option.
redis_number_of_databases (integer, Minimum: 1, Maximum: 128). Set number of Redis databases. Changing this will cause a restart of the Redis service.
redis_persistence (string, Enum: off, rdb). When persistence is rdb, Redis does RDB dumps every 10 minutes if any key is changed. RDB dumps are also done according to the backup schedule for backup purposes. When persistence is off, no RDB dumps or backups are done, so data can be lost at any moment if the service is restarted for any reason or powered off. The service also can't be forked.
redis_pubsub_client_output_buffer_limit (integer, Minimum: 32, Maximum: 512). Set output buffer limit for pub / sub clients in MB. The value is the hard limit, the soft limit is 1/4 of the hard limit. When setting the limit, be mindful of the available memory in the selected service plan.
redis_ssl (boolean). Require SSL to access Redis.
redis_timeout (integer, Minimum: 0, Maximum: 31536000). Redis idle connection timeout in seconds.
service_log (boolean). Store logs for the service so that they are available in the HTTP API and console.
service_to_fork_from (string, Immutable, MaxLength: 64). Name of another service to fork from. This has effect only when a new service is being created.
static_ips (boolean). Use static public IP addresses.

Appears on spec.userConfig.
Allow incoming connections from CIDR address block, e.g. 10.20.0.0/16.

Required

network (string, MaxLength: 43). CIDR address block.

Optional

description (string, MaxLength: 1024). Description for IP filter list entry.

Appears on spec.userConfig.
Migrate data from existing server.

Required

host (string, MaxLength: 255). Hostname or IP address of the server where to migrate data from.
port (integer, Minimum: 1, Maximum: 65535). Port number of the server where to migrate data from.

Optional

dbname (string, MaxLength: 63). Database name for bootstrapping the initial connection.
ignore_dbs (string, MaxLength: 2048). Comma-separated list of databases, which should be ignored during migration (supported by MySQL and PostgreSQL only at the moment).
method (string, Enum: dump, replication). The migration method to be used (currently supported only by Redis, Dragonfly, MySQL and PostgreSQL service types).
password (string, MaxLength: 256). Password for authentication with the server where to migrate data from.
ssl (boolean). The server where to migrate data from is secured with SSL.
username (string, MaxLength: 256). User name for authentication with the server where to migrate data from.

Appears on spec.userConfig.
Allow access to selected service ports from private networks.

Optional

prometheus (boolean). Allow clients to connect to prometheus with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.
redis (boolean). Allow clients to connect to redis with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.

Appears on spec.userConfig.
Allow access to selected service components through Privatelink.

Optional

prometheus (boolean). Enable prometheus.
redis (boolean). Enable redis.

Appears on spec.userConfig.
Allow access to selected service ports from the public Internet.

Optional

prometheus (boolean). Allow clients to connect to prometheus from the public internet for service nodes that are in a project VPC or another type of private network.
redis (boolean). Allow clients to connect to redis from the public internet for service nodes that are in a project VPC or another type of private network.

```yaml
apiVersion: aiven.io/v1alpha1
kind: ServiceIntegration
metadata:
  name: my-service-integration
spec:
  authSecretRef:
    name: aiven-token
    key: token

  project: aiven-project-name

  integrationType: kafka_logs
  sourceServiceName: my-source-service-name
  destinationServiceName: my-destination-service-name

  kafkaLogs:
    kafka_topic: my-kafka-topic
```
"},{"location":"api-reference/serviceintegration.html#ServiceIntegration","title":"ServiceIntegration","text":"ServiceIntegration is the Schema for the serviceintegrations API.
Required
apiVersion
(string). Value aiven.io/v1alpha1
.kind
(string). Value ServiceIntegration
.metadata
(object). Data that identifies the object, including a name
string and optional namespace
.spec
(object). ServiceIntegrationSpec defines the desired state of ServiceIntegration. See below for nested schema.Appears on ServiceIntegration
.
ServiceIntegrationSpec defines the desired state of ServiceIntegration.

Required

integrationType (string, Enum: alertmanager, autoscaler, caching, cassandra_cross_service_cluster, clickhouse_kafka, clickhouse_postgresql, dashboard, datadog, datasource, external_aws_cloudwatch_logs, external_aws_cloudwatch_metrics, external_elasticsearch_logs, external_google_cloud_logging, external_opensearch_logs, flink, flink_external_kafka, internal_connectivity, jolokia, kafka_connect, kafka_logs, kafka_mirrormaker, logs, m3aggregator, m3coordinator, metrics, opensearch_cross_cluster_replication, opensearch_cross_cluster_search, prometheus, read_replica, rsyslog, schema_registry_proxy, stresstester, thanosquery, thanosstore, vmalert; Immutable). Type of the service integration accepted by Aiven API. Some values may not be supported by the operator.
project (string, Immutable, MaxLength: 63, Format: ^[a-zA-Z0-9_-]*$). Project the integration belongs to.

Optional

authSecretRef (object). Authentication reference to Aiven token in a secret. See below for nested schema.
clickhouseKafka (object). Clickhouse Kafka configuration values. See below for nested schema.
clickhousePostgresql (object). Clickhouse PostgreSQL configuration values. See below for nested schema.
datadog (object). Datadog specific user configuration options. See below for nested schema.
destinationEndpointId (string, Immutable, MaxLength: 36). Destination endpoint for the integration (if any).
destinationProjectName (string, Immutable, MaxLength: 63). Destination project for the integration (if any).
destinationServiceName (string, Immutable, MaxLength: 64). Destination service for the integration (if any).
externalAWSCloudwatchMetrics (object). External AWS CloudWatch Metrics integration Logs configuration values. See below for nested schema.
kafkaConnect (object). Kafka Connect service configuration values. See below for nested schema.
kafkaLogs (object). Kafka logs configuration values. See below for nested schema.
kafkaMirrormaker (object). Kafka MirrorMaker configuration values. See below for nested schema.
logs (object). Logs configuration values. See below for nested schema.
metrics (object). Metrics configuration values. See below for nested schema.
sourceEndpointID (string, Immutable, MaxLength: 36). Source endpoint for the integration (if any).
sourceProjectName (string, Immutable, MaxLength: 63). Source project for the integration (if any).
sourceServiceName (string, Immutable, MaxLength: 64). Source service for the integration (if any).

Appears on spec.
Authentication reference to Aiven token in a secret.

Required

key (string, MinLength: 1).
name (string, MinLength: 1).

Appears on spec.
Clickhouse Kafka configuration values.

Required

tables (array of objects, MaxItems: 100). Tables to create. See below for nested schema.

Appears on spec.clickhouseKafka.
Tables to create.

Required

columns (array of objects, MaxItems: 100). Table columns. See below for nested schema.
data_format (string, Enum: Avro, CSV, JSONAsString, JSONCompactEachRow, JSONCompactStringsEachRow, JSONEachRow, JSONStringsEachRow, MsgPack, TSKV, TSV, TabSeparated, RawBLOB, AvroConfluent). Message data format.
group_name (string, MinLength: 1, MaxLength: 249). Kafka consumers group.
name (string, MinLength: 1, MaxLength: 40). Name of the table.
topics (array of objects, MaxItems: 100). Kafka topics. See below for nested schema.

Optional

auto_offset_reset (string, Enum: smallest, earliest, beginning, largest, latest, end). Action to take when there is no initial offset in offset store or the desired offset is out of range.
date_time_input_format (string, Enum: basic, best_effort, best_effort_us). Method to read DateTime from text input formats.
handle_error_mode (string, Enum: default, stream). How to handle errors for Kafka engine.
max_block_size (integer, Minimum: 0, Maximum: 1000000000). Number of rows collected by poll(s) for flushing data from Kafka.
max_rows_per_message (integer, Minimum: 1, Maximum: 1000000000). The maximum number of rows produced in one kafka message for row-based formats.
num_consumers (integer, Minimum: 1, Maximum: 10). The number of consumers per table per replica.
poll_max_batch_size (integer, Minimum: 0, Maximum: 1000000000). Maximum amount of messages to be polled in a single Kafka poll.
skip_broken_messages (integer, Minimum: 0, Maximum: 1000000000). Skip at least this number of broken messages from Kafka topic per block.

Appears on spec.clickhouseKafka.tables.
Table columns.

Required

name (string, MinLength: 1, MaxLength: 40). Column name.
type (string, MinLength: 1, MaxLength: 1000). Column type.

Appears on spec.clickhouseKafka.tables.
Kafka topics.

Required

name (string, MinLength: 1, MaxLength: 249). Name of the topic.

Appears on spec.
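Putting the nested schemas together, a hedged sketch of a clickhouseKafka block (table, column, group, and topic names are placeholders):

```yaml
clickhouseKafka:
  tables:
    - name: events
      group_name: clickhouse-ingest   # Kafka consumer group
      data_format: JSONEachRow
      columns:
        - name: id
          type: UInt64
        - name: payload
          type: String
      topics:
        - name: events-topic
```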
Clickhouse PostgreSQL configuration values.

Required

databases (array of objects, MaxItems: 10). Databases to expose. See below for nested schema.

Appears on spec.clickhousePostgresql.
Databases to expose.

Optional

database (string, MinLength: 1, MaxLength: 63). PostgreSQL database to expose.
schema (string, MinLength: 1, MaxLength: 63). PostgreSQL schema to expose.

Appears on spec.
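For instance, a sketch exposing one PostgreSQL database and schema to Clickhouse (the names are placeholders):

```yaml
clickhousePostgresql:
  databases:
    - database: defaultdb
      schema: public
```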
Datadog specific user configuration options.

Optional

datadog_dbm_enabled (boolean). Enable Datadog Database Monitoring.
datadog_tags (array of objects, MaxItems: 32). Custom tags provided by user. See below for nested schema.
exclude_consumer_groups (array of strings, MaxItems: 1024). List of custom metrics.
exclude_topics (array of strings, MaxItems: 1024). List of topics to exclude.
include_consumer_groups (array of strings, MaxItems: 1024). List of custom metrics.
include_topics (array of strings, MaxItems: 1024). List of topics to include.
kafka_custom_metrics (array of strings, MaxItems: 1024). List of custom metrics.
max_jmx_metrics (integer, Minimum: 10, Maximum: 100000). Maximum number of JMX metrics to send.
opensearch (object). Datadog Opensearch Options. See below for nested schema.
redis (object). Datadog Redis Options. See below for nested schema.

Appears on spec.datadog.
Custom tags provided by user.

Required

tag (string, MinLength: 1, MaxLength: 200). Tag format and usage are described here: https://docs.datadoghq.com/getting_started/tagging. Tags with prefix aiven- are reserved for Aiven.

Optional

comment (string, MaxLength: 1024). Optional tag explanation.

Appears on spec.datadog.
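A sketch of custom Datadog tags on the integration (the tag values and comment are placeholders; remember that the aiven- prefix is reserved):

```yaml
datadog:
  datadog_tags:
    - tag: env:production
      comment: primary production cluster  # optional explanation
    - tag: team:data-platform
```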
Datadog Opensearch Options.

Optional

index_stats_enabled (boolean). Enable Datadog Opensearch Index Monitoring.
pending_task_stats_enabled (boolean). Enable Datadog Opensearch Pending Task Monitoring.
pshard_stats_enabled (boolean). Enable Datadog Opensearch Primary Shard Monitoring.

Appears on spec.datadog.
Datadog Redis Options.

Required

command_stats_enabled (boolean). Enable command_stats option in the agent's configuration.

Appears on spec.
External AWS CloudWatch Metrics integration Logs configuration values.

Optional

dropped_metrics (array of objects, MaxItems: 1024). Metrics to not send to AWS CloudWatch (takes precedence over extra_metrics). See below for nested schema.
extra_metrics (array of objects, MaxItems: 1024). Metrics to allow through to AWS CloudWatch (in addition to default metrics). See below for nested schema.

Appears on spec.externalAWSCloudwatchMetrics.
Metrics to not send to AWS CloudWatch (takes precedence over extra_metrics).

Required

field (string, MaxLength: 1000). Identifier of a value in the metric.
metric (string, MaxLength: 1000). Identifier of the metric.

Appears on spec.externalAWSCloudwatchMetrics.
Metrics to allow through to AWS CloudWatch (in addition to default metrics).

Required

field (string, MaxLength: 1000). Identifier of a value in the metric.
metric (string, MaxLength: 1000). Identifier of the metric.

Appears on spec.
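As a hedged sketch, dropping one metric while allowing an extra one through (the metric and field identifiers are invented placeholders, not real defaults):

```yaml
externalAWSCloudwatchMetrics:
  dropped_metrics:
    - metric: java.lang:GarbageCollector   # placeholder identifier
      field: CollectionCount               # placeholder identifier
  extra_metrics:
    - metric: kafka.server:BrokerTopicMetrics  # placeholder identifier
      field: MessagesInPerSec                  # placeholder identifier
```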
Kafka Connect service configuration values.

Required

kafka_connect (object). Kafka Connect service configuration values. See below for nested schema.

Appears on spec.kafkaConnect.
Kafka Connect service configuration values.

Optional

config_storage_topic (string, MaxLength: 249). The name of the topic where connector and task configuration data are stored. This must be the same for all workers with the same group_id.
group_id (string, MaxLength: 249). A unique string that identifies the Connect cluster group this worker belongs to.
offset_storage_topic (string, MaxLength: 249). The name of the topic where connector and task configuration offsets are stored. This must be the same for all workers with the same group_id.
status_storage_topic (string, MaxLength: 249). The name of the topic where connector and task configuration status updates are stored. This must be the same for all workers with the same group_id.

Appears on spec.
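A sketch of the nested kafka_connect block (the topic and group names are placeholders):

```yaml
kafkaConnect:
  kafka_connect:
    group_id: connect
    config_storage_topic: __connect_configs
    offset_storage_topic: __connect_offsets
    status_storage_topic: __connect_status
```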
Kafka logs configuration values.

Required

kafka_topic (string, MinLength: 1, MaxLength: 249). Topic name.

Optional

selected_log_fields (array of strings, MaxItems: 5). The list of logging fields that will be sent to the integration logging service. The MESSAGE and timestamp fields are always sent.

Appears on spec.
Kafka MirrorMaker configuration values.

Optional

cluster_alias (string, Pattern: ^[a-zA-Z0-9_.-]+$, MaxLength: 128). The alias under which the Kafka cluster is known to MirrorMaker. Can contain the following symbols: ASCII alphanumerics, ., _, and -.
kafka_mirrormaker (object). Kafka MirrorMaker configuration values. See below for nested schema.

Appears on spec.kafkaMirrormaker.
.
Kafka MirrorMaker configuration values.
Optional
consumer_fetch_min_bytes
(integer, Minimum: 1, Maximum: 5242880). The minimum amount of data the server should return for a fetch request.producer_batch_size
(integer, Minimum: 0, Maximum: 5242880). The batch size in bytes producer will attempt to collect before publishing to broker.producer_buffer_memory
(integer, Minimum: 5242880, Maximum: 134217728). The amount of bytes producer can use for buffering data before publishing to broker.producer_compression_type
(string, Enum: gzip
, snappy
, lz4
, zstd
, none
). Specify the default compression type for producers. This configuration accepts the standard compression codecs (gzip
, snappy
, lz4
, zstd
). It additionally accepts none
which is the default and equivalent to no compression.producer_linger_ms
(integer, Minimum: 0, Maximum: 5000). The linger time (ms) for waiting new data to arrive for publishing.producer_max_request_size
(integer, Minimum: 0, Maximum: 268435456). The maximum request size in bytes.Appears on spec
.
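For illustration, a hedged kafkaMirrormaker block with a few producer tunables (the values are arbitrary):

```yaml
kafkaMirrormaker:
  cluster_alias: source-cluster
  kafka_mirrormaker:
    producer_compression_type: lz4
    producer_linger_ms: 100     # wait up to 100 ms to batch records
    producer_batch_size: 131072
```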
Logs configuration values.

Optional

elasticsearch_index_days_max (integer, Minimum: 1, Maximum: 10000). Elasticsearch index retention limit.
elasticsearch_index_prefix (string, MinLength: 1, MaxLength: 1024). Elasticsearch index prefix.
selected_log_fields (array of strings, MaxItems: 5). The list of logging fields that will be sent to the integration logging service. The MESSAGE and timestamp fields are always sent.

Appears on spec.
Metrics configuration values.

Optional

database (string, Pattern: ^[_A-Za-z0-9][-_A-Za-z0-9]{0,39}$, MaxLength: 40). Name of the database where to store metric datapoints. Only affects PostgreSQL destinations. Defaults to metrics. Note that this must be the same for all metrics integrations that write data to the same PostgreSQL service.
retention_days (integer, Minimum: 0, Maximum: 10000). Number of days to keep old metrics. Only affects PostgreSQL destinations. Set to 0 for no automatic cleanup. Defaults to 30 days.
ro_username (string, Pattern: ^[_A-Za-z0-9][-._A-Za-z0-9]{0,39}$, MaxLength: 40). Name of a user that can be used to read metrics. This will be used for Grafana integration (if enabled) to prevent Grafana users from making undesired changes. Only affects PostgreSQL destinations. Defaults to metrics_reader. Note that this must be the same for all metrics integrations that write data to the same PostgreSQL service.
source_mysql (object). Configuration options for metrics where source service is MySQL. See below for nested schema.
username (string, Pattern: ^[_A-Za-z0-9][-._A-Za-z0-9]{0,39}$, MaxLength: 40). Name of the user used to write metrics. Only affects PostgreSQL destinations. Defaults to metrics_writer. Note that this must be the same for all metrics integrations that write data to the same PostgreSQL service.

Appears on spec.metrics.
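A sketch of a metrics block for a PostgreSQL destination; the values shown are the documented defaults:

```yaml
metrics:
  database: metrics
  retention_days: 30
  username: metrics_writer
  ro_username: metrics_reader
```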
Configuration options for metrics where source service is MySQL.

Required

telegraf (object). Configuration options for Telegraf MySQL input plugin. See below for nested schema.

Appears on spec.metrics.source_mysql.
Configuration options for Telegraf MySQL input plugin.

Optional

gather_event_waits (boolean). Gather metrics from PERFORMANCE_SCHEMA.EVENT_WAITS.
gather_file_events_stats (boolean). Gather metrics from PERFORMANCE_SCHEMA.FILE_SUMMARY_BY_EVENT_NAME.
gather_index_io_waits (boolean). Gather metrics from PERFORMANCE_SCHEMA.TABLE_IO_WAITS_SUMMARY_BY_INDEX_USAGE.
gather_info_schema_auto_inc (boolean). Gather auto_increment columns and max values from information schema.
gather_innodb_metrics (boolean). Gather metrics from INFORMATION_SCHEMA.INNODB_METRICS.
gather_perf_events_statements (boolean). Gather metrics from PERFORMANCE_SCHEMA.EVENTS_STATEMENTS_SUMMARY_BY_DIGEST.
gather_process_list (boolean). Gather thread state counts from INFORMATION_SCHEMA.PROCESSLIST.
gather_slave_status (boolean). Gather metrics from SHOW SLAVE STATUS command output.
gather_table_io_waits (boolean). Gather metrics from PERFORMANCE_SCHEMA.TABLE_IO_WAITS_SUMMARY_BY_TABLE.
gather_table_lock_waits (boolean). Gather metrics from PERFORMANCE_SCHEMA.TABLE_LOCK_WAITS.
gather_table_schema (boolean). Gather metrics from INFORMATION_SCHEMA.TABLES.
perf_events_statements_digest_text_limit (integer, Minimum: 1, Maximum: 2048). Truncates digest text from perf_events_statements into this many characters.
perf_events_statements_limit (integer, Minimum: 1, Maximum: 4000). Limits metrics from perf_events_statements.
perf_events_statements_time_limit (integer, Minimum: 1, Maximum: 2592000). Only include perf_events_statements whose last seen is less than this many seconds.

```yaml
apiVersion: aiven.io/v1alpha1
kind: ServiceUser
metadata:
  name: my-service-user
spec:
  authSecretRef:
    name: aiven-token
    key: token

  connInfoSecretTarget:
    name: service-user-secret
    prefix: MY_SECRET_PREFIX_
    annotations:
      foo: bar
    labels:
      baz: egg

  project: aiven-project-name
  serviceName: my-service-name
```
"},{"location":"api-reference/serviceuser.html#ServiceUser","title":"ServiceUser","text":"ServiceUser is the Schema for the serviceusers API.
Required
apiVersion
(string). Value aiven.io/v1alpha1
.kind
(string). Value ServiceUser
.metadata
(object). Data that identifies the object, including a name
string and optional namespace
.spec
(object). ServiceUserSpec defines the desired state of ServiceUser. See below for nested schema.Appears on ServiceUser
.
ServiceUserSpec defines the desired state of ServiceUser.

Required

project (string, MaxLength: 63, Format: ^[a-zA-Z0-9_-]*$). Project to link the user to.
serviceName (string, MaxLength: 63). Service to link the user to.

Optional

authSecretRef (object). Authentication reference to Aiven token in a secret. See below for nested schema.
authentication (string, Enum: caching_sha2_password, mysql_native_password). Authentication details.
connInfoSecretTarget (object). Information regarding secret creation. Exposed keys: SERVICEUSER_HOST, SERVICEUSER_PORT, SERVICEUSER_USERNAME, SERVICEUSER_PASSWORD, SERVICEUSER_CA_CERT, SERVICEUSER_ACCESS_CERT, SERVICEUSER_ACCESS_KEY. See below for nested schema.

Appears on spec.
Authentication reference to Aiven token in a secret.

Required

key (string, MinLength: 1).
name (string, MinLength: 1).

Appears on spec.
Information regarding secret creation. Exposed keys: SERVICEUSER_HOST, SERVICEUSER_PORT, SERVICEUSER_USERNAME, SERVICEUSER_PASSWORD, SERVICEUSER_CA_CERT, SERVICEUSER_ACCESS_CERT, SERVICEUSER_ACCESS_KEY.

Required

name (string). Name of the secret resource to be created. By default, is equal to the resource name.

Optional

annotations (object, AdditionalProperties: string). Annotations added to the secret.
labels (object, AdditionalProperties: string). Labels added to the secret.
prefix (string). Prefix for the secret's keys. Added "as is" without any transformations. By default, is equal to the kind name in uppercase + underscore, e.g. KAFKA_, REDIS_, etc.

Contributing

The Aiven Operator for Kubernetes project accepts contributions via GitHub pull requests. This document outlines the process to help get your contribution accepted.
Please see also the Aiven Operator for Kubernetes Developer Guide.
"},{"location":"contributing/index.html#support-channels","title":"Support Channels","text":"This project offers support through GitHub issues and can be filed here. Moreover, GitHub issues are used as the primary method for tracking anything to do with the Aiven Operator for Kubernetes project.
"},{"location":"contributing/index.html#pull-request-process","title":"Pull Request Process","text":"In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation.
"},{"location":"contributing/index.html#our-standards","title":"Our Standards","text":"Examples of behavior that contributes to creating a positive environment include:
Examples of unacceptable behavior by participants include:
This project adheres to the Conventional Commits specification. Please make sure that your commit messages follow that specification.
"},{"location":"contributing/index.html#our-responsibilities","title":"Our Responsibilities","text":"Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.
"},{"location":"contributing/index.html#scope","title":"Scope","text":"This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.
"},{"location":"contributing/index.html#enforcement","title":"Enforcement","text":"Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at opensource@aiven.io. All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.
"},{"location":"contributing/developer-guide.html","title":"Developer guide","text":""},{"location":"contributing/developer-guide.html#getting-started","title":"Getting Started","text":"You must have a working Go environment and then clone the repository:
git clone git@github.com:aiven/aiven-operator.git\ncd aiven-operator\n
"},{"location":"contributing/developer-guide.html#resource-generation","title":"Resource generation","text":"Please see this page for more information.
"},{"location":"contributing/developer-guide.html#building","title":"Building","text":"The project uses the make
build system.
Building the operator binary:
make build\n
"},{"location":"contributing/developer-guide.html#testing","title":"Testing","text":"As of now, we only support integration tests who interact directly with Aiven. To run the tests, you'll need an Aiven account and an Aiven authentication code.
"},{"location":"contributing/developer-guide.html#prerequisites","title":"Prerequisites","text":"Please have installed first:
-w0
flag, some tests may not work properly kind create cluster --image kindest/node:v1.24.0 --wait 5m\n
The following commands must be executed with these environment variables (keep them in secret!):
AIVEN_TOKEN
\u2014 your authentication token AIVEN_PROJECT_NAME
\u2014 your Aiven project name to run services inSetup everything:
make e2e-setup-kind\n
Note
Additionally, webhooks can be disabled, if there are any problems with them.
WEBHOOKS_ENABLED=false make e2e-setup-kind\n
Run e2e tests (creates real services in AIVEN_PROJECT_NAME
):
make test-e2e-preinstalled \n
When you're done, just drop the cluster:
kind delete cluster\n
"},{"location":"contributing/developer-guide.html#documentation","title":"Documentation","text":"The documentation is written in markdown and generated by mkdocs and mkdocs-material.
To run the documentation live preview:
make serve-docs\n
And open the http://localhost:8000/aiven-operator/
page in your web browser.
The documentation API Reference section is generated automatically from the source code during the documentation deployment. To generate it locally, run the following command:
make docs\n
"},{"location":"contributing/resource-generation.html","title":"Resource generation","text":"Aiven Kubernetes Operator generates service configs code (also known as user configs) and documentation from public service types schema.
"},{"location":"contributing/resource-generation.html#flow-overview","title":"Flow overview","text":"When a new schema is issued on the API, a cron job fetches it, parses, patches, and saves in a shared library \u2014 go-api-schemas.
When the library is updated, the GitHub dependabot creates PRs to the dependent repositories, like Aiven Kubernetes Operator and Aiven Terraform Provider.
Then the make generate
command is run by a GitHub action, and the PR is ready for review.
flowchart TB\n API(Aiven API) <-.->|polls schema updates| Schema([go-api-schemas])\n Bot(dependabot) <-.->|polls updates| Schema \n Bot-->|pull request|UpdateOP[/\"\u2728 $ make generate \u2728\"/]\n UpdateOP-->|review| OP([operator repository])
"},{"location":"contributing/resource-generation.html#make-generate","title":"make generate","text":"The command runs several generators in a certain sequence. First, the user config generator is called. Then controller-gen cli. Then API reference docs generator and charts generator.
Here is how it goes in detail: the docs generator looks for an example file in ./<api-reference-docs>/example/
; if it finds one, it validates it against the CRD. Each CRD has an OpenAPI v3 schema as a part of it, which is also used by Kubernetes itself to validate user input.
flowchart TB\n Make[/$ make generate/]-->Generator(userconfig generator<br> creates/updates structs using updated spec)\n Generator-->|go: KafkaUserConfig struct| K8S(controller-gen<br> adds k8s methods to structs)\n K8S-->|go files| CRD(controller-gen<br> creates CRDs out of structs)\n CRD-->|CRD: aiven.io_kafkas.yaml| Docs(docs generator)\n subgraph API reference generation\n Docs-->|aiven.io_kafkas.yaml|Reference(creates reference<br> out of CRD)\n Docs-->|examples/kafka.yaml,<br> aiven.io_kafkas.yaml|Examples(validates example<br> using CRD)\n Examples--> Markdown(creates docs out of CRDs, adds examples)\n Reference-->Markdown(kafka.md)\n end\n CRD-->|yaml files|Charts(charts generator<br> updates helm charts<br> and the changelog)\n Charts-->ToRelease(\"Ready to release \ud83c\udf89\")\n Markdown-->ToRelease
"},{"location":"contributing/resource-generation.html#charts-version-bump","title":"Charts version bump","text":"By default, charts generator keeps the current helm chart's version, because it doesn't know semver. You need it to do manually.
To do so run the following command with the version of your choice:
make version=v1.0.0 charts\n
"},{"location":"installation/helm.html","title":"Installing with Helm (recommended)","text":""},{"location":"installation/helm.html#installing","title":"Installing","text":"The Aiven Operator for Kubernetes can be installed via Helm.
Before you start, make sure you have the prerequisites.
First add the Aiven Helm repository:
helm repo add aiven https://aiven.github.io/aiven-charts && helm repo update\n
"},{"location":"installation/helm.html#installing-custom-resource-definitions","title":"Installing Custom Resource Definitions","text":"helm install aiven-operator-crds aiven/aiven-operator-crds\n
Verify the installation:
kubectl api-resources --api-group=aiven.io\n
The output is similar to the following: NAME SHORTNAMES APIVERSION NAMESPACED KIND\nconnectionpools aiven.io/v1alpha1 true ConnectionPool\ndatabases aiven.io/v1alpha1 true Database\n... < several omitted lines >\n
"},{"location":"installation/helm.html#installing-the-operator","title":"Installing the Operator","text":"helm install aiven-operator aiven/aiven-operator\n
Note
Installation will fail if webhooks are enabled and the cert-manager CRDs are not installed.
Verify the installation:
helm status aiven-operator\n
The output is similar to the following:
NAME: aiven-operator\nLAST DEPLOYED: Fri Sep 10 15:23:26 2021\nNAMESPACE: default\nSTATUS: deployed\nREVISION: 1\nTEST SUITE: None\n
It is also possible to install the operator without webhooks enabled:
helm install aiven-operator aiven/aiven-operator --set webhooks.enabled=false\n
"},{"location":"installation/helm.html#configuration-options","title":"Configuration Options","text":"Please refer to the values.yaml of the chart.
"},{"location":"installation/helm.html#installing-without-full-cluster-administrator-access","title":"Installing without full cluster administrator access","text":"There can be some scenarios where the individual installing the Helm chart does not have the ability to provision cluster-wide resources (e.g. ClusterRoles/ClusterRoleBindings). In this scenario, you can have a cluster administrator manually install the ClusterRole and ClusterRoleBinding the operator requires prior to installing the Helm chart specifying false
for the clusterRole.create
attribute.
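As a sketch, the flag can be passed at install time like this (assuming a cluster administrator created the cluster-scoped resources beforehand):
helm install aiven-operator aiven/aiven-operator --set clusterRole.create=false\n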
Important
Please see this page for more information.
Find out the name of your deployment:
helm list\n
The output has the name of each deployment similar to the following:
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION\naiven-operator default 1 2021-09-09 10:56:14.623700249 +0200 CEST deployed aiven-operator-v0.1.0 v0.1.0 \naiven-operator-crds default 1 2021-09-09 10:56:05.736411868 +0200 CEST deployed aiven-operator-crds-v0.1.0 v0.1.0\n
Remove the CRDs:
helm uninstall aiven-operator-crds\n
The confirmation message is similar to the following:
release \"aiven-operator-crds\" uninstalled\n
Remove the operator:
helm uninstall aiven-operator\n
The confirmation message is similar to the following:
release \"aiven-operator\" uninstalled\n
"},{"location":"installation/kubectl.html","title":"Installing with kubectl","text":""},{"location":"installation/kubectl.html#installing","title":"Installing","text":"Before you start, make sure you have the prerequisites.
All Aiven Operator for Kubernetes components can be installed from one YAML file that is uploaded for every release:
kubectl apply -f https://github.com/aiven/aiven-operator/releases/latest/download/deployment.yaml\n
By default, the Deployment is installed into the aiven-operator-system
namespace.
Assuming you installed version vX.Y.Z
of the operator, it can be uninstalled via:
kubectl delete -f https://github.com/aiven/aiven-operator/releases/download/vX.Y.Z/deployment.yaml\n
"},{"location":"installation/prerequisites.html","title":"Prerequisites","text":"The Aiven Operator for Kubernetes supports all major Kubernetes distributions, both locally and in the cloud.
Make sure you have the following:
The Aiven Operator for Kubernetes uses cert-manager
to configure the service reference of our webhooks.
Please follow the installation instructions on their website.
Note
This is not required in the Helm installation if you choose to disable webhooks, but that is not recommended outside of playground use. The Aiven Operator for Kubernetes uses webhooks for setting defaults and enforcing invariants that are expected by the Aiven API; ignoring them will lead to errors. In the future, webhooks will also be used for conversion and for supporting multiple CRD versions.
"},{"location":"installation/uninstalling.html","title":"Uninstalling","text":"Danger
Uninstalling the Aiven Operator for Kubernetes can remove the resources created in Aiven, possibly resulting in data loss.
Depending on your installation, please follow one of:
Aiven resources need to have an accompanying secret that contains the token used to authorize the manipulation of that resource. If that token has expired, you will not be able to delete the custom resource, and deletion will hang until the situation is resolved. The recommended approach is to patch a valid token into the secret again so that proper cleanup of the Aiven resources can take place.
"},{"location":"installation/uninstalling.html#hanging-deletions","title":"Hanging deletions","text":"To protect the secrets that the operator is using from deletion, it adds the finalizer finalizers.aiven.io/needed-to-delete-services
to the secret. This solves a race condition that happens when deleting a namespace, where the secret could otherwise be deleted before the resource that uses it. When the controller is deleted, it may not clean up the finalizers from all secrets. If a secret with this finalizer is blocking the deletion of a namespace, for now please run
kubectl patch secret <offending-secret> -p '{\"metadata\":{\"finalizers\":null}}' --type=merge\n
to remove the finalizer.
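To find which secrets still carry this finalizer, a query along these lines can help (a sketch using jq, as in the other examples in this documentation):
kubectl get secrets -o json | jq -r '.items[] | select(.metadata.finalizers // [] | index(\"finalizers.aiven.io/needed-to-delete-services\")) | .metadata.name'\n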
"},{"location":"resources/cassandra.html","title":"Cassandra","text":"Aiven for Apache Cassandra\u00ae is a distributed database designed to handle large volumes of writes.
Note
Before going through this guide, make sure you have a Kubernetes cluster with the operator installed and a Kubernetes Secret with an Aiven authentication token.
"},{"location":"resources/cassandra.html#creating-a-cassandra-instance","title":"Creating a Cassandra instance","text":"1. Create a file named cassandra-sample.yaml
, and add the following content:
apiVersion: aiven.io/v1alpha1\nkind: Cassandra\nmetadata:\n name: cassandra-sample\nspec:\n # gets the authentication token from the `aiven-token` Secret\n authSecretRef:\n name: aiven-token\n key: token\n\n # outputs the Cassandra connection on the `cassandra-secret` Secret\n connInfoSecretTarget:\n name: cassandra-secret\n\n # add your Project name here\n project: <your-project-name>\n\n # cloud provider and plan of your choice\n # you can check all of the possibilities here https://aiven.io/pricing\n cloudName: google-europe-west1\n plan: startup-4\n\n # general Aiven configuration\n maintenanceWindowDow: friday\n maintenanceWindowTime: 23:00:00\n
2. Create the service by applying the configuration:
kubectl apply -f cassandra-sample.yaml \n
The output is:
cassandra.aiven.io/cassandra-sample created\n
3. Review the resource you created with this command:
kubectl describe cassandra.aiven.io cassandra-sample\n
The output is similar to the following:
...\nStatus:\n Conditions:\n Last Transition Time: 2023-01-31T10:17:25Z\n Message: Instance was created or update on Aiven side\n Reason: Created\n Status: True\n Type: Initialized\n Last Transition Time: 2023-01-31T10:24:00Z\n Message: Instance is running on Aiven side\n Reason: CheckRunning\n Status: True\n Type: Running\n State: RUNNING\n...\n
The resource can be in the REBUILDING
state for a few minutes. Once the state changes to RUNNING
, you can access the resource.
For your convenience, the operator automatically stores the Cassandra connection information in a Secret created with the name specified on the connInfoSecretTarget
field.
To view the details of the Secret, use the following command:
kubectl describe secret cassandra-secret \n
The output is similar to the following:
Name: cassandra-secret\nNamespace: default\nLabels: <none>\nAnnotations: <none>\n\nType: Opaque\n\nData\n====\nCASSANDRA_HOSTS: 59 bytes\nCASSANDRA_PASSWORD: 24 bytes\nCASSANDRA_PORT: 5 bytes\nCASSANDRA_URI: 66 bytes\nCASSANDRA_USER: 8 bytes\nCASSANDRA_HOST: 60 bytes\n
You can use jq to quickly decode the Secret:
kubectl get secret cassandra-secret -o json | jq '.data | map_values(@base64d)'\n
The output is similar to the following:
{\n \"CASSANDRA_HOST\": \"<secret>\",\n \"CASSANDRA_HOSTS\": \"<secret>\",\n \"CASSANDRA_PASSWORD\": \"<secret>\",\n \"CASSANDRA_PORT\": \"14609\",\n \"CASSANDRA_URI\": \"<secret>\",\n \"CASSANDRA_USER\": \"avnadmin\"\n}\n
"},{"location":"resources/cassandra.html#creating-a-cassandra-user","title":"Creating a Cassandra user","text":"You can create service users for your instance of Aiven for Apache Cassandra. Service users are unique to this instance and are not shared with any other services.
1. Create a file named cassandra-service-user.yaml:
apiVersion: aiven.io/v1alpha1\nkind: ServiceUser\nmetadata:\n name: cassandra-service-user\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n connInfoSecretTarget:\n name: cassandra-service-user-secret\n\n project: <your-project-name>\n serviceName: cassandra-sample\n
2. Create the user by applying the configuration:
kubectl apply -f cassandra-service-user.yaml\n
The ServiceUser
resource generates a Secret with connection information.
3. View the details of the Secret using the following command:
kubectl get secret cassandra-service-user-secret -o json | jq '.data | map_values(@base64d)'\n
The output is similar to the following:
{\n \"ACCESS_CERT\": \"<secret>\",\n \"ACCESS_KEY\": \"<secret>\",\n \"CA_CERT\": \"<secret>\",\n \"HOST\": \"<secret>\",\n \"PASSWORD\": \"<secret>\",\n \"PORT\": \"14609\",\n \"USERNAME\": \"cassandra-service-user\"\n}\n
You can connect to the Cassandra instance using these credentials and the host information from the cassandra-secret
Secret.
Aiven for MySQL is a fully managed relational database service, deployable in the cloud of your choice.
Before going through this guide, make sure you have a Kubernetes cluster with the operator installed and a Kubernetes Secret with an Aiven authentication token.
"},{"location":"resources/mysql.html#creating-a-mysql-instance","title":"Creating a MySQL instance","text":"1. Create a file named mysql-sample.yaml
, and add the following content:
apiVersion: aiven.io/v1alpha1\nkind: MySQL\nmetadata:\n name: mysql-sample\nspec:\n # gets the authentication token from the `aiven-token` Secret\n authSecretRef:\n name: aiven-token\n key: token\n\n # outputs the MySQL connection on the `mysql-secret` Secret\n connInfoSecretTarget:\n name: mysql-secret\n\n # add your Project name here\n project: <your-project-name>\n\n # cloud provider and plan of your choice\n # you can check all of the possibilities here https://aiven.io/pricing\n cloudName: google-europe-west1\n plan: startup-4\n\n # general Aiven configuration\n maintenanceWindowDow: friday\n maintenanceWindowTime: 23:00:00\n
2. Create the service by applying the configuration:
kubectl apply -f mysql-sample.yaml \n
3. Review the resource you created with this command:
kubectl describe mysql.aiven.io mysql-sample\n
The output is similar to the following:
...\nStatus:\n Conditions:\n Last Transition Time: 2023-02-22T15:43:44Z\n Message: Instance was created or update on Aiven side\n Reason: Created\n Status: True\n Type: Initialized\n Last Transition Time: 2023-02-22T15:43:44Z\n Message: Instance was created or update on Aiven side, status remains unknown\n Reason: Created\n Status: Unknown\n Type: Running\n State: REBUILDING\n...\n
The resource will be in the REBUILDING
state for a few minutes. Once the state changes to RUNNING
, you can access the resource.
For your convenience, the operator automatically stores the MySQL connection information in a Secret created with the name specified on the connInfoSecretTarget
field.
To view the details of the Secret, use the following command:
kubectl describe secret mysql-secret \n
The output is similar to the following:
Name: mysql-secret\nNamespace: default\nLabels: <none>\nAnnotations: <none>\n\nType: Opaque\n\nData\n====\nMYSQL_PORT: 5 bytes\nMYSQL_SSL_MODE: 8 bytes\nMYSQL_URI: 115 bytes\nMYSQL_USER: 8 bytes\nMYSQL_DATABASE: 9 bytes\nMYSQL_HOST: 39 bytes\nMYSQL_PASSWORD: 24 bytes\n
You can use jq to quickly decode the Secret:
kubectl get secret mysql-secret -o json | jq '.data | map_values(@base64d)'\n
The output is similar to the following:
{\n \"MYSQL_DATABASE\": \"defaultdb\",\n \"MYSQL_HOST\": \"<secret>\",\n \"MYSQL_PASSWORD\": \"<secret>\",\n \"MYSQL_PORT\": \"12691\",\n \"MYSQL_SSL_MODE\": \"REQUIRED\",\n \"MYSQL_URI\": \"<secret>\",\n \"MYSQL_USER\": \"avnadmin\"\n}\n
"},{"location":"resources/mysql.html#creating-a-mysql-user","title":"Creating a MySQL user","text":"You can create service users for your instance of Aiven for MySQL. Service users are unique to this instance and are not shared with any other services.
1. Create a file named mysql-service-user.yaml:
apiVersion: aiven.io/v1alpha1\nkind: ServiceUser\nmetadata:\n name: mysql-service-user\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n connInfoSecretTarget:\n name: mysql-service-user-secret\n\n project: <your-project-name>\n serviceName: mysql-sample\n
2. Create the user by applying the configuration:
kubectl apply -f mysql-service-user.yaml\n
The ServiceUser
resource generates a Secret with connection information.
3. View the details of the Secret using jq:
kubectl get secret mysql-service-user-secret -o json | jq '.data | map_values(@base64d)'\n
The output is similar to the following:
{\n \"ACCESS_CERT\": \"<secret>\",\n \"ACCESS_KEY\": \"<secret>\",\n \"CA_CERT\": \"<secret>\",\n \"HOST\": \"<secret>\",\n \"PASSWORD\": \"<secret>\",\n \"PORT\": \"14609\",\n \"USERNAME\": \"mysql-service-user\"\n}\n
You can connect to the MySQL instance using these credentials and the host information from the mysql-secret
Secret.
OpenSearch\u00ae is an open source search and analytics suite including search engine, NoSQL document database, and visualization interface. OpenSearch offers a distributed, full-text search engine based on Apache Lucene\u00ae with a RESTful API interface and support for JSON documents.
Note
Before going through this guide, make sure you have a Kubernetes cluster with the operator installed and a Kubernetes Secret with an Aiven authentication token.
"},{"location":"resources/opensearch.html#creating-an-opensearch-instance","title":"Creating an OpenSearch instance","text":"1. Create a file named os-sample.yaml
, and add the following content:
apiVersion: aiven.io/v1alpha1\nkind: OpenSearch\nmetadata:\n name: os-sample\nspec:\n # gets the authentication token from the `aiven-token` Secret\n authSecretRef:\n name: aiven-token\n key: token\n\n # outputs the OpenSearch connection on the `os-secret` Secret\n connInfoSecretTarget:\n name: os-secret\n\n # add your Project name here\n project: <your-project-name>\n\n # cloud provider and plan of your choice\n # you can check all of the possibilities here https://aiven.io/pricing\n cloudName: google-europe-west1\n plan: startup-4\n\n # general Aiven configuration\n maintenanceWindowDow: friday\n maintenanceWindowTime: 23:00:00\n
2. Create the service by applying the configuration:
kubectl apply -f os-sample.yaml \n
3. Review the resource you created with this command:
kubectl describe opensearch.aiven.io os-sample\n
The output is similar to the following:
...\nStatus:\n Conditions:\n Last Transition Time: 2023-01-19T14:41:43Z\n Message: Instance was created or update on Aiven side\n Reason: Created\n Status: True\n Type: Initialized\n Last Transition Time: 2023-01-19T14:41:43Z\n Message: Instance was created or update on Aiven side, status remains unknown\n Reason: Created\n Status: Unknown\n Type: Running\n State: REBUILDING\n...\n
The resource will be in the REBUILDING
state for a few minutes. Once the state changes to RUNNING
, you can access the resource.
For your convenience, the operator automatically stores the OpenSearch connection information in a Secret created with the name specified on the connInfoSecretTarget
field.
To view the details of the Secret, use the following command:
kubectl describe secret os-secret \n
The output is similar to the following:
Name: os-secret\nNamespace: default\nLabels: <none>\nAnnotations: <none>\n\nType: Opaque\n\nData\n====\nHOST: 61 bytes\nPASSWORD: 24 bytes\nPORT: 5 bytes\nUSER: 8 bytes\n
You can use jq to quickly decode the Secret:
kubectl get secret os-secret -o json | jq '.data | map_values(@base64d)'\n
The output is similar to the following:
{\n \"HOST\": \"os-sample-your-project.aivencloud.com\",\n \"PASSWORD\": \"<secret>\",\n \"PORT\": \"13041\",\n \"USER\": \"avnadmin\"\n}\n
"},{"location":"resources/opensearch.html#creating-an-opensearch-user","title":"Creating an OpenSearch user","text":"You can create service users for your instance of Aiven for OpenSearch. Service users are unique to this instance and are not shared with any other services.
1. Create a file named os-service-user.yaml:
apiVersion: aiven.io/v1alpha1\nkind: ServiceUser\nmetadata:\n name: os-service-user\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n connInfoSecretTarget:\n name: os-service-user-secret\n\n project: <your-project-name>\n serviceName: os-sample\n
2. Create the user by applying the configuration:
kubectl apply -f os-service-user.yaml\n
The ServiceUser
resource generates a Secret with connection information.
3. View the details of the Secret using the following command:
kubectl get secret os-service-user-secret -o json | jq '.data | map_values(@base64d)'\n
The output is similar to the following:
{\n \"ACCESS_CERT\": \"<secret>\",\n \"ACCESS_KEY\": \"<secret>\",\n \"CA_CERT\": \"<secret>\",\n \"HOST\": \"os-sample-your-project.aivencloud.com\",\n \"PASSWORD\": \"<secret>\",\n \"PORT\": \"14609\",\n \"USERNAME\": \"os-service-user\"\n}\n
You can connect to the OpenSearch instance using these credentials and the host information from the os-secret
Secret.
PostgreSQL is an open source, relational database. It's ideal for organisations that need a well organised tabular datastore. On top of the strict table and column formats, PostgreSQL also offers solutions for nested datasets with the native jsonb
format and an advanced set of extensions, including PostGIS, a spatial database extender for location queries. Aiven for PostgreSQL is the perfect fit for your relational data.
With Aiven Kubernetes Operator, you can manage Aiven for PostgreSQL through the well defined Kubernetes API.
Note
Before going through this guide, make sure you have a Kubernetes cluster with the operator installed, and a Kubernetes Secret with an Aiven authentication token.
"},{"location":"resources/postgresql.html#creating-a-postgresql-instance","title":"Creating a PostgreSQL instance","text":"1. Create a file named pg-sample.yaml
with the following content:
apiVersion: aiven.io/v1alpha1\nkind: PostgreSQL\nmetadata:\n name: pg-sample\nspec:\n\n # gets the authentication token from the `aiven-token` Secret\n authSecretRef:\n name: aiven-token\n key: token\n\n # outputs the PostgreSQL connection on the `pg-connection` Secret\n connInfoSecretTarget:\n name: pg-connection\n\n # add your Project name here\n project: <your-project-name>\n\n # cloud provider and plan of your choice\n # you can check all of the possibilities here https://aiven.io/pricing\n cloudName: google-europe-west1\n plan: startup-4\n\n # general Aiven configuration\n maintenanceWindowDow: friday\n maintenanceWindowTime: 23:00:00\n\n # specific PostgreSQL configuration\n userConfig:\n pg_version: '11'\n
2. Create the service by applying the configuration:
kubectl apply -f pg-sample.yaml\n
3. Review the resource you created with the following command:
kubectl get postgresqls.aiven.io pg-sample\n
The output is similar to the following:
NAME PROJECT REGION PLAN STATE\npg-sample your-project google-europe-west1 startup-4 RUNNING\n
The resource can stay in the BUILDING
state for a couple of minutes. Once the state changes to RUNNING
, you are ready to access it.
For your convenience, the operator automatically stores the PostgreSQL connection information in a Secret created with the name specified on the connInfoSecretTarget
field.
kubectl describe secret pg-connection\n
The output is similar to the following:
Name: pg-connection\nNamespace: default\nAnnotations: <none>\n\nType: Opaque\n\nData\n====\nDATABASE_URI: 107 bytes\nPGDATABASE: 9 bytes\nPGHOST: 38 bytes\nPGPASSWORD: 16 bytes\nPGPORT: 5 bytes\nPGSSLMODE: 7 bytes\nPGUSER: 8 bytes\n
You can use jq to quickly decode the Secret:
kubectl get secret pg-connection -o json | jq '.data | map_values(@base64d)'\n
The output is similar to the following:
{\n \"DATABASE_URI\": \"postgres://avnadmin:<secret-password>@pg-sample-your-project.aivencloud.com:13039/defaultdb?sslmode=require\",\n \"PGDATABASE\": \"defaultdb\",\n \"PGHOST\": \"pg-sample-your-project.aivencloud.com\",\n \"PGPASSWORD\": \"<secret-password>\",\n \"PGPORT\": \"13039\",\n \"PGSSLMODE\": \"require\",\n \"PGUSER\": \"avnadmin\"\n}\n
"},{"location":"resources/postgresql.html#testing-the-connection","title":"Testing the connection","text":"You can verify your PostgreSQL connection from a Kubernetes workload by deploying a Pod that runs the psql
command.
1. Create a file named pod-psql.yaml
apiVersion: v1\nkind: Pod\nmetadata:\n name: psql-test-connection\nspec:\n restartPolicy: Never\n containers:\n - image: postgres:11-alpine\n name: postgres\n command: [ 'psql', '$(DATABASE_URI)', '-c', 'SELECT version();' ]\n\n # the pg-connection Secret becomes environment variables \n envFrom:\n - secretRef:\n name: pg-connection\n
It runs once and stops, due to the restartPolicy: Never
flag.
2. Inspect the log:
kubectl logs psql-test-connection\n
The output is similar to the following:
version \n---------------------------------------------------------------------------------------------\n PostgreSQL 11.12 on x86_64-pc-linux-gnu, compiled by gcc, a 68c5366192 p 6b9244f01a, 64-bit\n(1 row)\n
You have now connected to the PostgreSQL instance and executed the SELECT version();
query.
The Database
Kubernetes resource allows you to create a logical database within the PostgreSQL instance.
Create the pg-database-sample.yaml
file with the following content:
apiVersion: aiven.io/v1alpha1\nkind: Database\nmetadata:\n name: pg-database-sample\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n # the name of the previously created PostgreSQL instance\n serviceName: pg-sample\n\n project: <your-project-name>\n lcCollate: en_US.UTF-8\n lcCtype: en_US.UTF-8\n
You can now connect to the pg-database-sample
using the credentials stored in the pg-connection
Secret.
Aiven uses the concept of a service user, which allows you to create users for different services. You can create one for the PostgreSQL instance.
1. Create a file named pg-service-user.yaml
.
apiVersion: aiven.io/v1alpha1\nkind: ServiceUser\nmetadata:\n name: pg-service-user\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n connInfoSecretTarget:\n name: pg-service-user-connection\n\n project: <your-project-name>\n serviceName: pg-sample\n
2. Apply the configuration with the following command.
kubectl apply -f pg-service-user.yaml\n
The ServiceUser
resource generates a Secret with connection information, in this case named pg-service-user-connection
:
kubectl get secret pg-service-user-connection -o json | jq '.data | map_values(@base64d)'\n
The output has the password and username:
{\n \"PASSWORD\": \"<secret-password>\",\n \"USERNAME\": \"pg-service-user\"\n}\n
You can now connect to the PostgreSQL instance using the credentials generated above, and the host information from the pg-connection
Secret.
Connection pooling allows you to maintain very large numbers of connections to a database while minimizing the consumption of server resources. For more information, refer to the connection pooling article in Aiven Docs. Aiven for PostgreSQL uses PGBouncer for connection pooling.
You can create a connection pool with the ConnectionPool
resource using the previously created Database
and ServiceUser
:
Create a new file named pg-connection-pool.yaml
with the following content:
apiVersion: aiven.io/v1alpha1\nkind: ConnectionPool\nmetadata:\n name: pg-connection-pool\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n connInfoSecretTarget:\n name: pg-connection-pool-connection\n\n project: <your-project-name>\n serviceName: pg-sample\n databaseName: pg-database-sample\n username: pg-service-user\n poolSize: 10\n poolMode: transaction\n
The ConnectionPool
generates a Secret with the connection info using the name from the connInfoSecretTarget.Name
field:
kubectl get secret pg-connection-pool-connection -o json | jq '.data | map_values(@base64d)' \n
The output is similar to the following: {\n \"DATABASE_URI\": \"postgres://pg-service-user:<secret-password>@pg-sample-your-project.aivencloud.com:13040/pg-connection-pool?sslmode=require\",\n \"PGDATABASE\": \"pg-database-sample\",\n \"PGHOST\": \"pg-sample-your-project.aivencloud.com\",\n \"PGPASSWORD\": \"<secret-password>\",\n \"PGPORT\": \"13040\",\n \"PGSSLMODE\": \"require\",\n \"PGUSER\": \"pg-service-user\"\n}\n
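You can test the pooled connection the same way as the direct one, pointing psql at the pool's DATABASE_URI. This sketch reuses the pattern from the psql Pod above; the Pod name is arbitrary:
apiVersion: v1\nkind: Pod\nmetadata:\n name: psql-test-pool\nspec:\n restartPolicy: Never\n containers:\n - image: postgres:11-alpine\n name: postgres\n command: [ 'psql', '$(DATABASE_URI)', '-c', 'SELECT 1;' ]\n\n # the pg-connection-pool-connection Secret becomes environment variables\n envFrom:\n - secretRef:\n name: pg-connection-pool-connection\n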
"},{"location":"resources/postgresql.html#creating-a-postgresql-read-only-replica","title":"Creating a PostgreSQL read-only replica","text":"Read-only replicas can be used to reduce the load on the primary service by making read-only queries against the replica service.
To create a read-only replica for a PostgreSQL service, you create a second PostgreSQL service and use serviceIntegrations to replicate data from your primary service.
The example that follows creates a primary service and a read-only replica.
1. Create a new file named pg-read-replica.yaml
with the following:
apiVersion: aiven.io/v1alpha1\nkind: PostgreSQL\nmetadata:\n name: primary-pg-service\nspec:\n # gets the authentication token from the `aiven-token` Secret\n authSecretRef:\n name: aiven-token\n key: token\n\n # add your project's name here\n project: <your-project-name>\n\n # add the cloud provider and plan of your choice\n # you can see all of the options at https://aiven.io/pricing\n cloudName: google-europe-west1\n plan: startup-4\n\n # general Aiven configuration\n maintenanceWindowDow: friday\n maintenanceWindowTime: 23:00:00\n userConfig:\n pg_version: '15'\n\n---\n\napiVersion: aiven.io/v1alpha1\nkind: PostgreSQL\nmetadata:\n name: read-replica-pg\nspec:\n # gets the authentication token from the `aiven-token` Secret\n authSecretRef:\n name: aiven-token\n key: token\n\n # add your project's name here\n project: <your-project-name>\n\n # add the cloud provider and plan of your choice\n # you can see all of the options at https://aiven.io/pricing\n cloudName: google-europe-west1\n plan: startup-4\n\n # general Aiven configuration\n maintenanceWindowDow: saturday\n maintenanceWindowTime: 23:00:00\n userConfig:\n pg_version: '15'\n\n # use the read_replica integration and point it to your primary service\n serviceIntegrations:\n - integrationType: read_replica\n sourceServiceName: primary-pg-service\n
Note
You can create the replica service in a different region or on a different cloud provider.
2. Apply the configuration with the following command:
kubectl apply -f pg-read-replica.yaml\n
The output is similar to the following:
postgresql.aiven.io/primary-pg-service created\npostgresql.aiven.io/read-replica-pg created\n
3. Check the status of the primary service with the following command:
kubectl get postgresqls.aiven.io primary-pg-service\n
The output is similar to the following:
NAME PROJECT REGION PLAN STATE\nprimary-pg-service <your-project-name> google-europe-west1 startup-4 RUNNING\n
The resource can be in the BUILDING
state for a few minutes. After the state of the primary service changes to RUNNING
, the read-only replica is created. You can check the status of the replica using the same command with the name of the replica:
kubectl get postgresqls.aiven.io read-replica-pg\n
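To confirm the replica is serving as a standby, you can run PostgreSQL's pg_is_in_recovery() function against it, which returns true on a read-only replica. This sketch assumes you also added a connInfoSecretTarget named replica-connection to the replica spec (not shown above), and runs from a machine with kubectl and psql available:
DATABASE_URI=$(kubectl get secret replica-connection -o jsonpath='{.data.DATABASE_URI}' | base64 -d)\npsql \"$DATABASE_URI\" -c 'SELECT pg_is_in_recovery();'\n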
"},{"location":"resources/project-vpc.html","title":"Aiven Project VPC","text":"Virtual Private Cloud (VPC) peering is a method of connecting separate AWS, Google Cloud or Microsoft Azure private networks to each other. It makes it possible for the virtual machines in the different VPCs to talk to each other directly without going through the public internet.
Within the Aiven Kubernetes Operator, you can create a ProjectVPC
on Aiven's side to connect to your cloud provider.
Note
Before going through this guide, make sure you have a Kubernetes cluster with the operator installed, and a Kubernetes Secret with an Aiven authentication token.
"},{"location":"resources/project-vpc.html#creating-an-aiven-vpc","title":"Creating an Aiven VPC","text":"1. Create a file named vpc-sample.yaml
with the following content:
apiVersion: aiven.io/v1alpha1\nkind: ProjectVPC\nmetadata:\n name: vpc-sample\nspec:\n # gets the authentication token from the `aiven-token` Secret\n authSecretRef:\n name: aiven-token\n key: token\n\n project: <your-project-name>\n\n # creates a VPC to link an AWS account on the South Africa region\n cloudName: aws-af-south-1\n\n # the network range used by the VPC\n networkCidr: 192.168.0.0/24\n
2. Create the VPC by applying the configuration:
kubectl apply -f vpc-sample.yaml\n
3. Review the resource you created with the following command:
kubectl get projectvpcs.aiven.io vpc-sample\n
The output is similar to the following:
NAME PROJECT CLOUD NETWORK CIDR\nvpc-sample <your-project> aws-af-south-1 192.168.0.0/24\n
"},{"location":"resources/project-vpc.html#using-the-aiven-vpc","title":"Using the Aiven VPC","text":"Follow the official VPC documentation to complete the VPC peering on your cloud of choice.
"},{"location":"resources/project.html","title":"Aiven Project","text":"Note
Before going through this guide, make sure you have a Kubernetes cluster with the operator installed and a Kubernetes Secret with an Aiven authentication token.
The Project
CRD allows you to create Aiven Projects, where your resources can be located.
To create a fully working Aiven Project with the Aiven Operator, you need a source Aiven Project already created with a working billing configuration, like a credit card.
Create a file named project-sample.yaml
with the following content:
apiVersion: aiven.io/v1alpha1\nkind: Project\nmetadata:\n name: project-sample\nspec:\n # the source Project to copy the billing information from\n copyFromProject: <your-source-project>\n\n authSecretRef:\n name: aiven-token\n key: token\n\n connInfoSecretTarget:\n name: project-sample\n
Apply the resource with:
kubectl apply -f project-sample.yaml\n
Verify the newly created Project:
kubectl get projects.aiven.io project-sample\n
The output is similar to the following:
NAME AGE\nproject-sample 22s\n
"},{"location":"resources/redis.html","title":"Redis","text":"Aiven for Redis\u00ae* is a fully managed in-memory NoSQL database that you can deploy in the cloud of your choice to store and access data quickly and efficiently.
Note
Before going through this guide, make sure you have a Kubernetes cluster with the operator installed and a Kubernetes Secret with an Aiven authentication token.
"},{"location":"resources/redis.html#creating-a-redis-instance","title":"Creating a Redis instance","text":"1. Create a file named redis-sample.yaml
, and add the following content:
apiVersion: aiven.io/v1alpha1\nkind: Redis\nmetadata:\n name: redis-sample\nspec:\n # gets the authentication token from the `aiven-token` Secret\n authSecretRef:\n name: aiven-token\n key: token\n\n # outputs the Redis connection on the `redis-secret` Secret\n connInfoSecretTarget:\n name: redis-secret\n\n # add your Project name here\n project: <your-project-name>\n\n # cloud provider and plan of your choice\n # you can check all of the possibilities here https://aiven.io/pricing\n cloudName: google-europe-west1\n plan: startup-4\n\n # general Aiven configuration\n maintenanceWindowDow: friday\n maintenanceWindowTime: 23:00:00\n\n # specific Redis configuration\n userConfig:\n redis_maxmemory_policy: \"allkeys-random\"\n
2. Create the service by applying the configuration:
kubectl apply -f redis-sample.yaml \n
3. Review the resource you created with this command:
kubectl describe redis.aiven.io redis-sample\n
The output is similar to the following:
...\nStatus:\n Conditions:\n Last Transition Time: 2023-01-19T14:48:59Z\n Message: Instance was created or update on Aiven side\n Reason: Created\n Status: True\n Type: Initialized\n Last Transition Time: 2023-01-19T14:48:59Z\n Message: Instance was created or update on Aiven side, status remains unknown\n Reason: Created\n Status: Unknown\n Type: Running\n State: REBUILDING\n...\n
The resource will be in the REBUILDING
state for a few minutes. Once the state changes to RUNNING
, you can access the resource.
For your convenience, the operator automatically stores the Redis connection information in a Secret created with the name specified on the connInfoSecretTarget
field.
To view the details of the Secret, use the following command:
kubectl describe secret redis-secret \n
The output is similar to the following:
Name: redis-secret\nNamespace: default\nLabels: <none>\nAnnotations: <none>\n\nType: Opaque\n\nData\n====\nSSL: 8 bytes\nUSER: 7 bytes\nHOST: 60 bytes\nPASSWORD: 24 bytes\nPORT: 5 bytes\n
You can use jq to quickly decode the Secret:
kubectl get secret redis-secret -o json | jq '.data | map_values(@base64d)'\n
The output is similar to the following:
{\n \"HOST\": \"redis-sample-your-project.aivencloud.com\",\n \"PASSWORD\": \"<secret-password>\",\n \"PORT\": \"14610\",\n \"SSL\": \"required\",\n \"USER\": \"default\"\n}\n
"},{"location":"resources/service-integrations.html","title":"Service Integrations","text":"Service Integrations provide additional functionality and features by connecting different Aiven services together.
See our Getting Started with Service Integrations guide for more information.
Note
Before going through this guide, make sure you have a Kubernetes cluster with the operator installed, and a Kubernetes Secret with an Aiven authentication token.
"},{"location":"resources/service-integrations.html#send-kafka-logs-to-a-kafka-topic","title":"Send Kafka logs to a Kafka Topic","text":"This integration allows you to send Kafka service logs to a specific Kafka Topic.
First, let's create a Kafka service and a topic.
1. Create a new file named kafka-sample-topic.yaml
with the following content:
apiVersion: aiven.io/v1alpha1\nkind: Kafka\nmetadata:\n name: kafka-sample\nspec:\n # gets the authentication token from the `aiven-token` Secret\n authSecretRef:\n name: aiven-token\n key: token\n\n # outputs the Kafka connection on the `kafka-connection` Secret\n connInfoSecretTarget:\n name: kafka-auth\n\n # add your Project name here\n project: <your-project-name>\n\n # cloud provider and plan of your choice\n # you can check all of the possibilities here https://aiven.io/pricing\n cloudName: google-europe-west1\n plan: startup-2\n\n # general Aiven configuration\n maintenanceWindowDow: friday\n maintenanceWindowTime: 23:00:00\n\n # specific Kafka configuration\n userConfig:\n kafka_version: '2.7'\n\n---\n\napiVersion: aiven.io/v1alpha1\nkind: KafkaTopic\nmetadata:\n name: logs\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n project: <your-project-name>\n serviceName: kafka-sample\n\n # here we can specify how many partitions the topic should have\n partitions: 3\n # and the topic replication factor\n replication: 2\n\n # we also support various topic-specific configurations\n config:\n flush_ms: 100\n
2. Create the resource on Kubernetes:
kubectl apply -f kafka-sample-topic.yaml \n
3. Now, create a ServiceIntegration
resource to send the Kafka logs to the created topic. In the same file, add the following YAML:
apiVersion: aiven.io/v1alpha1\nkind: ServiceIntegration\nmetadata:\n name: service-integration-kafka-logs\nspec:\n\n # gets the authentication token from the `aiven-token` Secret\n authSecretRef:\n name: aiven-token\n key: token\n\n project: <your-project-name>\n\n # indicates the type of the integration\n integrationType: kafka_logs\n\n # we will send the logs to the same kafka-sample instance\n # the source and destination are the same\n sourceServiceName: kafka-sample\n destinationServiceName: kafka-sample\n\n # the topic name we will send to\n kafkaLogs:\n kafka_topic: logs\n
4. Reapply the resource on Kubernetes:
kubectl apply -f kafka-sample-topic.yaml \n
5. Let's check the created service integration:
kubectl get serviceintegrations.aiven.io service-integration-kafka-logs\n
The output is similar to the following:
NAME PROJECT TYPE SOURCE SERVICE NAME DESTINATION SERVICE NAME SOURCE ENDPOINT ID DESTINATION ENDPOINT ID\nservice-integration-kafka-logs your-project kafka_logs kafka-sample kafka-sample \n
Your Kafka service logs are now being streamed to the logs
Kafka topic.
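To verify that the logs are arriving, you can consume a few messages from the topic with kcat. A minimal sketch: reuse the kafka-test-connection Pod from the Kafka guide, replacing its command with a consumer invocation (-C consumes, -c 5 exits after five messages):
command: [\n 'kcat', '-b', '$(HOST):$(PORT)',\n '-X', 'security.protocol=SSL',\n '-X', 'ssl.key.location=/kafka-auth/ACCESS_KEY',\n '-X', 'ssl.key.password=$(PASSWORD)',\n '-X', 'ssl.certificate.location=/kafka-auth/ACCESS_CERT',\n '-X', 'ssl.ca.location=/kafka-auth/CA_CERT',\n '-C', '-t', 'logs', '-c', '5'\n ]\n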
Aiven for Apache Kafka is an excellent option if you need to run Apache Kafka at scale. With Aiven Kubernetes Operator you can get up and running with a suitably sized Apache Kafka service in a few minutes.
Note
Before going through this guide, make sure you have a Kubernetes cluster with the operator installed and a Kubernetes Secret with an Aiven authentication token.
"},{"location":"resources/kafka/index.html#creating-a-kafka-instance","title":"Creating a Kafka instance","text":"1. Create a file named kafka-sample.yaml
, and add the following content:
apiVersion: aiven.io/v1alpha1\nkind: Kafka\nmetadata:\n name: kafka-sample\nspec:\n # gets the authentication token from the `aiven-token` Secret\n authSecretRef:\n name: aiven-token\n key: token\n\n # outputs the Kafka connection on the `kafka-connection` Secret\n connInfoSecretTarget:\n name: kafka-auth\n\n # add your Project name here\n project: <your-project-name>\n\n # cloud provider and plan of your choice\n # you can check all of the possibilities here https://aiven.io/pricing\n cloudName: google-europe-west1\n plan: startup-2\n\n # general Aiven configuration\n maintenanceWindowDow: friday\n maintenanceWindowTime: 23:00:00\n\n # specific Kafka configuration\n userConfig:\n kafka_version: '2.7'\n
2. Create the following resource on Kubernetes:
kubectl apply -f kafka-sample.yaml \n
3. Inspect the service created using the command below.
kubectl get kafka.aiven.io kafka-sample\n
The output has the project name and state, similar to the following:
NAME PROJECT REGION PLAN STATE\nkafka-sample <your-project> google-europe-west1 startup-2 RUNNING\n
After a couple of minutes, the STATE
field changes to RUNNING
, and the service is ready to be used.
For your convenience, the operator automatically stores the Kafka connection information in a Secret created with the name specified on the connInfoSecretTarget
field.
kubectl describe secret kafka-auth \n
The output is similar to the following:
Name: kafka-auth\nNamespace: default\nAnnotations: <none>\n\nType: Opaque\n\nData\n====\nCA_CERT: 1537 bytes\nHOST: 41 bytes\nPASSWORD: 16 bytes\nPORT: 5 bytes\nUSERNAME: 8 bytes\nACCESS_CERT: 1533 bytes\nACCESS_KEY: 2484 bytes\n
You can use jq to quickly decode the Secret:
kubectl get secret kafka-auth -o json | jq '.data | map_values(@base64d)'\n
The output is similar to the following:
{\n \"CA_CERT\": \"<secret-ca-cert>\",\n \"ACCESS_CERT\": \"<secret-cert>\",\n \"ACCESS_KEY\": \"<secret-access-key>\",\n \"HOST\": \"kafka-sample-your-project.aivencloud.com\",\n \"PASSWORD\": \"<secret-password>\",\n \"PORT\": \"13041\",\n \"USERNAME\": \"avnadmin\"\n}\n
"},{"location":"resources/kafka/index.html#testing-the-connection","title":"Testing the connection","text":"You can verify your access to the Kafka cluster from a Pod using the authentication data from the kafka-auth
Secret. kcat is used for our examples below.
1. Create a file named kafka-test-connection.yaml
, and add the following content:
apiVersion: v1\nkind: Pod\nmetadata:\n name: kafka-test-connection\nspec:\n restartPolicy: Never\n containers:\n - image: edenhill/kcat:1.7.0\n name: kcat\n\n # the command below will connect to the Kafka cluster\n # and output its metadata\n command: [\n 'kcat', '-b', '$(HOST):$(PORT)',\n '-X', 'security.protocol=SSL',\n '-X', 'ssl.key.location=/kafka-auth/ACCESS_KEY',\n '-X', 'ssl.key.password=$(PASSWORD)',\n '-X', 'ssl.certificate.location=/kafka-auth/ACCESS_CERT',\n '-X', 'ssl.ca.location=/kafka-auth/CA_CERT',\n '-L'\n ]\n\n # loading the data from the Secret as environment variables\n # useful to access the Kafka information, like hostname and port\n envFrom:\n - secretRef:\n name: kafka-auth\n\n volumeMounts:\n - name: kafka-auth\n mountPath: \"/kafka-auth\"\n\n # loading the data from the Secret as files in a volume\n # useful to access the Kafka certificates \n volumes:\n - name: kafka-auth\n secret:\n secretName: kafka-auth\n
2. Apply the file.
kubectl apply -f kafka-test-connection.yaml\n
Once the Pod has completed successfully, its log contains the metadata information about the Kafka cluster.
kubectl logs kafka-test-connection \n
The output is similar to the following:
Metadata for all topics (from broker -1: ssl://kafka-sample-your-project.aivencloud.com:13041/bootstrap):\n 3 brokers:\n broker 2 at 35.205.234.70:13041\n broker 3 at 34.77.127.70:13041 (controller)\n broker 1 at 34.78.146.156:13041\n 0 topics:\n
"},{"location":"resources/kafka/index.html#creating-a-kafkatopic-and-kafkaacl","title":"Creating a KafkaTopic
and KafkaACL
","text":"To properly produce and consume content on Kafka, you need topics and ACLs. The operator supports both with the KafkaTopic
and KafkaACL
resources.
Below, here is how to create a Kafka topic named random-strings
where random string messages will be sent.
1. Create a file named kafka-topic-random-strings.yaml
with the content below:
apiVersion: aiven.io/v1alpha1\nkind: KafkaTopic\nmetadata:\n name: random-strings\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n project: <your-project-name>\n serviceName: kafka-sample\n\n # here we can specify how many partitions the topic should have\n partitions: 3\n # and the topic replication factor\n replication: 2\n\n # we also support various topic-specific configurations\n config:\n flush_ms: 100\n
2. Create the resource on Kubernetes:
kubectl apply -f kafka-topic-random-strings.yaml\n
3. Create a user and an ACL. To use the Kafka topic, create a new user with the ServiceUser
resource (in order to avoid using the avnadmin
superuser), and the KafkaACL
to allow the user access to the topic.
In a file named kafka-acl-user-crab.yaml
, add the following two resources:
apiVersion: aiven.io/v1alpha1\nkind: ServiceUser\nmetadata:\n # the name of our user \ud83e\udd80\n name: crab\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n # the Secret name we will store the users' connection information\n # looks exactly the same as the Secret generated when creating the Kafka cluster\n # we will use this Secret to produce and consume events later!\n connInfoSecretTarget:\n name: kafka-crab-connection\n\n # the Aiven project the user is related to\n project: <your-project-name>\n\n # the name of our Kafka Service\n serviceName: kafka-sample\n\n---\n\napiVersion: aiven.io/v1alpha1\nkind: KafkaACL\nmetadata:\n name: crab\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n project: <your-project-name>\n serviceName: kafka-sample\n\n # the username from the ServiceUser above\n username: crab\n\n # the ACL allows to produce and consume on the topic\n permission: readwrite\n\n # specify the topic we created before\n topic: random-strings\n
To create the crab
user and its permissions, execute the following command:
kubectl apply -f kafka-acl-user-crab.yaml\n
"},{"location":"resources/kafka/index.html#producing-and-consuming-events","title":"Producing and consuming events","text":"Using the previously created KafkaTopic
, ServiceUser
, KafkaACL
, you can produce and consume events.
You can use kcat to produce a message into Kafka, with the -t random-strings
argument to select the desired topic, and the content of the /etc/issue
file as the message body.
1. Create a kafka-crab-produce.yaml
file with the content below:
apiVersion: v1\nkind: Pod\nmetadata:\n name: kafka-crab-produce\nspec:\n restartPolicy: Never\n containers:\n - image: edenhill/kcat:1.7.0\n name: kcat\n\n # the command below will produce a message with the /etc/issue file content\n command: [\n 'kcat', '-b', '$(HOST):$(PORT)',\n '-X', 'security.protocol=SSL',\n '-X', 'ssl.key.location=/crab-auth/ACCESS_KEY',\n '-X', 'ssl.key.password=$(PASSWORD)',\n '-X', 'ssl.certificate.location=/crab-auth/ACCESS_CERT',\n '-X', 'ssl.ca.location=/crab-auth/CA_CERT',\n '-P', '-t', 'random-strings', '/etc/issue',\n ]\n\n # loading the crab user data from the Secret as environment variables\n # useful to access the Kafka information, like hostname and port\n envFrom:\n - secretRef:\n name: kafka-crab-connection\n\n volumeMounts:\n - name: crab-auth\n mountPath: \"/crab-auth\"\n\n # loading the crab user information from the Secret as files in a volume\n # useful to access the Kafka certificates \n volumes:\n - name: crab-auth\n secret:\n secretName: kafka-crab-connection\n
2. Create the Pod with the following content:
kubectl apply -f kafka-crab-produce.yaml\n
Now your event is stored in Kafka.
To consume a message, you can use a graphical interface called Kowl. It allows you to explore information about your Kafka cluster, such as brokers, topics, or consumer groups.
1. Create a Kubernetes Pod and service to deploy and access Kowl. Create a file named kafka-crab-consume.yaml
with the content below:
apiVersion: v1\nkind: Pod\nmetadata:\n name: kafka-crab-consume\n labels:\n app: kafka-crab-consume\nspec:\n containers:\n - image: quay.io/cloudhut/kowl:v1.4.0\n name: kowl\n\n # kowl configuration values\n env:\n - name: KAFKA_TLS_ENABLED\n value: 'true'\n\n - name: KAFKA_BROKERS\n value: $(HOST):$(PORT)\n - name: KAFKA_TLS_PASSPHRASE\n value: $(PASSWORD)\n\n - name: KAFKA_TLS_CAFILEPATH\n value: /crab-auth/CA_CERT\n - name: KAFKA_TLS_CERTFILEPATH\n value: /crab-auth/ACCESS_CERT\n - name: KAFKA_TLS_KEYFILEPATH\n value: /crab-auth/ACCESS_KEY\n\n # inject all connection information as environment variables\n envFrom:\n - secretRef:\n name: kafka-crab-connection\n\n volumeMounts:\n - name: crab-auth\n mountPath: /crab-auth\n\n # loading the crab user information from the Secret as files in a volume\n # useful to access the Kafka certificates \n volumes:\n - name: crab-auth\n secret:\n secretName: kafka-crab-connection\n\n---\n\n# we will be using a simple service to access Kowl on port 8080\napiVersion: v1\nkind: Service\nmetadata:\n name: kafka-crab-consume\nspec:\n selector:\n app: kafka-crab-consume\n ports:\n - port: 8080\n targetPort: 8080\n
2. Create the resources with:
kubectl apply -f kafka-crab-consume.yaml\n
3. In another terminal create a port-forward tunnel to your Pod:
kubectl port-forward kafka-crab-consume 8080:8080\n
4. In the browser of your choice, access the http://localhost:8080 address. You now see a page with the random-strings
topic listed:
5. Click the topic name to see the message.
You have now consumed the message.
"},{"location":"resources/kafka/connect.html","title":"Kafka Connect","text":"Aiven for Apache Kafka Connect is a framework and a runtime for integrating Kafka with other systems. Kafka connectors can either be a source (for pulling data from other systems into Kafka) or sink (for pushing data into other systems from Kafka).
This section involves a few different Kubernetes CRDs: 1. A KafkaService
service with a KafkaTopic
2. A KafkaConnect
service 3. A ServiceIntegration
to integrate the Kafka
and KafkaConnect
services 4. A PostgreSQL
used as a sink to receive messages from Kafka
5. A KafkaConnector
to finally connect the Kafka
with the PostgreSQL
Create a file named kafka-sample-connect.yaml
with the following content:
apiVersion: aiven.io/v1alpha1\nkind: Kafka\nmetadata:\n name: kafka-sample-connect\nspec:\n # gets the authentication token from the `aiven-token` Secret\n authSecretRef:\n name: aiven-token\n key: token\n\n # outputs the Kafka connection on the `kafka-connection` Secret\n connInfoSecretTarget:\n name: kafka-auth\n\n # add your Project name here\n project: <your-project-name>\n\n # cloud provider and plan of your choice\n # you can check all of the possibilities here https://aiven.io/pricing\n cloudName: google-europe-west1\n plan: business-4\n\n # general Aiven configuration\n maintenanceWindowDow: friday\n maintenanceWindowTime: 23:00:00\n\n # specific Kafka configuration\n userConfig:\n kafka_version: '2.7'\n kafka_connect: true\n\n---\n\napiVersion: aiven.io/v1alpha1\nkind: KafkaTopic\nmetadata:\n name: kafka-topic-connect\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n project: <your-project-name>\n serviceName: kafka-sample-connect\n\n replication: 2\n partitions: 1\n
Next, create a file named kafka-connect.yaml
and add the following KafkaConnect
resource:
apiVersion: aiven.io/v1alpha1\nkind: KafkaConnect\nmetadata:\n name: kafka-connect\nspec:\n # gets the authentication token from the `aiven-token` Secret\n authSecretRef:\n name: aiven-token\n key: token\n\n # add your Project name here\n project: <your-project-name>\n\n # cloud provider and plan of your choice\n # you can check all of the possibilities here https://aiven.io/pricing\n cloudName: google-europe-west1\n plan: startup-4\n\n # general Aiven configuration\n maintenanceWindowDow: friday\n maintenanceWindowTime: 23:00:00\n
Now let's create a ServiceIntegration
. It will use the fields sourceServiceName
and destinationServiceName
to integrate the previously created kafka-sample-connect
and kafka-connect
. Open a new file named service-integration-connect.yaml
and add the content below:
apiVersion: aiven.io/v1alpha1\nkind: ServiceIntegration\nmetadata:\n name: service-integration-kafka-connect\nspec:\n\n # gets the authentication token from the `aiven-token` Secret\n authSecretRef:\n name: aiven-token\n key: token\n\n project: <your-project-name>\n\n # indicates the type of the integration\n integrationType: kafka_connect\n\n # we will send messages from the `kafka-sample-connect` to `kafka-connect`\n sourceServiceName: kafka-sample-connect\n destinationServiceName: kafka-connect\n
Let's add an Aiven for PostgreSQL service. It will be the service used as a sink, receiving messages from the kafka-sample-connect
cluster. Create a file named pg-sample-connect.yaml
with the content below:
apiVersion: aiven.io/v1alpha1\nkind: PostgreSQL\nmetadata:\n name: pg-connect\nspec:\n\n # gets the authentication token from the `aiven-token` Secret\n authSecretRef:\n name: aiven-token\n key: token\n\n # outputs the PostgreSQL connection on the `pg-connection` Secret\n connInfoSecretTarget:\n name: pg-connection\n\n # add your Project name here\n project: <your-project-name>\n\n # cloud provider and plan of your choice\n # you can check all of the possibilities here https://aiven.io/pricing\n cloudName: google-europe-west1\n plan: startup-4\n\n # general Aiven configuration\n maintenanceWindowDow: friday\n maintenanceWindowTime: 23:00:00\n
Finally, let's add the glue of everything: a KafkaConnector
. As described in the specification, it will receive messages from the kafka-sample-connect
and send them to the pg-connect
service. Check our official documentation for more connectors.
Create a file named kafka-connector-connect.yaml
with the content below:
apiVersion: aiven.io/v1alpha1\nkind: KafkaConnector\nmetadata:\n name: kafka-connector\nspec:\n authSecretRef:\n name: aiven-token\n key: token\n\n project: <your-project-name>\n\n # the Kafka cluster name\n serviceName: kafka-sample-connect\n\n # the connector we will be using\n connectorClass: io.aiven.connect.jdbc.JdbcSinkConnector\n\n userConfig:\n auto.create: \"true\"\n\n # constructs the pg-connect connection information\n connection.url: 'jdbc:postgresql://{{ fromSecret \"pg-connection\" \"PGHOST\"}}:{{ fromSecret \"pg-connection\" \"PGPORT\" }}/{{ fromSecret \"pg-connection\" \"PGDATABASE\" }}'\n connection.user: '{{ fromSecret \"pg-connection\" \"PGUSER\" }}'\n connection.password: '{{ fromSecret \"pg-connection\" \"PGPASSWORD\" }}'\n\n # specify which topics it will watch\n topics: kafka-topic-connect\n\n key.converter: org.apache.kafka.connect.json.JsonConverter\n value.converter: org.apache.kafka.connect.json.JsonConverter\n value.converter.schemas.enable: \"true\"\n
With all the files created, apply the new Kubernetes resources:
kubectl apply \\\n -f kafka-sample-connect.yaml \\\n -f kafka-connect.yaml \\\n -f service-integration-connect.yaml \\\n -f pg-sample-connect.yaml \\\n -f kafka-connector-connect.yaml\n
It will take some time for all the services to be up and running. You can check their status with the following command:
kubectl get \\\n kafkas.aiven.io/kafka-sample-connect \\\n kafkaconnects.aiven.io/kafka-connect \\\n postgresqls.aiven.io/pg-connect \\\n kafkaconnectors.aiven.io/kafka-connector\n
The output is similar to the following:
NAME PROJECT REGION PLAN STATE\nkafka.aiven.io/kafka-sample-connect your-project google-europe-west1 business-4 RUNNING\n\nNAME STATE\nkafkaconnect.aiven.io/kafka-connect RUNNING\n\nNAME PROJECT REGION PLAN STATE\npostgresql.aiven.io/pg-connect your-project google-europe-west1 startup-4 RUNNING\n\nNAME SERVICE NAME PROJECT CONNECTOR CLASS STATE TASKS TOTAL TASKS RUNNING\nkafkaconnector.aiven.io/kafka-connector kafka-sample-connect your-project io.aiven.connect.jdbc.JdbcSinkConnector RUNNING 1 1\n
The deployment is finished when all services have the state RUNNING.

## Testing

To test the integration, let's produce a Kafka message using kcat from within the Kubernetes cluster. We will deploy a Pod that crafts a message and sends it to the Kafka cluster, using the kafka-auth Secret generated by the Kafka CRD.
Create a new file named kcat-connect.yaml and add the content below:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kafka-message
spec:
  # restartPolicy belongs at the spec level, not inside the containers list
  restartPolicy: Never
  containers:
    - image: edenhill/kcat:1.7.0
      name: kcat

      command: ['/bin/sh']
      args: [
        '-c',
        'echo {\"schema\":{\"type\":\"struct\",\"fields\":[{ \"field\": \"text\", \"type\": \"string\", \"optional\": false } ] }, \"payload\": { \"text\": \"Hello World\" } } > /tmp/msg;

        kcat
        -b $(HOST):$(PORT)
        -X security.protocol=SSL
        -X ssl.key.location=/kafka-auth/ACCESS_KEY
        -X ssl.key.password=$(PASSWORD)
        -X ssl.certificate.location=/kafka-auth/ACCESS_CERT
        -X ssl.ca.location=/kafka-auth/CA_CERT
        -P -t kafka-topic-connect /tmp/msg'
      ]

      envFrom:
        - secretRef:
            name: kafka-auth

      volumeMounts:
        - name: kafka-auth
          mountPath: "/kafka-auth"

  volumes:
    - name: kafka-auth
      secret:
        secretName: kafka-auth
```
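For readability, this is the message the Pod produces, pretty-printed. The schema envelope is required because the connector sets value.converter.schemas.enable to "true"; the payload carries the text field that ends up as a column in PostgreSQL:

```json
{
  "schema": {
    "type": "struct",
    "fields": [
      { "field": "text", "type": "string", "optional": false }
    ]
  },
  "payload": { "text": "Hello World" }
}
```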
Apply the file with:
```shell
kubectl apply -f kcat-connect.yaml
```
The Pod will execute the commands and finish. You can confirm its Completed state with:
```shell
kubectl get pod kafka-message
```
The output is similar to the following:
```
NAME            READY   STATUS      RESTARTS   AGE
kafka-message   0/1     Completed   0          5m35s
```
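If the Pod reports an Error status instead, the kcat output usually points at the cause (for example, a TLS or connectivity problem); inspect it with:

```shell
kubectl logs kafka-message
```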
If everything went smoothly, the message we produced should now be in the PostgreSQL service. Let's check that out.
Create a file named psql-connect.yaml with the content below:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: psql-connect
spec:
  restartPolicy: Never
  containers:
    - image: postgres:13
      name: postgres
      # "kafka-topic-connect" is the table created by the JDBC sink connector
      # thanks to the `auto.create` option
      command: ['psql', '$(DATABASE_URI)', '-c', 'SELECT * from "kafka-topic-connect";']

      envFrom:
        - secretRef:
            name: pg-connection
```
Apply the file with:
```shell
kubectl apply -f psql-connect.yaml
```
After a couple of seconds, inspect its log with this command:
```shell
kubectl logs psql-connect
```
The output is similar to the following:
```
    text
-------------
 Hello World
(1 row)
```
"},{"location":"resources/kafka/connect.html#clean-up","title":"Clean up","text":"To clean up all the created resources, use the following command:
```shell
kubectl delete \
  -f kafka-sample-connect.yaml \
  -f kafka-connect.yaml \
  -f service-integration-connect.yaml \
  -f pg-sample-connect.yaml \
  -f kafka-connector-connect.yaml \
  -f kcat-connect.yaml \
  -f psql-connect.yaml
```
"},{"location":"resources/kafka/schema.html","title":"Kafka Schema","text":""},{"location":"resources/kafka/schema.html#creating-a-kafkaschema","title":"Creating a KafkaSchema
","text":"Aiven develops and maintain Karapace, an open source implementation of Kafka REST and schema registry. Is available out of the box for our managed Kafka service.
The schema registry address and authentication is the same as the Kafka broker, the only different is the usage of the port 13044.
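Once a Kafka service with the schema registry enabled is running (we create one below), you can also talk to Karapace directly through its Confluent-compatible REST API. This is a sketch, assuming the service's connection Secret is named kafka-auth and exposes HOST, USERNAME, and PASSWORD keys that are valid for the registry:

```shell
# read the registry endpoint and credentials from the kafka-auth Secret
# (the USERNAME/PASSWORD key names are an assumption; check your Secret)
HOST=$(kubectl get secret kafka-auth -o jsonpath='{.data.HOST}' | base64 -d)
USERNAME=$(kubectl get secret kafka-auth -o jsonpath='{.data.USERNAME}' | base64 -d)
PASSWORD=$(kubectl get secret kafka-auth -o jsonpath='{.data.PASSWORD}' | base64 -d)

# list all registered subjects
curl -s -u "$USERNAME:$PASSWORD" "https://$HOST:13044/subjects"
```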
First, let's create an Aiven for Apache Kafka service.
1. Create a file named kafka-sample-schema.yaml and add the content below:
```yaml
apiVersion: aiven.io/v1alpha1
kind: Kafka
metadata:
  name: kafka-sample-schema
spec:
  authSecretRef:
    name: aiven-token
    key: token

  connInfoSecretTarget:
    name: kafka-auth

  project: <your-project-name>
  cloudName: google-europe-west1
  plan: startup-2
  maintenanceWindowDow: friday
  maintenanceWindowTime: "23:00:00"

  userConfig:
    kafka_version: '2.7'

    # this flag enables the Schema registry
    schema_registry: true
```
2. Apply the changes with the following command:
```shell
kubectl apply -f kafka-sample-schema.yaml
```
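The service takes a few minutes to build. Before creating the schema, confirm that it has reached the RUNNING state:

```shell
kubectl get kafkas.aiven.io kafka-sample-schema
```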
Now, let's create the schema itself.
1. Create a new file named kafka-schema.yaml and add the YAML content below:
```yaml
apiVersion: aiven.io/v1alpha1
kind: KafkaSchema
metadata:
  name: kafka-schema
spec:
  authSecretRef:
    name: aiven-token
    key: token

  project: <your-project-name>
  serviceName: kafka-sample-schema

  # the name of the Schema
  subjectName: MySchema

  # the schema itself, in JSON format
  schema: |
    {
      "type": "record",
      "name": "MySchema",
      "fields": [
        {
          "name": "field",
          "type": "string"
        }
      ]
    }

  # sets the schema compatibility level
  compatibilityLevel: BACKWARD
```
2. Create the schema with the command:
```shell
kubectl apply -f kafka-schema.yaml
```
3. Review the resource you created with the following command:
```shell
kubectl get kafkaschemas.aiven.io kafka-schema
```
The output is similar to the following:
```
NAME           SERVICE NAME          PROJECT          SUBJECT    COMPATIBILITY LEVEL   VERSION
kafka-schema   kafka-sample-schema   <your-project>   MySchema   BACKWARD              1
```
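As an optional check, you can fetch the registered schema straight from the registry's REST API, reusing the connection variables from the sketch earlier on this page (same assumptions about the kafka-auth Secret keys):

```shell
# fetch version 1 of the MySchema subject
curl -s -u "$USERNAME:$PASSWORD" "https://$HOST:13044/subjects/MySchema/versions/1"
```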
Now you can follow the instructions on using a schema registry in Java to work with the schema you created.
"}]} \ No newline at end of file diff --git a/sitemap.xml b/sitemap.xml index d1a60a43..48ca3b38 100644 --- a/sitemap.xml +++ b/sitemap.xml @@ -2,217 +2,217 @@