Below are the supported variables for this role.
Version of Confluent Platform to install
Default: 7.0.1
Boolean to mask secrets in playbook output, defaults to true
Default: "{{mask_secrets}}"
Boolean to mask output generated by diff flag
Default: true
Boolean controlling whether to copy the agent jar from the Ansible control host or download it from a remote URL
Default: true
Boolean to enable Jolokia Agent installation and configuration on all components
Default: false
Full path to download the Jolokia Agent Jar
Default: /opt/jolokia/jolokia.jar
Authentication Mode for Jolokia Agent. Possible values: none, basic. If selecting basic, you must set jolokia_user and jolokia_password
Default: none
Username for Jolokia Agent when using Basic Auth
Default: admin
Password for Jolokia Agent when using Basic Auth
Default: password
Boolean controlling whether to copy the agent jar from the Ansible control host or download it from a remote URL
Default: true
Boolean to enable Prometheus Exporter Agent installation and configuration on all components
Default: false
Full path to download the Prometheus Exporter Agent Jar
Default: /opt/prometheus/jmx_prometheus_javaagent.jar
Boolean to have cp-ansible configure components with FIPS security settings. Requires ssl_enabled: true and Java 8. Only valid for self-signed certs and ssl_custom_certs: true, not ssl_provided_keystore_and_truststore: true.
Default: false
Boolean to configure ZK, Kafka Broker, Kafka Connect, and ksqlDB's logging with the RollingFileAppender and log cleanup functionality. Not necessary for other components.
Default: true
Boolean to configure the Kerberos krb5.conf file. Must also set kerberos.realm, kerberos.kdc_hostname, and kerberos.admin_hostname, where kerberos is a dictionary
Default: true
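A minimal sketch of the kerberos dictionary in inventory vars, using the keys named above (realm and hostnames are illustrative):
    kerberos:
      realm: EXAMPLE.COM
      kdc_hostname: kdc1.example.com
      admin_hostname: kdc1.example.com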
Boolean to install commercially licensed confluent-server instead of community version: confluent-kafka
Default: true
Boolean to enable health checks on all components
Default: true
Boolean to enable health checks on Zookeeper
Default: "{{health_checks_enabled}}"
Boolean to enable health checks on Kafka
Default: "{{health_checks_enabled}}"
Boolean to enable health checks on Schema Registry
Default: "{{health_checks_enabled}}"
Boolean to enable health checks on Kafka Connect
Default: "{{health_checks_enabled}}"
Boolean to enable health checks on Rest Proxy
Default: "{{health_checks_enabled}}"
Boolean to enable health checks on ksqlDB
Default: "{{health_checks_enabled}}"
Boolean to enable health checks on Control Center
Default: "{{health_checks_enabled}}"
Boolean to configure Monitoring Interceptors on ksqlDB, Rest Proxy, and Connect. Defaults to true if Control Center is in the inventory. Enable if you wish to have monitoring interceptors report to a centralized monitoring cluster.
Default: "{{ 'control_center' in groups }}"
The method of installation. Valid values are "package" or "archive". If "archive" is selected, services are not installed via yum or apt; instead the target .tar.gz file from the Confluent archive is expanded into the path defined by archive_destination_path. Configuration files are also kept in this directory structure instead of /etc. Systemd service units are copied from the archive for each target service and overrides are created pointing at the new paths. The "package" installation method is the default behavior and uses yum/apt.
Default: "package"
The path the downloaded archive is expanded into. Using the default with a confluent_package_version of 5.5.1 results in the installation path /opt/confluent/confluent-5.5.1/, which contains directories such as bin and share, but this may be overridden using the binary_base_path property.
Default: "/opt/confluent"
Owner of the downloaded archive. Not mandatory to set.
Default: ""
Group Owner of the downloaded archive. Not mandatory to set.
Default: ""
If the installation_method is 'archive' then this will be the base path for the configuration files; otherwise configuration files are in the default /etc locations. For example, configuration files may be placed in /opt/confluent/etc using this variable.
Default: "{{ archive_destination_path }}"
Boolean to have cp-ansible download the Confluent CLI
Default: "{{rbac_enabled or secrets_protection_enabled}}"
The path the Confluent CLI archive is expanded into.
Default: /opt/confluent-cli
Full path on hosts for Confluent CLI symlink to executable
Default: "/usr/local/bin/confluent"
Confluent CLI version to download (e.g. "1.9.0"). Support matrix https://docs.confluent.io/platform/current/installation/versions-interoperability.html#confluent-cli
Default: 1.43.0
Recommended replication factor, defaults to 3. When splitting your cluster across 2 DCs with 4 or more Brokers, this should be increased to 4 to balance topic replicas.
Default: 3
SASL Mechanism to set on all Kafka Listeners. Configures all components to use that mechanism for authentication. Possible options none, kerberos, plain, scram, scram256
Default: none
Boolean to configure components with TLS Encryption. Also manages Java Keystore creation
Default: false
Boolean to enable mTLS Authentication on all components. Configures all components to use mTLS for authentication into Kafka
Default: false
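As a sketch, TLS encryption and mTLS authentication can be enabled together in inventory vars; the variable names are taken from the defaults referenced elsewhere in this document:
    ssl_enabled: true
    ssl_mutual_auth_enabled: true
    sasl_protocol: none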
Boolean to create Keystores with Self Signed Certificates, defaults to true. Alternatively can use ssl_provided_keystore_and_truststore or ssl_custom_certs
Default: "{{ false if ssl_provided_keystore_and_truststore|bool or ssl_custom_certs|bool else true }}"
Directory on hosts to store all ssl files.
Default: /var/ssl/private/
Boolean to have reruns of all.yml regenerate the certificate authority used for self signed certs.
Default: false
Boolean to have reruns of all.yml recreate Keystores. On first install, keystores will be created.
Default: "{{regenerate_ca}}"
Boolean for TLS Encryption option to provide own Host Keystores.
Default: false
Full path to host specific keystore on ansible control node. Used with ssl_provided_keystore_and_truststore: true. May set per host, or use inventory_hostname variable eg "/tmp/certs/{{inventory_hostname}}-keystore.jks"
Default: ""
Keystore Key Password for host specific keystore. Used with ssl_provided_keystore_and_truststore: true. May set per host if keystores have unique passwords
Default: ""
Keystore Password for host specific keystore. Used with ssl_provided_keystore_and_truststore: true. May set per host if keystores have unique passwords
Default: ""
Keystore source alias for host specific certificate. Only required if keystore contains more than one certificate. Used with ssl_provided_keystore_and_truststore: true. May set per host, or use inventory_hostname variable eg "{{inventory_hostname}}"
Default: ""
Full path to host specific truststore on ansible control node. Used with ssl_provided_keystore_and_truststore: true. Can share same keystore for all components if it contains all ca certs used to sign host certificates
Default: ""
Keystore Password for host specific truststore. Used with ssl_provided_keystore_and_truststore: true
Default: ""
Keystore alias for ca certificate
Default: ""
Boolean for TLS Encryption option to provide own Host Certificates. Must also set ssl_ca_cert_filepath, ssl_signed_cert_filepath, ssl_key_filepath, ssl_key_password
Default: false
Full path to CA Certificate Bundle on ansible control node. Used with ssl_custom_certs: true
Default: ""
Full path to host specific signed cert on ansible control node. Used with ssl_custom_certs: true. May set per host, or use inventory_hostname variable eg "/tmp/certs/{{inventory_hostname}}-signed.crt"
Default: ""
Full path to host specific key on ansible control node. Used with ssl_custom_certs: true. May set per host, or use inventory_hostname variable eg "/tmp/certs/{{inventory_hostname}}-key.pem"
Default: ""
Password to host specific key. Do not set if key does not require password. Used with ssl_custom_certs: true.
Default: ""
Boolean stating certs and keys are already on hosts. Used with ssl_custom_certs: true.
Default: false
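A minimal sketch using custom certificates, with the four path/password variables named in the entries above (paths are illustrative):
    ssl_custom_certs: true
    ssl_ca_cert_filepath: /tmp/certs/ca-bundle.crt
    ssl_signed_cert_filepath: "/tmp/certs/{{inventory_hostname}}-signed.crt"
    ssl_key_filepath: "/tmp/certs/{{inventory_hostname}}-key.pem"
    ssl_key_password: changeme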
Enable Hostname Aliasing for host addressing. This will enable logic, on an individual host basis, to look for the variable hostname, followed by the reserved variable ansible_host, and then inventory_hostname to resolve the appropriate FQDN of a host to use within configuration properties.
Default: false
Collection of Ansible Group names for All Kafka Connect Clusters that Control Center should be aware of.
Default: "{{ ['kafka_connect'] if 'kafka_connect' in groups else [] }}"
Collection of Ansible Group names for All ksqlDB Clusters that Control Center should be aware of.
Default: "{{ ['ksql'] if 'ksql' in groups else [] }}"
Boolean to Run Host Validations. Validations include OS Version compatibility and Proper Internet Connectivity
Default: true
Set this variable to customize the Linux User that the Zookeeper Service runs with. Default user is cp-kafka.
Default: "{{zookeeper_default_user}}"
Set this variable to customize the Linux Group that the Zookeeper Service user belongs to. Default group is confluent.
Default: "{{zookeeper_default_group}}"
Boolean to configure zookeeper with TLS Encryption. Also manages Java Keystore creation
Default: "{{ssl_enabled}}"
Deprecated- Boolean to enable mTLS Authentication on Zookeeper (Server to Server and Client to Server). Configures kafka to authenticate with mTLS.
Default: "{{ssl_mutual_auth_enabled}}"
Deprecated- SASL Mechanism for Zookeeper Server to Server and Server to Client Authentication. Options are none, kerberos, digest. Server to server auth only working for digest-md5
Default: "{{sasl_protocol if sasl_protocol == 'kerberos' else 'none'}}"
Authentication to put on ZK Server to Server connections. Available options: [mtls, digest].
Default: "{{ 'mtls' if zookeeper_ssl_enabled and zookeeper_ssl_mutual_auth_enabled else zookeeper_sasl_protocol }}"
Authentication to put on ZK Client to Server connections. This is Kafka's connection to ZK. Available options: [mtls, digest, kerberos].
Default: "{{ 'mtls' if zookeeper_ssl_enabled and zookeeper_ssl_mutual_auth_enabled else zookeeper_sasl_protocol }}"
Port for Kafka to Zookeeper connections
Default: "{{'2182' if zookeeper_ssl_enabled|bool else '2181'}}"
Set this variable to customize the directory that Zookeeper writes log files to. Default location is /var/log/kafka. NOTE- zookeeper.log_path is deprecated.
Default: "{{zookeeper.log_path}}"
Chroot path in Zookeeper used by Kafka. Defaults to no chroot. Must begin with a /
Default: ""
Boolean to enable Jolokia Agent installation and configuration on zookeeper
Default: "{{jolokia_enabled}}"
Port to expose jolokia metrics. Beware of port collisions if colocating components on same host
Default: 7770
Boolean to enable TLS encryption on Zookeeper jolokia metrics
Default: "{{ zookeeper_ssl_enabled }}"
Path on Zookeeper host for Jolokia Configuration file
Default: "{{ archive_config_base_path if installation_method == 'archive' else '' }}/etc/kafka/zookeeper_jolokia.properties"
Authentication Mode for Zookeeper's Jolokia Agent. Possible values: none, basic. If selecting basic, you must set zookeeper_jolokia_user and zookeeper_jolokia_password
Default: "{{jolokia_auth_mode}}"
Username for Zookeeper's Jolokia Agent when using Basic Auth
Default: "{{jolokia_user}}"
Password for Zookeeper's Jolokia Agent when using Basic Auth
Default: "{{jolokia_password}}"
Boolean to enable Prometheus Exporter Agent installation and configuration on zookeeper
Default: "{{jmxexporter_enabled}}"
Port to expose prometheus metrics. Beware of port collisions if colocating components on same host
Default: 8079
Path on Ansible Controller for Zookeeper jmx config file. Only necessary to set for custom config.
Default: zookeeper.yml
Destination path for Zookeeper jmx config file
Default: /opt/prometheus/zookeeper.yml
Zookeeper peer port
Default: 2888
Zookeeper leader port
Default: 3888
Use to copy files from control node to zookeeper hosts. Set to list of dictionaries with keys: source_path (full path of file on control node) and destination_path (full path to copy file to). Optionally specify directory_mode (default: '0750') and file_mode (default: '0640') to set directory and file permissions.
Default: []
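A sketch of the copy-files list structure described above; the top-level variable name zookeeper_copy_files is an assumption, while the dictionary keys are those named in the entry:
    zookeeper_copy_files:
      - source_path: /path/on/control/node/zookeeper-jaas.conf
        destination_path: /etc/kafka/zookeeper-jaas.conf
        file_mode: '0640'
        directory_mode: '0750'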
Use to set custom zookeeper properties. This variable is a dictionary. Put values true/false in quotation marks to preserve case. NOTE- zookeeper.properties is deprecated.
Default: "{{ zookeeper.properties }}"
Dictionary to put additional listeners to be configured within Kafka. Each listener must include a 'name' and 'port' key. Optionally they can include the keys 'ssl_enabled', 'ssl_mutual_auth_enabled', and 'sasl_protocol'
Default: {}
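A sketch of one additional listener using the keys named above; the top-level variable name kafka_broker_custom_listeners is an assumption:
    kafka_broker_custom_listeners:
      client_listener:
        name: CLIENT
        port: 9093
        ssl_enabled: true
        ssl_mutual_auth_enabled: false
        sasl_protocol: scram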
Boolean to configure more than one kafka listener. Defaults to true. NOTE- kafka_broker_configure_additional_brokers is deprecated
Default: "{{kafka_broker_configure_additional_brokers}}"
Boolean to configure control plane listener on separate port, which defaults to 8089. Applied only if kafka_broker_configure_multiple_listeners is true
Default: false
Control Plane listener name.
Default: controller
Set this variable to customize the Linux User that the Kafka Broker Service runs with. Default user is cp-kafka.
Default: "{{kafka_broker_default_user}}"
Set this variable to customize the Linux Group that the Kafka Broker Service user belongs to. Default group is confluent.
Default: "{{kafka_broker_default_group}}"
Set this variable to customize the directory that the Kafka Broker writes log files to. Default location is /var/log/kafka. NOTE- kafka_broker.appender_log_path is deprecated.
Default: "{{kafka_broker.appender_log_path}}"
Boolean to configure Schema Validation on Kafka
Default: true
Boolean to enable Jolokia Agent installation and configuration on kafka
Default: "{{jolokia_enabled}}"
Port to expose kafka jolokia metrics. Beware of port collisions if colocating components on same host
Default: 7771
Boolean to enable TLS encryption on Kafka jolokia metrics
Default: "{{ ssl_enabled }}"
Path on Kafka host for Jolokia Configuration file
Default: "{{ archive_config_base_path if installation_method == 'archive' else '' }}/etc/kafka/kafka_jolokia.properties"
Authentication Mode for Kafka's Jolokia Agent. Possible values: none, basic. If selecting basic, you must set kafka_broker_jolokia_user and kafka_broker_jolokia_password
Default: "{{jolokia_auth_mode}}"
Username for Kafka's Jolokia Agent when using Basic Auth
Default: "{{jolokia_user}}"
Password for Kafka's Jolokia Agent when using Basic Auth
Default: "{{jolokia_password}}"
Boolean to enable Prometheus Exporter Agent installation and configuration on kafka
Default: "{{jmxexporter_enabled}}"
Port to expose prometheus metrics. Beware of port collisions if colocating components on same host
Default: 8080
Path on Ansible Controller for Kafka Broker jmx config file. Only necessary to set for custom config.
Default: kafka.yml.j2
Destination path for Kafka Broker jmx config file
Default: /opt/prometheus/kafka.yml
Use to copy files from control node to kafka hosts. Set to list of dictionaries with keys: source_path (full path of file on control node) and destination_path (full path to copy file to). Optionally specify directory_mode (default: '0750') and file_mode (default: '0640') to set directory and file permissions.
Default: []
Replication Factor for internal topics. Defaults to the minimum of the number of brokers and can be overridden via default replication factor (see default_internal_replication_factor).
Default: "{{ [ groups['kafka_broker'] | default(['localhost']) | length, default_internal_replication_factor ] | min }}"
Boolean to enable the kafka's metrics reporter. Defaults to true if Control Center in inventory. Enable if you wish to have metrics reported to a centralized monitoring cluster.
Default: "{{ 'control_center' in groups }}"
Use to set custom kafka properties. This variable is a dictionary. Put values true/false in quotation marks to preserve case. NOTE- kafka_broker.properties is deprecated.
Default: "{{ kafka_broker.properties }}"
Boolean to enable the embedded rest proxy within Kafka. NOTE- Embedded Rest Proxy must be enabled if RBAC is enabled and Confluent Server must be enabled
Default: "{{confluent_server_enabled and not ccloud_kafka_enabled}}"
Authentication type to add to Kafka's embedded rest proxy or Admin API. Do not set when RBAC is enabled. Options: [basic, none]
Default: none
Use to register and identify your Kafka cluster in the MDS.
Default: ""
Set this variable to customize the Linux User that the Schema Registry Service runs with. Default user is cp-schema-registry.
Default: "{{schema_registry_default_user}}"
Set this variable to customize the Linux Group that the Schema Registry Service user belongs to. Default group is confluent.
Default: "{{schema_registry_default_group}}"
Port Schema Registry API exposed over
Default: 8081
Replication Factor for schemas topic. Defaults to the minimum of the number of brokers and can be overridden via default replication factor (see default_internal_replication_factor).
Default: "{{ 3 if ccloud_kafka_enabled|bool else
Boolean to configure schema registry with TLS Encryption. Also manages Java Keystore creation
Default: "{{ssl_enabled}}"
Deprecated- Boolean to enable mTLS Authentication on Schema Registry
Default: "{{ ssl_mutual_auth_enabled }}"
Authentication to put on Schema Registry Rest Endpoint. Available options: [mtls, basic, none].
Default: "{{ 'mtls' if schema_registry_ssl_mutual_auth_enabled else 'none' }}"
Set this variable to customize the directory that the Schema Registry writes log files to. Default location is /var/log/confluent/schema-registry. NOTE- schema_registry.appender_log_path is deprecated.
Default: "{{schema_registry.appender_log_path}}"
Boolean to enable Jolokia Agent installation and configuration on schema registry
Default: "{{jolokia_enabled}}"
Port to expose schema registry jolokia metrics. Beware of port collisions if colocating components on same host
Default: 7772
Boolean to enable TLS encryption on Schema Registry jolokia metrics
Default: "{{ schema_registry_ssl_enabled }}"
Path on Schema Registry host for Jolokia Configuration file
Default: "{{ archive_config_base_path if installation_method == 'archive' else '' }}/etc/schema-registry/schema_registry_jolokia.properties"
Authentication Mode for Schema Registry's Jolokia Agent. Possible values: none, basic. If selecting basic, you must set schema_registry_jolokia_user and schema_registry_jolokia_password
Default: "{{jolokia_auth_mode}}"
Username for Schema Registry's Jolokia Agent when using Basic Auth
Default: "{{jolokia_user}}"
Password for Schema Registry's Jolokia Agent when using Basic Auth
Default: "{{jolokia_password}}"
Boolean to enable Prometheus Exporter Agent installation and configuration on schema registry
Default: "{{jmxexporter_enabled}}"
Path on Ansible Controller for Schema Registry jmx config file. Only necessary to set for custom config.
Default: schema_registry.yml
Destination path for Schema Registry jmx config file
Default: /opt/prometheus/schema_registry.yml
Port to expose prometheus metrics. Beware of port collisions if colocating components on same host
Default: 8078
Use to copy files from control node to schema registry hosts. Set to list of dictionaries with keys: source_path (full path of file on control node) and destination_path (full path to copy file to). Optionally specify directory_mode (default: '0750') and file_mode (default: '0640') to set directory and file permissions.
Default: []
Use to set custom schema registry properties. This variable is a dictionary. Put values true/false in quotation marks to preserve case. NOTE- schema_registry.properties is deprecated.
Default: "{{ schema_registry.properties }}"
Use to register and identify your Schema Registry cluster in the MDS.
Default: ""
Set this variable to customize the Linux User that the Rest Proxy Service runs with. Default user is cp-kafka-rest.
Default: "{{kafka_rest_default_user}}"
Set this variable to customize the Linux Group that the Rest Proxy Service user belongs to. Default group is confluent.
Default: "{{kafka_rest_default_group}}"
Port Rest Proxy API exposed over
Default: 8082
Boolean to configure Rest Proxy with TLS Encryption. Also manages Java Keystore creation
Default: "{{ssl_enabled}}"
Deprecated- Boolean to enable mTLS Authentication on Rest Proxy
Default: "{{ ssl_mutual_auth_enabled }}"
Authentication to put on Rest Proxy's Rest Endpoint. Available options: [mtls, basic, none].
Default: "{{ 'mtls' if kafka_rest_ssl_mutual_auth_enabled else 'none' }}"
Set this variable to customize the directory that the Rest Proxy writes log files to. Default location is /var/log/confluent/kafka-rest. NOTE- kafka_rest.appender_log_path is deprecated.
Default: "{{kafka_rest.appender_log_path}}"
Boolean to enable Jolokia Agent installation and configuration on Rest Proxy
Default: "{{jolokia_enabled}}"
Port to expose Rest Proxy jolokia metrics. Beware of port collisions if colocating components on same host
Default: 7775
Boolean to enable TLS encryption on Rest Proxy jolokia metrics
Default: "{{ kafka_rest_ssl_enabled }}"
Path on Rest Proxy host for Jolokia Configuration file
Default: "{{ archive_config_base_path if installation_method == 'archive' else '' }}/etc/kafka-rest/kafka_rest_jolokia.properties"
Authentication Mode for Rest Proxy's Jolokia Agent. Possible values: none, basic. If selecting basic, you must set kafka_rest_jolokia_user and kafka_rest_jolokia_password
Default: "{{jolokia_auth_mode}}"
Username for Rest Proxy's Jolokia Agent when using Basic Auth
Default: "{{jolokia_user}}"
Password for Rest Proxy's Jolokia Agent when using Basic Auth
Default: "{{jolokia_password}}"
Boolean to enable Prometheus Exporter Agent installation and configuration on Rest Proxy
Default: "{{jmxexporter_enabled}}"
Path on Ansible Controller for Rest Proxy jmx config file. Only necessary to set for custom config.
Default: kafka_rest.yml
Destination path for Rest Proxy jmx config file
Default: /opt/prometheus/kafka_rest.yml
Port to expose prometheus metrics. Beware of port collisions if colocating components on same host
Default: 8075
Use to copy files from control node to Rest Proxy hosts. Set to list of dictionaries with keys: source_path (full path of file on control node) and destination_path (full path to copy file to). Optionally specify directory_mode (default: '0750') and file_mode (default: '0640') to set directory and file permissions.
Default: []
Use to set custom Rest Proxy properties. This variable is a dictionary. Put values true/false in quotation marks to preserve case. NOTE- kafka_rest.properties is deprecated.
Default: "{{ kafka_rest.properties }}"
Boolean to configure Monitoring Interceptors on Rest Proxy.
Default: "{{ monitoring_interceptors_enabled }}"
Service Name to define/use for the Kafka Connect systemd service.
Default: "{{kafka_connect_default_service_name}}"
Config/Properties Filename to use when setting up and configuring Kafka Connect
Default: "{{kafka_connect_default_config_filename}}"
Set this variable to customize the Linux User that the Kafka Connect Service runs with. Default user is cp-kafka-connect.
Default: "{{kafka_connect_default_user}}"
Set this variable to customize the Linux Group that the Kafka Connect Service user belongs to. Default group is confluent.
Default: "{{kafka_connect_default_group}}"
Port Connect API exposed over
Default: 8083
Boolean to configure Connect with TLS Encryption. Also manages Java Keystore creation
Default: "{{ssl_enabled}}"
Deprecated- Boolean to enable mTLS Authentication on Connect
Default: "{{ ssl_mutual_auth_enabled }}"
Authentication to put on Connect's Rest Endpoint. Available options: [mtls, basic, none].
Default: "{{ 'mtls' if kafka_connect_ssl_mutual_auth_enabled|bool else 'none' }}"
Set this variable to customize the directory that Kafka Connect writes log files to. Default location is /var/log/kafka. NOTE- kafka_connect.appender_log_path is deprecated.
Default: "{{kafka_connect.appender_log_path}}"
Additional set of Connect extension classes.
Default: []
Boolean to enable Jolokia Agent installation and configuration on Connect
Default: "{{jolokia_enabled}}"
Port to expose Connect jolokia metrics. Beware of port collisions if colocating components on same host
Default: 7773
Boolean to enable TLS encryption on Connect jolokia metrics
Default: "{{ kafka_connect_ssl_enabled }}"
Path on Connect host for Jolokia Configuration file
Default: "{{ archive_config_base_path if installation_method == 'archive' else '' }}/etc/kafka/kafka_connect_jolokia.properties"
Authentication Mode for Connect's Jolokia Agent. Possible values: none, basic. If selecting basic, you must set kafka_connect_jolokia_user and kafka_connect_jolokia_password
Default: "{{jolokia_auth_mode}}"
Username for Connect's Jolokia Agent when using Basic Auth
Default: "{{jolokia_user}}"
Password for Connect's Jolokia Agent when using Basic Auth
Default: "{{jolokia_password}}"
Boolean to enable Prometheus Exporter Agent installation and configuration on Connect
Default: "{{jmxexporter_enabled}}"
Path on Ansible Controller for Connect jmx config file. Only necessary to set for custom config.
Default: kafka_connect.yml
Destination path for Connect jmx config file
Default: /opt/prometheus/kafka_connect.yml
Port to expose connect prometheus metrics. Beware of port collisions if colocating components on same host
Default: 8077
Use to copy files from control node to connect hosts. Set to list of dictionaries with keys: source_path (full path of file on control node) and destination_path (full path to copy file to). Optionally specify directory_mode (default: '0750') and file_mode (default: '0640') to set directory and file permissions.
Default: []
Connect Service Group Id. Customize when configuring multiple connect clusters in same inventory
Default: connect-cluster
Replication Factor for connect internal topics. Defaults to the minimum of the number of brokers and can be overridden via default replication factor (see default_internal_replication_factor).
Default: "{{ 3 if ccloud_kafka_enabled|bool else
Boolean to enable and configure Connect Secret Registry
Default: "{{rbac_enabled}}"
Connect Secret Registry Key
Default: 39ff95832750c0090d84ddf5344583832efe91ef
Use to set custom Connect properties. This variable is a dictionary. Put values true/false in quotation marks to preserve case. NOTE- kafka_connect.properties is deprecated.
Default: "{{ kafka_connect.properties }}"
Boolean to configure Monitoring Interceptors on Connect.
Default: "{{ monitoring_interceptors_enabled }}"
Use to register and identify your Kafka Connect cluster in the MDS.
Default: ""
Set this variable to customize the Linux User that the ksqlDB Service runs with. Default user is cp-ksql.
Default: "{{ksql_default_user}}"
Set this variable to customize the Linux Group that the ksqlDB Service user belongs to. Default group is confluent.
Default: "{{ksql_default_group}}"
Port ksqlDB API exposed over
Default: 8088
Boolean to configure ksqlDB with TLS Encryption. Also manages Java Keystore creation
Default: "{{ssl_enabled}}"
Deprecated - Boolean to enable mTLS Authentication on ksqlDB
Default: "{{ ssl_mutual_auth_enabled }}"
Authentication to put on ksqlDB's Rest Endpoint. Available options: [mtls, basic, none].
Default: "{{ 'mtls' if ksql_ssl_mutual_auth_enabled|bool else 'none' }}"
Set this variable to customize the directory that ksqlDB writes log files to. Default location is /var/log/confluent/ksql. NOTE- ksql.appender_log_path is deprecated.
Default: "{{ksql.appender_log_path}}"
Boolean to enable Jolokia Agent installation and configuration on ksqlDB
Default: "{{jolokia_enabled}}"
Port to expose ksqlDB jolokia metrics. Beware of port collisions if colocating components on same host
Default: 7774
Boolean to enable TLS encryption on ksqlDB jolokia metrics
Default: "{{ ksql_ssl_enabled }}"
Path on ksqlDB host for Jolokia Configuration file
Default: "{{ archive_config_base_path if installation_method == 'archive' else '' }}{{(confluent_package_version is version('5.5.0', '>=')) | ternary('/etc/ksqldb/ksql_jolokia.properties' , '/etc/ksql/ksql_jolokia.properties')}}"
Authentication Mode for ksqlDB's Jolokia Agent. Possible values: none, basic. If selecting basic, you must set ksql_jolokia_user and ksql_jolokia_password
Default: "{{jolokia_auth_mode}}"
Username for ksqlDB's Jolokia Agent when using Basic Auth
Default: "{{jolokia_user}}"
Password for ksqlDB's Jolokia Agent when using Basic Auth
Default: "{{jolokia_password}}"
Boolean to enable Prometheus Exporter Agent installation and configuration on ksqlDB
Default: "{{jmxexporter_enabled}}"
Path on Ansible Controller for ksqlDB jmx config file. Only necessary to set for custom config.
Default: ksql.yml
Destination path for ksqlDB jmx config file
Default: /opt/prometheus/ksql.yml
Port to expose ksqlDB prometheus metrics. Beware of port collisions if colocating components on same host
Default: 8076
Use to copy files from control node to ksqlDB hosts. Set to list of dictionaries with keys: source_path (full path of file on control node) and destination_path (full path to copy file to). Optionally specify directory_mode (default: '0750') and file_mode (default: '0640') to set directory and file permissions.
Default: []
Replication Factor for ksqlDB internal topics. Defaults to the minimum of the number of brokers and can be overridden via default replication factor (see default_internal_replication_factor).
Default: "{{ 3 if ccloud_kafka_enabled|bool else
ksqlDB Service ID. Use when configuring multiple ksqldb clusters in the same inventory file.
Default: default_
Use to set custom ksqlDB properties. This variable is a dictionary. Put values true/false in quotation marks to preserve case. NOTE- ksql.properties is deprecated.
Default: "{{ ksql.properties }}"
Boolean to configure Monitoring Interceptors on ksqlDB.
Default: "{{ monitoring_interceptors_enabled }}"
Use to register and identify your KSQL cluster in the MDS.
Default: ""
Boolean to enable ksqlDB Log Streaming.
Default: false
Set this variable to customize the Linux User that the Control Center Service runs with. Default user is cp-control-center.
Default: "{{control_center_default_user}}"
Set this variable to customize the Linux Group that the Control Center Service user belongs to. Default group is confluent.
Default: "{{control_center_default_group}}"
Port Control Center exposed over
Default: 9021
Interface on host for Control Center to listen on
Default: "0.0.0.0"
Boolean to configure Control Center with TLS Encryption. Also manages Java Keystore creation
Default: "{{ssl_enabled}}"
Control Center Authentication. Available options: [basic, none].
Default: none
Set this variable to customize the directory that Control Center writes log files to. Default location is /var/log/confluent/control-center. NOTE- control_center.appender_log_path is deprecated.
Default: "{{control_center.appender_log_path}}"
Use to copy files from control node to Control Center hosts. Set to list of dictionaries with keys: source_path (full path of file on control node) and destination_path (full path to copy file to). Optionally specify directory_mode (default: '0750') and file_mode (default: '0640') to set directory and file permissions.
Default: []
Replication Factor for Control Center internal topics. Defaults to the minimum of the number of brokers and can be overridden via default replication factor (see default_internal_replication_factor).
Default: "{{ 3 if ccloud_kafka_enabled|bool else
Use to set custom Control Center properties. This variable is a dictionary. Put values true/false in quotation marks to preserve case. NOTE- control_center.properties is deprecated.
Default: "{{ control_center.properties }}"
Dictionary containing additional sasl scram users to be created during provisioning.
Default: {}
Dictionary containing additional sasl scram users to be created during provisioning.
Default: {}
Dictionary containing additional sasl plain users to be created during provisioning.
Default: {}
Dictionary containing additional sasl plain users to be created during provisioning.
Default: {}
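A sketch of one of these user dictionaries; the top-level name sasl_scram_users is an assumption, while the principal/password keys mirror the *_users_final defaults referenced later in this document:
    sasl_scram_users:
      client1:
        principal: client1
        password: changeme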
Boolean to configure Confluent Platform with RBAC enabled. Creates Rolebindings for all components to function
Default: false
Port to expose MDS Server API on
Default: 8090
Boolean to configure TLS encryption on the Broker Rest endpoint. NOTE- mds_ssl_enabled is now deprecated
Default: "{{mds_ssl_enabled}}"
LDAP User which will be granted super user permissions to create role bindings in the MDS
Default: mds
Password to mds_super_user LDAP User
Default: password
LDAP User for Kafkas Embedded Rest Service to authenticate as
Default: "{{mds_super_user}}"
Password to kafka_broker_ldap_user LDAP User
Default: "{{mds_super_user_password}}"
LDAP User for Schema Registry to authenticate as
Default: schema-registry
Password to schema_registry_ldap_user LDAP User
Default: password
LDAP User for Connect to authenticate as
Default: connect
Password to kafka_connect_ldap_user LDAP User
Default: password
LDAP User for ksqlDB to authenticate as
Default: ksql
Password to ksql_ldap_user LDAP User
Default: password
LDAP User for Rest Proxy to authenticate as
Default: kafka-rest
Password to kafka_rest_ldap_user LDAP User
Default: password
LDAP User for Control Center to authenticate as
Default: control-center
Password to control_center_ldap_user LDAP User
Default: password
LDAP User for Confluent Replicator to authenticate as
Default: replicator
Password for kafka_connect_replicator_ldap_user LDAP User
Default: password
LDAP User for Confluent Replicator Consumer to authenticate as
Default: "{{kafka_connect_replicator_ldap_user}}"
Password for kafka_connect_replicator_consumer_ldap_user LDAP User
Default: "{{kafka_connect_replicator_ldap_password}}"
LDAP User for Confluent Replicator Producer to authenticate as
Default: "{{kafka_connect_replicator_ldap_user}}"
Password for kafka_connect_replicator_producer_ldap_user LDAP User
Default: "{{kafka_connect_replicator_ldap_password}}"
LDAP User for Confluent Replicator Monitoring Interceptor to authenticate as
Default: "{{kafka_connect_replicator_ldap_user}}"
Password for kafka_connect_replicator_monitoring_interceptor_ldap_user LDAP User
Default: "{{kafka_connect_replicator_ldap_password}}"
Boolean to describe if kafka group should be configured with an External MDS Kafka Cluster. If set to true, you must also set mds_broker_bootstrap_servers, mds_broker_listener, kafka_broker_rest_ssl_enabled
Default: false
Kafka hosts and listener ports on the Kafka Cluster acting as an external MDS Server. mds_broker_listener dictionary must describe its security settings. Must be configured if external_mds_enabled: true
Default: localhost:9092
Listener Dictionary that describes how kafka clusters connect to MDS Kafka cluster. Make sure it contains the keys: ssl_enabled, ssl_mutual_auth_enabled, sasl_protocol
Default:
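A sketch of pointing the kafka group at an external MDS Kafka cluster, using the variables and listener keys named above (hostnames are illustrative):
    external_mds_enabled: true
    mds_broker_bootstrap_servers: mds-kafka1:9093,mds-kafka2:9093
    mds_broker_listener:
      ssl_enabled: true
      ssl_mutual_auth_enabled: false
      sasl_protocol: plain
    kafka_broker_rest_ssl_enabled: true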
Comma separated urls for mds servers. Only set if external_mds_enabled: true
Default: "{{mds_http_protocol}}://{{ groups['kafka_broker'] | default(['localhost']) | confluent.platform.resolve_hostnames(hostvars) | join(':' + mds_port|string + ',' + mds_http_protocol + '://') }}:{{mds_port}}"
To regenerate MDS Token Pem files on subsequent runs of the playbook, set this to true.
Default: false
List of users to be granted system admin Role Bindings across all components
Default: []
List of users to be granted system admin Role Bindings on the Kafka Cluster
Default: "{{rbac_component_additional_system_admins}}"
List of users to be granted system admin Role Bindings on the Schema Registry Cluster
Default: "{{rbac_component_additional_system_admins}}"
List of users to be granted system admin Role Bindings on the ksqlDB Cluster
Default: "{{rbac_component_additional_system_admins}}"
List of users to be granted system admin Role Bindings on the Connect Cluster
Default: "{{rbac_component_additional_system_admins}}"
List of users to be granted system admin Role Bindings on the Control Center Cluster
Default: "{{rbac_component_additional_system_admins}}"
Boolean to enable secrets protection on all components except Zookeeper
Default: false
Boolean to Recreate Secrets File and Masterkey. Only set to false AFTER first cp-ansible run.
Default: true
Masterkey generated by the Confluent Secret CLI. If empty and secrets protection is enabled, then a master key will be randomly generated.
Default: ""
Security file generated by the Confluent Secret CLI. If empty and secrets protection is enabled, then a security file will be randomly generated.
Default: generated_ssl_files/security.properties
Boolean to encrypt sensitive properties, such as those containing 'password', 'basic.auth.user.info', or 'sasl.jaas.config'.
Default: "{{secrets_protection_enabled}}"
Boolean to enable secrets protection in Kafka broker.
Default: "{{secrets_protection_enabled}}"
Boolean to enable secrets protection on kafka broker client configuration.
Default: "{{secrets_protection_enabled}}"
Boolean to encrypt sensitive properties, such as those containing 'password', 'basic.auth.user.info', or 'sasl.jaas.config' for Kafka.
Default: "{{secrets_protection_encrypt_passwords}}"
List of Kafka client properties to encrypt. Can be used in addition to kafka_broker_client_secrets_protection_encrypt_passwords.
Default: []
Boolean to encrypt sensitive properties, such as those containing 'password', 'basic.auth.user.info', or 'sasl.jaas.config' for Kafka.
Default: "{{secrets_protection_encrypt_passwords}}"
List of Kafka properties to encrypt. Can be used in addition to kafka_broker_secrets_protection_encrypt_passwords.
Default: []
Boolean to enable secrets protection in schema registry.
Default: "{{secrets_protection_enabled}}"
Boolean to encrypt sensitive properties, such as those containing 'password', 'basic.auth.user.info', or 'sasl.jaas.config' for Schema Registry.
Default: "{{secrets_protection_encrypt_passwords}}"
List of Schema Registry properties to encrypt. Can be used in addition to schema_registry_secrets_protection_encrypt_passwords.
Default: []
Boolean to enable secrets protection in Connect.
Default: "{{secrets_protection_enabled}}"
Boolean to encrypt sensitive properties, such as those containing 'password', 'basic.auth.user.info', or 'sasl.jaas.config' for Connect.
Default: "{{secrets_protection_encrypt_passwords}}"
List of Connect properties to encrypt. Can be used in addition to kafka_connect_secrets_protection_encrypt_passwords.
Default: []
Boolean to enable secrets protection in Rest Proxy.
Default: "{{secrets_protection_enabled}}"
Boolean to encrypt sensitive properties, such as those containing 'password', 'basic.auth.user.info', or 'sasl.jaas.config' for Rest Proxy.
Default: "{{secrets_protection_encrypt_passwords}}"
List of Rest Proxy properties to encrypt. Can be used in addition to kafka_rest_secrets_protection_encrypt_passwords.
Default: []
Boolean to enable secrets protection in KSQL.
Default: "{{secrets_protection_enabled}}"
Boolean to encrypt sensitive properties, such as those containing 'password', 'basic.auth.user.info', or 'sasl.jaas.config' for KSQL.
Default: "{{secrets_protection_encrypt_passwords}}"
List of KSQL properties to encrypt. Can be used in addition to ksql_secrets_protection_encrypt_passwords.
Default: []
Boolean to enable secrets protection in Control Center.
Default: "{{secrets_protection_enabled}}"
Boolean to encrypt sensitive properties, such as those containing 'password', 'basic.auth.user.info', or 'sasl.jaas.config' for Control Center.
Default: "{{secrets_protection_encrypt_passwords}}"
List of Control Center properties to encrypt. Can be used in addition to control_center_secrets_protection_encrypt_passwords.
Default: []
Boolean to configure Telemetry. Must also set telemetry_api_key and telemetry_api_secret
Default: false
API Key used by Telemetry. Mandatory variable for Telemetry
Default: ""
API Secret used by Telemetry. Mandatory variable for Telemetry
Default: ""
Proxy URL used by Telemetry. Only set if using a Proxy Server
Default: ""
Username for Proxy Server used by Telemetry. Only set if Proxy Server requires authentication
Default: ""
Password for Proxy Server used by Telemetry. Only set if Proxy Server requires authentication
Default: ""
Boolean to configure Telemetry on Kafka. Must also set telemetry_api_key and telemetry_api_secret
Default: "{{telemetry_enabled}}"
Boolean to send cp-ansible Telemetry Metrics from Kafka. Currently only sends cp-ansible version data
Default: "{{kafka_broker_telemetry_enabled}}"
Boolean to configure Telemetry on Schema Registry. Must also set telemetry_api_key and telemetry_api_secret
Default: "{{telemetry_enabled}}"
Boolean to send cp-ansible Telemetry Metrics from Schema Registry. Currently only sends cp-ansible version data
Default: "{{schema_registry_telemetry_enabled}}"
Boolean to configure Telemetry on Connect. Must also set telemetry_api_key and telemetry_api_secret
Default: "{{telemetry_enabled}}"
Boolean to send cp-ansible Telemetry Metrics from Connect. Currently only sends cp-ansible version data
Default: "{{kafka_connect_telemetry_enabled}}"
Boolean to configure Telemetry on Rest Proxy. Must also set telemetry_api_key and telemetry_api_secret
Default: "{{telemetry_enabled}}"
Boolean to send cp-ansible Telemetry Metrics from Rest Proxy. Currently only sends cp-ansible version data
Default: "{{kafka_rest_telemetry_enabled}}"
Boolean to configure Telemetry on ksqlDB. Must also set telemetry_api_key and telemetry_api_secret
Default: "{{telemetry_enabled}}"
Boolean to send cp-ansible Telemetry Metrics from ksqlDB. Currently only sends cp-ansible version data
Default: "{{ksql_telemetry_enabled}}"
Boolean to configure Telemetry on Control Center. Must also set telemetry_api_key and telemetry_api_secret
Default: "{{telemetry_enabled}}"
Boolean to send cp-ansible Telemetry Metrics from Control Center. Currently only sends cp-ansible version data
Default: "{{control_center_telemetry_enabled}}"
Boolean to configure Kafka to set Audit Logs on an external Kafka Cluster. Must also include audit_logs_destination_bootstrap_servers and audit_logs_destination_listener.
Default: false
Principal used to authenticate to the remote host where audit logs will be written to. This is a mandatory field.
Default: kafka
The URL to the ERP to register access permissions for Audit Logs.
Default: http://localhost:8090
The admin user for the ERP which can set the permissions for Audit Log access.
Default: mds
The password for the admin user on the ERP which can set the permissions for Audit Log access.
Default: password
Set this to the name of your destination kafka cluster.
Default: ""
Kafka hosts and listener ports on the Audit Logs Destination Kafka Cluster. audit_logs_destination_listener dictionary must describe its security settings. Must be configured if audit_logs_destination_enabled: true
Default: localhost:9092
Listener Dictionary that describes how kafka clients connect to Audit Log Destination Kafka cluster. Make sure it contains the keys: ssl_enabled, ssl_mutual_auth_enabled, sasl_protocol.
Default:
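A sketch of the audit-log destination settings, using the variables and listener keys named above (hostnames are illustrative):
    audit_logs_destination_enabled: true
    audit_logs_destination_bootstrap_servers: audit-kafka1:9092,audit-kafka2:9092
    audit_logs_destination_listener:
      ssl_enabled: true
      ssl_mutual_auth_enabled: false
      sasl_protocol: plain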
User for authenticated MDS Health Check. Only relevant if rbac_enabled: true.
Default: "{{mds_super_user}}"
Password for authenticated MDS Health Check. Only relevant if rbac_enabled: true.
Default: "{{mds_super_user_password}}"
User for authenticated Kafka Admin API Health Check.
Default: "{{ mds_super_user if rbac_enabled|bool else kafka_broker_rest_proxy_basic_users.admin.principal }}"
Password for authenticated Kafka Admin API Health Check.
Default: "{{ mds_super_user_password if rbac_enabled|bool else kafka_broker_rest_proxy_basic_users.admin.password }}"
User for authenticated Schema Registry Health Check.
Default: "{{ schema_registry_ldap_user if rbac_enabled|bool else schema_registry_basic_users_final.admin.principal }}"
Password for authenticated Schema Registry Health Check.
Default: "{{ schema_registry_ldap_password if rbac_enabled|bool else schema_registry_basic_users_final.admin.password }}"
User for authenticated Connect Health Check.
Default: "{{ kafka_connect_ldap_user if rbac_enabled|bool else kafka_connect_basic_users.admin.principal }}"
Password for authenticated Connect Health Check. Set if using customized security like Basic Auth.
Default: "{{ kafka_connect_ldap_password if rbac_enabled|bool else kafka_connect_basic_users.admin.password }}"
User for authenticated ksqlDB Health Check. Set if using customized security like Basic Auth.
Default: "{{ ksql_ldap_user if rbac_enabled|bool else ksql_basic_users.admin.principal }}"
Password for authenticated ksqlDB Health Check. Set if using customized security like Basic Auth.
Default: "{{ ksql_ldap_password if rbac_enabled|bool else ksql_basic_users.admin.password }}"
User for authenticated Rest Proxy Health Check. Set if using customized security like Basic Auth.
Default: "{{ kafka_rest_ldap_user if rbac_enabled|bool else kafka_rest_basic_users.admin.principal }}"
Password for authenticated Rest Proxy Health Check. Set if using customized security like Basic Auth.
Default: "{{ kafka_rest_ldap_password if rbac_enabled|bool else kafka_rest_basic_users.admin.password }}"
User for authenticated Control Center Health Check. Set if using customized security like Basic Auth.
Default: "{{ control_center_ldap_user if rbac_enabled|bool else control_center_basic_users.admin.principal }}"
Password for authenticated Control Center Health Check. Set if using customized security like Basic Auth.
Default: "{{ control_center_ldap_password if rbac_enabled|bool else control_center_basic_users.admin.password }}"
Set this variable to customize the Linux Group that the Kafka Connect Replicator Service user belongs to. Default group is confluent.
Default: "{{kafka_connect_replicator_default_group}}"
Variable to define bootstrap servers for Kafka Connect Replicator. Mandatory for Kafka Connect Replicator setup.
Default: localhost:9092
Use to set custom Kafka Connect Replicator properties. This variable is a dictionary. Put values true/false in quotation marks to preserve case.
Default: {}
Set this variable to customize the Cluster ID used by Kafka Connect Replicator.
Default: replicator
Set this variable to customize the Cluster Name registered in the Cluster Registry.
Default: ""
Set this variable to customize the offset starting point for Kafka Connect Replicator.
Default: consumer
Set this variable to customize the topic that Kafka Connect Replicator stores its offsets in.
Default: connect-offsets
Set this variable to customize the topic where Kafka Connect Replicator stores its status.
Default: connect-status
Set this variable to customize the topic where Kafka Connect Replicator stores its configuration.
Default: connect-configs
Set this variable to customize the topic where the Kafka Connect Replicator consumer stores its timestamps.
Default: __consumer_timestamps
Set this variable with a comma-separated list of topics for Kafka Connect Replicator to replicate from. This is a mandatory variable.
Default: ""
Sets whether topics are auto-created on the destination cluster by Kafka Connect Replicator.
Default: true
Boolean to enable health checks on Kafka Connect Replicator.
Default: true
Use to copy files from control node to replicator hosts. Set to list of dictionaries with keys: source_path (full path of file on control node) and destination_path (full path to copy file to)
Default: []
Port Rest API exposed over.
Default: 8083
Sets Kafka Connect Replicator to enable monitoring interceptors for monitoring in Control Center.
Default: true
User for authenticated Connect Health Check. Set if using customized security like Basic Auth.
Default: "{{kafka_connect_replicator_ldap_user}}"
Password for authenticated Connect Health Check. Set if using customized security like Basic Auth.
Default: "{{kafka_connect_replicator_ldap_password}}"
Boolean that determines if a provided keystore and truststore are being used for Kafka Connect Replicator configuration.
Default: false
Boolean to enable mTLS Authentication on Kafka Connect Replicator.
Default: "{{ssl_mutual_auth_enabled}}"
Boolean to enable TLS on Kafka Connect Replicator
Default: "{{ssl_enabled}}"
Set to the location of your TLS CA Certificate when configuring TLS for Kafka Connect Replicator.
Default: ""
Set to the location of your TLS signed certificate when configuring TLS for Kafka Connect Replicator.
Default: ""
Set to the location of your TLS key when configuring TLS for Kafka Connect Replicator.
Default: ""
Set to the password of your TLS key when configuring TLS for Kafka Connect Replicator.
Default: ""
Set to the location of your TLS TrustStore when configuring TLS using Keystores and TrustStores for Kafka Connect Replicator.
Default: ""
Set to the location of your TLS KeyStore when configuring TLS using Keystores and TrustStores for Kafka Connect Replicator.
Default: ""
SCRAM principal for Kafka Connect Replicator to authenticate with.
Default: "{{ sasl_scram_users_final.kafka_connect_replicator.principal }}"
SCRAM password for Kafka Connect Replicator to authenticate with.
Default: "{{ sasl_scram_users_final.kafka_connect_replicator.password }}"
SCRAM 256 principal for Kafka Connect Replicator to authenticate with.
Default: "{{ sasl_scram256_users_final.kafka_connect_replicator.principal }}"
SCRAM 256 password for Kafka Connect Replicator to authenticate with.
Default: "{{ sasl_scram256_users_final.kafka_connect_replicator.password }}"
SASL PLAIN principal for Kafka Connect Replicator to authenticate with.
Default: "{{ sasl_plain_users_final.kafka_connect_replicator.principal }}"
SASL PLAIN password for Kafka Connect Replicator to authenticate with.
Default: "{{ sasl_plain_users_final.kafka_connect_replicator.password }}"
Boolean that defines if the Jolokia agent is enabled on Kafka Connect Replicator.
Default: "{{jolokia_enabled}}"
Sets the auth mode for the Jolokia Agent on Kafka Connect Replicator.
Default: "{{jolokia_auth_mode}}"
Username for Jolokia Agent when using Basic Auth.
Default: "{{jolokia_user}}"
Password for Jolokia Agent when using Basic Auth.
Default: "{{jolokia_password}}"
Port for Jolokia agent for Kafka Connect Replicator.
Default: 7777
Full path to download the Jolokia Agent Jar.
Default: /opt/jolokia/jolokia.jar
Set this variable to customize the directory that Kafka Connect Replicator writes log files to.
Default: /var/log/confluent/kafka-connect-replicator
Set this variable to customize the default log name that Kafka Connect Replicator writes logs to.
Default: kafka-connect-replicator.log
Set this variable to customize the default max number of log files generated by Kafka Connect Replicator.
Default: 10
Set this variable to customize the default max size of a log file generated by Kafka Connect Replicator.
Default: 100mb
Boolean to configure Kafka Connect Replicator to support RBAC. Creates Rolebindings for client to function.
Default: false
Boolean to configure Kafka Connect Replicator to support connecting to ERP with TLS.
Default: false
Boolean to configure Kafka Connect Replicator Consumer to support RBAC. Creates Rolebindings for client to function.
Default: false
Boolean to configure Kafka Connect Replicator Consumer to support connecting to ERP with TLS.
Default: false
Boolean to configure Kafka Connect Replicator Producer to support RBAC. Creates Rolebindings for client to function.
Default: "{{ kakfa_connect_replicator_rbac_enabled }}"
Boolean to configure Kafka Connect Replicator Producer to support connecting to ERP with TLS.
Default: "{{ kafka_connect_replicator_erp_tls_enabled }}"
Boolean to configure Kafka Connect Replicator Monitoring Interceptor to support RBAC. Creates Rolebindings for client to function.
Default: "{{ kakfa_connect_replicator_rbac_enabled }}"
Boolean to configure Kafka Connect Replicator Monitoring Interceptor to support connecting to ERP with TLS.
Default: "{{ kafka_connect_replicator_erp_tls_enabled }}"
The password for the Kafka Connect Replicator TLS truststore.
Default: ""
The password for the Kafka Connect Replicator TLS keystore.
Default: ""
Variable to define the location of the Embedded Rest Proxy for configuring RBAC.
Default: localhost:8090
Set this variable to the user name of the admin user for the Embedded Rest Proxy, to configure RBAC.
Default: ""
Set this variable to the password for the Embedded Rest Proxy user, to configure RBAC.
Default: ""
Set this variable to the Cluster ID for the kafka cluster which you are interacting with.
Default: ""
Set this variable to the Cluster Name when using Cluster Registry for identification.
Default: ""
Set this variable to the path where the public pem file for the ERP server is located.
Default: ""
Set this variable to override the default location of the public pem file for connecting to the ERP when RBAC is enabled.
Default: /var/ssl/private/kafka_connect_replicator/public.pem
Variable to define bootstrap servers for Kafka Connect Replicator Consumer. Mandatory for Kafka Connect Replicator setup.
Default: localhost:9092
Use to set custom Kafka Connect Replicator Consumer properties. This variable is a dictionary. Put values true/false in quotation marks to preserve case.
Default: {}
Boolean that determines if a provided keystore and truststore are being used for the Kafka Connect Replicator consumer configuration.
Default: false
Set to the location of your TLS CA Certificate when configuring TLS for Kafka Connect Replicator Consumer.
Default: "{{kafka_connect_replicator_ssl_ca_cert_path}}"
Set to the location of your TLS signed certificate when configuring TLS for Kafka Connect Replicator Consumer.
Default: "{{kafka_connect_replicator_ssl_cert_path}}"
Set to the location of your TLS key when configuring TLS for Kafka Connect Replicator Consumer.
Default: "{{kafka_connect_replicator_ssl_key_path}}"
Set to the password of your TLS key when configuring TLS for Kafka Connect Replicator Consumer.
Default: "{{kafka_connect_replicator_ssl_key_password}}"
Set to the location of your TLS TrustStore when configuring TLS using Keystores and TrustStores for Kafka Connect Replicator Consumer.
Default: "{{kafka_connect_replicator_ssl_truststore_file_path}}"
Set to the location of your TLS KeyStore when configuring TLS using Keystores and TrustStores for Kafka Connect Replicator Consumer.
Default: "{{kafka_connect_replicator_consumer_ssl_keystore_file_path}}"
SCRAM principal for the Consumer to authenticate with.
Default: "{{ sasl_scram_users_final.kafka_connect_replicator.principal }}"
SCRAM password for the Consumer to authenticate with.
Default: "{{ sasl_scram_users_final.kafka_connect_replicator.password }}"
SCRAM 256 principal for the Consumer to authenticate with.
Default: "{{ sasl_scram256_users_final.kafka_connect_replicator.principal }}"
SCRAM 256 password for the Consumer to authenticate with.
Default: "{{ sasl_scram256_users_final.kafka_connect_replicator.password }}"
SASL PLAIN principal for the Consumer to authenticate with.
Default: "{{ sasl_plain_users_final.kafka_connect_replicator.principal }}"
SASL PLAIN password for the Consumer to authenticate with.
Default: "{{ sasl_plain_users_final.kafka_connect_replicator.password }}"
The password for the Kafka Connect Replicator Consumer TLS truststore.
Default: "{{ kafka_connect_replicator_truststore_storepass }}"
The password for the Kafka Connect Replicator Consumer TLS keystore. Defaults to match kafka_connect_replicator_keystore_storepass.
Default: "{{ kafka_connect_replicator_keystore_storepass }}"
Variable to define the location of the Embedded Rest Proxy for configuring RBAC.
Default: localhost:8090
Set this variable to the user name of the admin user for the Embedded Rest Proxy, to configure RBAC.
Default: ""
Set this variable to the password for the Embedded Rest Proxy user, to configure RBAC.
Default: ""
Set this variable to the Cluster ID for the kafka cluster which you are interacting with.
Default: ""
Set this variable to the Cluster Name when using Cluster Registry for identification.
Default: ""
Set this variable to the path where the public pem file for the ERP server is located.
Default: ""
Set this variable to override the default location of the public pem file for connecting to the ERP when RBAC is enabled.
Default: /var/ssl/private/kafka_connect_replicator_consumer/public.pem
Variable to define bootstrap servers for Kafka Connect Replicator Producer. Mandatory for Kafka Connect Replicator setup.
Default: localhost:9092
Use to set custom Kafka Connect Replicator Producer properties. This variable is a dictionary. Put values true/false in quotation marks to preserve case.
Default: {}
Boolean that determines if a provided keystore and truststore are being used for the Kafka Connect Replicator producer configuration.
Default: false
Set to the location of your TLS CA Certificate when configuring TLS for Kafka Connect Replicator Producer.
Default: "{{kafka_connect_replicator_ssl_ca_cert_path}}"
Set to the location of your TLS signed certificate when configuring TLS for Kafka Connect Replicator Producer.
Default: "{{kafka_connect_replicator_ssl_cert_path}}"
Set to the location of your TLS key when configuring TLS for Kafka Connect Replicator Producer.
Default: "{{kafka_connect_replicator_ssl_key_path}}"
Set to the password of your TLS key when configuring TLS for Kafka Connect Replicator Producer.
Default: "{{kafka_connect_replicator_ssl_key_password}}"
Set to the location of your TLS TrustStore when configuring TLS using Keystores and TrustStores for Kafka Connect Replicator Producer.
Default: "{{kafka_connect_replicator_ssl_truststore_file_path}}"
Set to the location of your TLS KeyStore when configuring TLS using Keystores and TrustStores for Kafka Connect Replicator Producer.
Default: "{{kafka_connect_replicator_ssl_keystore_file_path}}"
SCRAM principal for the Producer to authenticate with.
Default: "{{ sasl_scram_users_final.kafka_connect_replicator.principal }}"
SCRAM password for the Producer to authenticate with.
Default: "{{ sasl_scram_users_final.kafka_connect_replicator.password }}"
SCRAM 256 principal for the Producer to authenticate with.
Default: "{{ sasl_scram256_users_final.kafka_connect_replicator.principal }}"
SCRAM 256 password for the Producer to authenticate with.
Default: "{{ sasl_scram256_users_final.kafka_connect_replicator.password }}"
SASL PLAIN principal for the Producer to authenticate with.
Default: "{{ sasl_plain_users_final.kafka_connect_replicator.principal }}"
SASL PLAIN password for the Producer to authenticate with.
Default: "{{ sasl_plain_users_final.kafka_connect_replicator.password }}"
The password for the Kafka Connect Replicator Producer TLS truststore. Defaults to match kafka_connect_replicator_truststore_storepass.
Default: "{{ kafka_connect_replicator_truststore_storepass}}"
The password for the Kafka Connect Replicator Producer TLS keystore. Defaults to match kafka_connect_replicator_keystore_storepass.
Default: "{{ kafka_connect_replicator_keystore_storepass }}"
Variable to define the location of the Embedded Rest Proxy for configuring RBAC. Defaults to kafka_connect_replicator_erp_host.
Default: "{{ kafka_connect_replicator_erp_host }}"
Set this variable to the username of the admin user for the Embedded Rest Proxy, to configure RBAC. Defaults to kafka_connect_replicator_erp_admin_user.
Default: "{{ kafka_connect_replicator_erp_admin_user }}"
Set this variable to the password for the Embedded Rest Proxy user, to configure RBAC. Defaults to match kafka_connect_replicator_erp_admin_password.
Default: "{{ kafka_connect_replicator_erp_admin_password }}"
Set this variable to the Cluster ID for the Kafka cluster you are interacting with. Defaults to match kafka_connect_replicator_kafka_cluster_id.
Default: "{{ kafka_connect_replicator_kafka_cluster_id }}"
Set this variable to the Cluster Name when using Cluster Registry for identification.
Default: "{{ kafka_connect_replicator_kafka_cluster_name }}"
Set this variable to the path where the public pem file for the ERP server is located.
Default: "{{ kafka_connect_replicator_erp_pem_file }}"
Set this variable to override the default location of the public pem file for connecting to the ERP when RBAC is enabled.
Default: "{{ kafka_connect_replicator_rbac_enabled_public_pem_path }}"
Variable to define bootstrap servers for Kafka Connect Replicator Monitoring Interceptors.
Default: localhost:9092
Use to set custom Kafka Connect Replicator Monitoring Interceptor properties. This variable is a dictionary. Put true/false values in quotation marks to preserve case.
Default: {}
Boolean that determines if a provided keystore and truststore are being used for the Kafka Connect Replicator Monitoring Interceptor configuration.
Default: false
Set to the location of your TLS CA Certificate when configuring TLS for Kafka Connect Replicator Monitoring Interceptor.
Default: "{{kafka_connect_replicator_ssl_ca_cert_path}}"
Set to the location of your TLS signed certificate when configuring TLS for Kafka Connect Replicator Monitoring Interceptor.
Default: "{{kafka_connect_replicator_ssl_cert_path}}"
Set to the location of your TLS key when configuring TLS for Kafka Connect Replicator Monitoring Interceptor.
Default: "{{kafka_connect_replicator_ssl_key_path}}"
Set to the password of your TLS key when configuring TLS for Kafka Connect Replicator Monitoring Interceptor.
Default: "{{kafka_connect_replicator_ssl_key_password}}"
Set to the location of your TLS TrustStore when configuring TLS using Keystores and TrustStores for Kafka Connect Replicator Monitoring Interceptor.
Default: "{{kafka_connect_replicator_ssl_truststore_file_path}}"
Set to the location of your TLS KeyStore when configuring TLS using Keystores and TrustStores for Kafka Connect Replicator Monitoring Interceptor.
Default: "{{kafka_connect_replicator_ssl_keystore_file_path}}"
Defines the path to the keytab required for Kerberos authentication of the Monitoring Interceptors.
Default: "{{ kafka_connect_replicator_monitoring_interceptor_kerberos_keytab_path }}"
SCRAM principal for the Monitoring Interceptor to authenticate with.
Default: "{{ sasl_scram_users_final.kafka_connect_replicator.principal}}"
SCRAM password for the Monitoring Interceptor to authenticate with.
Default: "{{ sasl_scram_users_final.kafka_connect_replicator.password }}"
SCRAM 256 principal for the Monitoring Interceptor to authenticate with.
Default: "{{ sasl_scram256_users_final.kafka_connect_replicator.principal}}"
SCRAM 256 password for the Monitoring Interceptor to authenticate with.
Default: "{{ sasl_scram256_users_final.kafka_connect_replicator.password }}"
SASL PLAIN principal for the Monitoring Interceptor to authenticate with.
Default: "{{ sasl_plain_users_final.kafka_connect_replicator.principal }}"
SASL PLAIN password for the Monitoring Interceptor to authenticate with.
Default: "{{ sasl_plain_users_final.kafka_connect_replicator.password }}"
The password for the Kafka Connect Replicator Monitoring Interceptor TLS truststore. Defaults to match kafka_connect_replicator_truststore_storepass.
Default: "{{ kafka_connect_replicator_truststore_storepass}}"
The password for the Kafka Connect Replicator Monitoring Interceptor TLS keystore. Defaults to match kafka_connect_replicator_keystore_storepass.
Default: "{{ kafka_connect_replicator_keystore_storepass }}"
Variable to define the location of the Embedded Rest Proxy for configuring RBAC. Defaults to kafka_connect_replicator_erp_host.
Default: "{{ kafka_connect_replicator_erp_host }}"
Set this variable to the username of the admin user for the Embedded Rest Proxy, to configure RBAC. Defaults to kafka_connect_replicator_erp_admin_user.
Default: "{{ kafka_connect_replicator_erp_admin_user }}"
Set this variable to the password for the Embedded Rest Proxy user, to configure RBAC. Defaults to match kafka_connect_replicator_erp_admin_password.
Default: "{{ kafka_connect_replicator_erp_admin_password }}"
Set this variable to the Cluster ID for the Kafka cluster you are interacting with. Defaults to match kafka_connect_replicator_kafka_cluster_id.
Default: "{{ kafka_connect_replicator_kafka_cluster_id }}"
Set this variable to the Cluster Name when using Cluster Registry for identification.
Default: "{{ kafka_connect_replicator_kafka_cluster_name }}"
Set this variable to the path where the public pem file for the ERP server is located.
Default: "{{ kafka_connect_replicator_erp_pem_file }}"
Set this variable to override the default location of the public pem file for connecting to the ERP when RBAC is enabled.
Default: "{{ kafka_connect_replicator_rbac_enabled_public_pem_path }}"
Deployment strategy for all components. Set to rolling to run all provisioning tasks on one host at a time; this is less disruptive but can fail when security modes are updated.
Default: parallel
Deployment strategy for Zookeeper. Set to parallel to run all provisioning tasks in parallel on all hosts, which may cause downtime.
Default: "{{deployment_strategy}}"
Deployment strategy for Kafka. Set to parallel to run all provisioning tasks in parallel on all hosts, which may cause downtime.
Default: "{{deployment_strategy}}"
Deployment strategy for Connect. Set to parallel to run all provisioning tasks in parallel on all hosts, which may cause downtime.
Default: "{{deployment_strategy}}"
Deployment strategy for Rest Proxy. Set to parallel to run all provisioning tasks in parallel on all hosts, which may cause downtime.
Default: "{{deployment_strategy}}"
Deployment strategy for ksqlDB. Set to parallel to run all provisioning tasks in parallel on all hosts, which may cause downtime.
Default: "{{deployment_strategy}}"
Deployment strategy for Control Center. Set to parallel to run all provisioning tasks in parallel on all hosts, which may cause downtime.
Default: "{{deployment_strategy}}"
Kafka Connect Replicator reconfiguration pattern. Set to parallel to reconfigure all hosts at once, which will cause downtime.
Default: "{{deployment_strategy}}"
Boolean to Pause Rolling Deployment after each Node starts up for all Components.
Default: false
Boolean to Pause Rolling Deployment after each Zookeeper Node starts up.
Default: "{{pause_rolling_deployment}}"
Boolean to Pause Rolling Deployment after each Kafka Node starts up.
Default: "{{pause_rolling_deployment}}"
Boolean to Pause Rolling Deployment after each Schema Registry Node starts up.
Default: "{{pause_rolling_deployment}}"
Boolean to Pause Rolling Deployment after each Connect Node starts up.
Default: "{{pause_rolling_deployment}}"
Boolean to Pause Rolling Deployment after each Rest Proxy Node starts up.
Default: "{{pause_rolling_deployment}}"
Boolean to Pause Rolling Deployment after each ksqlDB Node starts up.
Default: "{{pause_rolling_deployment}}"
Boolean to Pause Rolling Deployment after each Control Center Node starts up.
Default: "{{pause_rolling_deployment}}"
Boolean to Pause Rolling Deployment after each Kafka Connect Replicator Node starts up.
Default: "{{pause_rolling_deployment}}"
Boolean to configure components to connect to Confluent Cloud Kafka. Must also set ccloud_kafka_bootstrap_servers, ccloud_kafka_key, and ccloud_kafka_secret. zookeeper and kafka_broker groups should not be in inventory.
Default: false
Bootstrap servers for Confluent Cloud Kafka.
Default: localhost:9092
Boolean to skip truststore creation and configuration. Signifies that Kafka's certificates were signed by a public certificate authority.
Default: "{{ccloud_kafka_enabled}}"
Boolean to configure components to connect to Confluent Cloud Schema Registry. Must also set ccloud_schema_registry_url, ccloud_schema_registry_key, and ccloud_schema_registry_secret. schema_registry group should not be in inventory.
Default: false
URL for Confluent Cloud Schema Registry.
Default: https://localhost:8081
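Put together, pointing cp-ansible at Confluent Cloud instead of self-managed Kafka and Schema Registry might look like the following sketch under all.vars. Endpoints, keys, and secrets are placeholders, and ccloud_schema_registry_enabled is an assumed name inferred from the ccloud_kafka_enabled pattern:

  ccloud_kafka_enabled: true
  ccloud_kafka_bootstrap_servers: pkc-xxxxx.us-west-2.aws.confluent.cloud:9092
  ccloud_kafka_key: <api-key>
  ccloud_kafka_secret: <api-secret>
  ccloud_schema_registry_enabled: true     # assumed name, see ccloud_kafka_enabled above
  ccloud_schema_registry_url: https://psrc-xxxxx.us-east-2.aws.confluent.cloud
  ccloud_schema_registry_key: <sr-api-key>
  ccloud_schema_registry_secret: <sr-api-secret>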
Below are the supported variables for the role common
Configures package repositories on hosts. By default, Confluent's deb/yum repositories are configured. Possible options: none, confluent, custom. Must also set custom_yum_repofile_filepath or custom_apt_repo_filepath if using custom. Note: the vars custom_apt_repo and custom_yum_repofile are deprecated.
Default: "{{'custom' if custom_apt_repo|bool or custom_yum_repofile else 'confluent'}}"
Full path on control node to custom yum repo file, must also set repository_configuration to custom
Default: ""
Full path on control node to custom apt repo file, must also set repository_configuration to custom
Default: ""
Base URL for Confluent's RPM and Debian Package Repositories
Default: "https://packages.confluent.io"
Base URL for Confluent C/C++ Clients RPM and Debian Package Repositories
Default: "https://packages.confluent.io"
Boolean to have cp-ansible install Java on hosts
Default: true
Java Package to install on RHEL/CentOS hosts. Possible values: java-1.8.0-openjdk or java-11-openjdk
Default: java-11-openjdk
Java Package to install on Debian hosts. Possible values: openjdk-8-jdk or openjdk-11-jdk
Default: openjdk-11-jdk
Java Package to install on Ubuntu hosts. Possible values: openjdk-8-jdk or openjdk-11-jdk
Default: openjdk-11-jdk
Deb Repository to use for Java Installation
Default: ppa:openjdk-r/ppa
Version of Jolokia Agent Jar to Download
Default: 1.6.2
Full URL used for the Jolokia Agent jar download. When jolokia_url_remote=false, this represents a path on the Ansible control host.
Version of JmxExporter Agent Jar to Download
Default: 0.12.0
Full URL used for the Prometheus Exporter jar download. When jmxexporter_url_remote=false, this represents a path on the Ansible control host.
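When target hosts cannot download the agents directly, both jars can be staged on the control node and copied out. A sketch; jolokia_jar_url and jmxexporter_jar_url are assumed names for the jar-location variables described above, and the jar versions match the listed defaults:

  jolokia_url_remote: false
  jolokia_jar_url: /path/on/control/node/jolokia-jvm-1.6.2-agent.jar                # assumed variable name
  jmxexporter_url_remote: false
  jmxexporter_jar_url: /path/on/control/node/jmx_prometheus_javaagent-0.12.0.jar    # assumed variable name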
A path reference to a local archive file or URL. By default this is the URL from Confluent's repositories. In an ansible-pull deployment this could be set to a local file such as "~/.ansible/pull/{{inventory_hostname}}/{{confluent_archive_file_name}}".
Default: "{{confluent_common_repository_baseurl}}/archive/{{confluent_repo_version}}/confluent{{'' if confluent_server_enabled else '-community'}}-{{confluent_package_version}}.tar.gz"
Set to true to indicate the archive file is remote (i.e. already on the target node) or a URL. Set to false if the archive file is on the control node.
Default: true
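In an ansible-pull or otherwise offline deployment, the archive can sit on the control node (or in the pull checkout) with the remote flag turned off. A sketch; installation_method, confluent_archive_file_source, and confluent_archive_file_remote are assumed variable names for the settings described above, and the path reuses the ansible-pull example given earlier:

  installation_method: archive                 # assumed name for the package/archive switch
  confluent_archive_file_source: "~/.ansible/pull/{{inventory_hostname}}/{{confluent_archive_file_name}}"   # assumed name
  confluent_archive_file_remote: false         # assumed name; archive lives on the control node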
Base URL for Confluent CLI packages
Default: "https://s3-us-west-2.amazonaws.com/confluent.cloud"
A path reference to a local archive file or URL. By default this is the URL from Confluent CLI repository.
Default: "{{confluent_cli_repository_baseurl}}/confluent-cli/archives/{{confluent_cli_version}}/{{confluent_cli_binary}}{{(confluent_cli_version == 'latest') | ternary('', 'v')}}{{confluent_cli_version}}{{ansible_system|lower}}_{{confluent_cli_goarch[ansible_architecture]}}.tar.gz"
Set to true to indicate the CLI archive file is remote (i.e. already on the target node) or a URL. Set to false if the archive file is on the control node.
Default: true
Below are the supported variables for the role control_center
Boolean to reconfigure Control Center's logging with RollingFileAppender and log cleanup
Default: "{{ custom_log4j }}"
Root logger within Control Center's log4j config. Only honored if control_center_custom_log4j: true
Default: "INFO, main"
Max number of log files generated by Control Center. Only honored if control_center_custom_log4j: true
Default: 10
Max size of a log file generated by Control Center. Only honored if control_center_custom_log4j: true
Default: 100MB
Custom Java Args to add to the Control Center Process
Default: ""
Full Path to the RocksDB Data Directory. If left as empty string, cp-ansible will not configure RocksDB
Default: ""
Overrides to the Service Section of Control Center Systemd File. This variable is a dictionary.
Default:
Environment Variables to be added to the Control Center Service. This variable is a dictionary.
Default:
Overrides to the Unit Section of Control Center Systemd File. This variable is a dictionary.
Default:
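The three overrides dictionaries above are rendered into a systemd drop-in for the Control Center unit. A sketch, assuming variable names of the form control_center_service_overrides, control_center_service_environment_overrides, and control_center_service_unit_overrides (the exact names are not listed here, so treat them as assumptions):

  control_center_service_overrides:                 # assumed name; keys land in the [Service] section
    LimitNOFILE: 200000
  control_center_service_environment_overrides:     # assumed name; rendered as Environment= entries
    CONTROL_CENTER_HEAP_OPTS: "-Xms4g -Xmx4g"
  control_center_service_unit_overrides:            # assumed name; keys land in the [Unit] section
    Wants: network-online.target

The equivalent dictionaries exist for the other components under their own role prefixes.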
Time in seconds to wait before starting Control Center Health Checks.
Default: 30
Below are the supported variables for the role kafka_broker
Boolean to reconfigure Kafka's logging with RollingFileAppender and log cleanup
Default: "{{ custom_log4j }}"
Root logger within Kafka's log4j config. Only honored if kafka_broker_custom_log4j: true
Default: "INFO, stdout, kafkaAppender"
Max number of log files generated by Kafka Broker. Only honored if kafka_broker_custom_log4j: true
Default: 10
Max size of a log file generated by Kafka Broker. Only honored if kafka_broker_custom_log4j: true
Default: 100MB
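As an example, tightening broker log retention might look like the sketch below; kafka_broker_custom_log4j appears above, while the root-logger, file-count, and file-size variable names are assumptions following the same prefix:

  kafka_broker_custom_log4j: true
  kafka_broker_log4j_root_logger: "WARN, stdout, kafkaAppender"   # assumed name
  kafka_broker_max_log_files: 5                                   # assumed name
  kafka_broker_log_file_size: 50MB                                # assumed name

The same trio of log4j settings exists for the other components under their own prefixes (zookeeper_, kafka_connect_, kafka_rest_, ksql_, schema_registry_, control_center_).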
Custom Java Args to add to the Kafka Process
Default: ""
Overrides to the Service Section of Kafka Systemd File. This variable is a dictionary.
Default:
Environment Variables to be added to the Kafka Service. This variable is a dictionary.
Default:
Overrides to the Unit Section of Kafka Systemd File. This variable is a dictionary.
Default:
Time in seconds to wait before starting Kafka Health Checks.
Default: 20
Time in seconds to wait before JMX exporter starts serving metrics. Any requests within the delay period will result in an empty metrics set.
Default: 60
Below are the supported variables for the role kafka_connect
Boolean to reconfigure Kafka Connect's logging with the RollingFileAppender and log cleanup functionality.
Default: "{{ custom_log4j }}"
Root logger within Kafka Connect's log4j config. Only honored if kafka_connect_custom_log4j: true
Default: "INFO, stdout, connectAppender"
Max number of log files generated by Kafka Connect. Only honored if kafka_connect_custom_log4j: true
Default: 10
Max size of a log file generated by Kafka Connect. Only honored if kafka_connect_custom_log4j: true
Default: 100MB
Custom Java Args to add to the Connect Process
Default: ""
Overrides to the Service Section of Connect Systemd File. This variable is a dictionary.
Default:
Environment Variables to be added to the Connect Service. This variable is a dictionary.
Default:
Overrides to the Unit Section of Connect Systemd File. This variable is a dictionary.
Default:
Time in seconds to wait before starting Connect Health Checks.
Default: 30
Below are the supported variables for the role kafka_rest
Boolean to reconfigure Rest Proxy's logging with RollingFileAppender and log cleanup
Default: "{{ custom_log4j }}"
Root logger within Rest Proxy's log4j config. Only honored if kafka_rest_custom_log4j: true
Default: "INFO, stdout, file"
Max number of log files generated by Rest Proxy. Only honored if kafka_rest_custom_log4j: true
Default: 10
Max size of a log file generated by Rest Proxy. Only honored if kafka_rest_custom_log4j: true
Default: 100MB
Custom Java Args to add to the Rest Proxy Process
Default: ""
Overrides to the Service Section of Rest Proxy Systemd File. This variable is a dictionary.
Default:
Environment Variables to be added to the Rest Proxy Service. This variable is a dictionary.
Default:
Overrides to the Unit Section of Rest Proxy Systemd File. This variable is a dictionary.
Default:
Time in seconds to wait before starting Rest Proxy Health Checks.
Default: 15
Below are the supported variables for the role ksql
Boolean to reconfigure ksqlDB's logging with the RollingFileAppender and log cleanup functionality.
Default: "{{ custom_log4j }}"
Root logger within ksqlDB's log4j config. Only honored if ksql_custom_log4j: true
Default: "INFO, stdout, main"
Max number of log files generated by ksqlDB. Only honored if ksql_custom_log4j: true
Default: 5
Max size of a log file generated by ksqlDB. Only honored if ksql_custom_log4j: true
Default: 10MB
Custom Java Args to add to the ksqlDB Process
Default: ""
Full Path to the RocksDB Data Directory. If set as empty string, cp-ansible will not configure RocksDB
Default: /tmp/ksqldb
Overrides to the Service Section of ksqlDB Systemd File. This variable is a dictionary.
Default:
Environment Variables to be added to the ksqlDB Service. This variable is a dictionary.
Default:
Overrides to the Unit Section of ksqlDB Systemd File. This variable is a dictionary.
Default:
Time in seconds to wait before starting ksqlDB Health Checks.
Default: 20
Below are the supported variables for the role schema_registry
Boolean to reconfigure Schema Registry's logging with RollingFileAppender and log cleanup
Default: "{{ custom_log4j }}"
Root logger within Schema Registry's log4j config. Only honored if schema_registry_custom_log4j: true
Default: "INFO, stdout, file"
Max number of log files generated by Schema Registry. Only honored if schema_registry_custom_log4j: true
Default: 10
Max size of a log file generated by Schema Registry. Only honored if schema_registry_custom_log4j: true
Default: 100MB
Custom Java Args to add to the Schema Registry Process
Default: ""
Overrides to the Service Section of Schema Registry Systemd File. This variable is a dictionary.
Default:
Environment Variables to be added to the Schema Registry Service. This variable is a dictionary.
Default:
Overrides to the Unit Section of Schema Registry Systemd File. This variable is a dictionary.
Default:
Time in seconds to wait before starting Schema Registry Health Checks.
Default: 15
Below are the supported variables for the role zookeeper
Boolean to reconfigure Zookeeper's logging with RollingFileAppender and log cleanup
Default: "{{ custom_log4j }}"
Root logger within Zookeeper's log4j config. Only honored if zookeeper_custom_log4j: true
Default: "INFO, stdout, zkAppender"
Max number of log files generated by Zookeeper. Only honored if zookeeper_custom_log4j: true
Default: 10
Max size of a log file generated by Zookeeper. Only honored if zookeeper_custom_log4j: true
Default: 100MB
Custom Java Args to add to the Zookeeper Process
Default: ""
Overrides to the Service Section of Zookeeper Systemd File. This variable is a dictionary.
Default:
Environment Variables to be added to the Zookeeper Service. This variable is a dictionary.
Default:
Overrides to the Unit Section of Zookeeper Systemd File. This variable is a dictionary.
Default:
Time in seconds to wait before starting Zookeeper Health Checks.
Default: 5
Below are the supported variables for the role kafka_connect_replicator
Boolean to reconfigure Kafka Connect Replicator's logging with the RollingFileAppender and log cleanup functionality.
Default: "{{ custom_log4j }}"
Root logger within Kafka Connect Replicator's log4j config. Only honored if kafka_connect_replicator_custom_log4j: true
Default: "INFO, replicatorAppender, stdout"
Custom Java Args to add to the Kafka Connect Replicator Process
Default: ""
Overrides to the Service Section of Kafka Connect Replicator Systemd File. This variable is a dictionary.
Default:
Environment Variables to be added to the Kafka Connect Replicator Service. This variable is a dictionary.
Default:
Overrides to the Unit Section of Kafka Connect Replicator Systemd File. This variable is a dictionary.
Default:
Time in seconds to wait before starting Kafka Connect Replicator Health Checks.
Default: 30
Below are the supported variables for the role ssl
Key Algorithm used by keytool -genkeypair command when creating Keystores. Only used with self-signed certs
Default: RSA
Key Size used by keytool -genkeypair command when creating Keystores. Only used with self-signed certs
Default: 2048