
# kafka-cluster-cookbook


Application cookbook which installs and configures Apache Kafka.

Apache Kafka is publish-subscribe messaging rethought as a distributed commit log. This cookbook takes a simplified approach towards configuring and installing Apache Kafka.

It is important to note that Apache Zookeeper is a required component of an Apache Kafka cluster deployment. We have developed a Zookeeper cluster cookbook which takes the same simplified approach and works seamlessly with this one.

## Basic Usage

This cookbook was designed from the ground up to make it dead simple to install and configure an Apache Kafka cluster using Chef. It also highlights several of our best practices for developing reusable Chef cookbooks at Bloomberg.

This cookbook provides node attributes which can be used to fine-tune the default recipe, which installs and configures Kafka. The values from the node attributes are passed directly into the configuration and service resources.
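For instance, broker properties can be set from a wrapper cookbook's attributes file under the `['kafka-cluster']['config']['properties']` attribute path shown in the recipe example below. The sketch here is illustrative: `num.partitions` and `log.dirs` are standard Kafka broker settings, but the values are placeholders, not recommendations.

```ruby
# Illustrative attributes-file sketch: broker properties flow through
# node attributes into the generated Kafka configuration.
# The values below are examples only.
default['kafka-cluster']['config']['properties']['num.partitions'] = 3
default['kafka-cluster']['config']['properties']['log.dirs'] = '/var/lib/kafka'
```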

Out of the box the following platforms are certified to work and are tested using our Test Kitchen configuration. Additional platforms may work, but your mileage may vary.

- CentOS (RHEL) 7.x
- Ubuntu LTS 14.04, 16.04, 18.04

The correct way to use this cookbook is to create a wrapper cookbook which configures all of the members of the Apache Kafka cluster. This includes reading the Zookeeper ensemble (cluster) configuration and passing that into Kafka as a parameter. In this example we use our Zookeeper Cluster cookbook to configure the ensemble on the same nodes.

```ruby
bag = data_bag_item('config', 'zookeeper-cluster')[node.chef_environment]
node.default['zookeeper-cluster']['config']['instance_name'] = node['ipaddress']
node.default['zookeeper-cluster']['config']['ensemble'] = bag['ensemble']
include_recipe 'zookeeper-cluster::default'

node.default['kafka-cluster']['config']['properties']['broker.id'] = node['ipaddress'].rpartition('.').last
node.default['kafka-cluster']['config']['properties']['zookeeper.connect'] = bag['ensemble'].map { |m| "#{m}:2181" }.join(',').concat('/kafka')
include_recipe 'kafka-cluster::default'
```

In the above example the Zookeeper ensemble configuration is read in from a data bag. This is our suggested method when deploying with our Zookeeper Cluster cookbook. If you already have your own Zookeeper ensemble, feel free to format the zookeeper.connect string appropriately.
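When building the connect string by hand, plain Ruby like the following produces the expected comma-separated `host:port` list with a chroot suffix. The hostnames here are placeholders, and the trailing `/kafka` chroot is optional; it simply namespaces Kafka's znodes within the ensemble.

```ruby
# Build a zookeeper.connect string from a list of ensemble hosts.
# Placeholder hostnames; 2181 is ZooKeeper's default client port.
ensemble = ['zk1.example.com', 'zk2.example.com', 'zk3.example.com']
zookeeper_connect = ensemble.map { |host| "#{host}:2181" }.join(',') + '/kafka'
puts zookeeper_connect
# => zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181/kafka
```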