Set Up a Kafka Cluster with Multiple/Distributed Servers/Brokers


To set up a Kafka cluster with multiple brokers/servers on a single machine, follow the steps below.

In the example below we will create a Kafka cluster with three brokers on a single machine. The steps are the same as for a Kafka cluster with a single server on the same machine; additionally, we create two more configuration files for the extra brokers and run them in the same cluster.

Download and Installation

Download the latest version of Kafka from the download link, copy it to the installation directory, and run the command below to extract it.

tar -zxvf kafka_2.11-0.10.0.0.tgz

Configuration Changes for Zookeeper and Server

Make the changes below in the zookeeper.properties configuration file in the config directory.

config/zookeeper.properties

clientPort=2181

clientPort is the port where clients connect. The default port is 2181; if you change it in zookeeper.properties, you must update zookeeper.connect in server.properties below as well.

Make the changes below in the server.properties configuration file in the config directory.

config/server.properties:

broker.id=0
listeners=PLAINTEXT://:9092
log.dir=/tmp/kafka-logs
zookeeper.connect=localhost:2181

By default, the server.properties file contains the above fields with these default values.

broker.id : A unique id by which ZooKeeper recognizes each broker in the Kafka cluster. If the cluster has multiple servers, broker ids are assigned in incremental order.

listeners : Each broker runs on a different port; the default broker port is 9092 and it can be changed.

log.dir : The path where Kafka stores its stream records (log segments). By default it points to /tmp/kafka-logs.

For more server.properties options, follow the link Kafka Server Properties Configuration.

Multiple Servers/Brokers:

To create three brokers, make two more copies of the server.properties configuration file as server1.properties and server2.properties, and make the changes below so that the configuration is ready for three brokers in the Kafka cluster.

Create copies of the server.properties file.

cp config/server.properties config/server1.properties
cp config/server.properties config/server2.properties

Make the changes below in the corresponding configuration files.

config/server1.properties:

broker.id=1
listeners=PLAINTEXT://:9093
log.dir=/tmp/kafka-logs-1
zookeeper.connect=localhost:2181

config/server2.properties:

broker.id=2
listeners=PLAINTEXT://:9094
log.dir=/tmp/kafka-logs-2
zookeeper.connect=localhost:2181
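The copy-and-edit steps above can also be scripted. A minimal sketch, assuming the default base configuration shown earlier and the same port/path numbering convention (broker n gets port 9092+n and log dir /tmp/kafka-logs-n):

```python
# Sketch: derive per-broker config files from a base server.properties.
# The base text and numbering scheme are assumptions matching this tutorial.
BASE = """broker.id=0
listeners=PLAINTEXT://:9092
log.dir=/tmp/kafka-logs
zookeeper.connect=localhost:2181
"""

def broker_config(base: str, n: int) -> str:
    """Return the config for broker n: bump the id, the listener port,
    and the log directory; leave every other line untouched."""
    out = []
    for line in base.splitlines():
        if line.startswith("broker.id="):
            line = f"broker.id={n}"
        elif line.startswith("listeners="):
            line = f"listeners=PLAINTEXT://:{9092 + n}"
        elif line.startswith("log.dir="):
            line = f"log.dir=/tmp/kafka-logs-{n}"
        out.append(line)
    return "\n".join(out) + "\n"

for n in (1, 2):
    print(f"--- server{n}.properties ---")
    print(broker_config(BASE, n))
```

Writing the returned strings to config/server1.properties and config/server2.properties produces the same files as the manual edits above.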

Start Zookeeper and Servers

Run the commands below from the Kafka directory.

screen -d -m bin/zookeeper-server-start.sh config/zookeeper.properties
screen -d -m bin/kafka-server-start.sh config/server.properties
screen -d -m bin/kafka-server-start.sh config/server1.properties
screen -d -m bin/kafka-server-start.sh config/server2.properties

Check status of Zookeeper & Servers

The commands below return the process ids of the ZooKeeper and server processes.

ps aux | grep zookeeper.properties
ps aux | grep server.properties
ps aux | grep server1.properties
ps aux | grep server2.properties
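Besides checking for the processes with ps, you can probe the listener ports directly. A minimal sketch, assuming the default ports configured above (2181 for ZooKeeper, 9092–9094 for the three brokers):

```python
import socket

def is_listening(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if something accepts TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Ports are assumptions matching the configuration in this tutorial.
for name, port in [("zookeeper", 2181), ("broker0", 9092),
                   ("broker1", 9093), ("broker2", 9094)]:
    print(name, "up" if is_listening("localhost", port) else "down")
```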

Now Kafka is ready to create topics and to publish and subscribe messages.

Create a Topic and Check Status

Create a topic with a user-defined name, passing the replication factor and the number of partitions for the topic. For more about how partitions are stored in a Kafka cluster, follow the link Kafka Introduction and Architecture.

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic multi-test

Result:
Created topic "multi-test".

The above command creates a topic multi-test with 1 partition and a replication factor of 3.

List Available Topics in ZooKeeper

Run the command below to get the list of topics.

bin/kafka-topics.sh --list --zookeeper localhost:2181

Result:
test
multi-test

Description of Topic

bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic multi-test

Result:
Topic:multi-test    PartitionCount:1   ReplicationFactor:3     Configs:
Topic: multi-test   Partition: 0    Leader: 2       Replicas: 2,0,1 Isr: 2,0,1

In the above command's output, the first line gives a summary of all the partitions, and each additional line gives information about one partition. We have only one such line for this topic because it has one partition.

  • “leader” is the broker responsible for all reads and writes for the given partition. Each broker will be the leader for a randomly selected portion of the partitions.
  • “replicas” is the list of brokers that replicate the log for this partition regardless of whether they are the leader or even if they are currently alive.
  • “isr” is the set of “in-sync” replicas. This is the subset of the replicas list that is currently alive and caught-up to the leader.

In the above example broker 2 is the leader for the topic's single partition, and replicas of this partition are stored on broker 0 and then broker 1. Any message published to the topic is written to the leader (broker 2) first and then replicated to brokers 0 and 1 in sequence.

All requests for this topic are handled by broker 2. If the leader is busy or fails for some reason, such as a shutdown, one of the in-sync replicas becomes the leader. In the example below I stopped broker 2 and ran the describe command again; the leader is now showing as 0.

bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic multi-test
Topic:multi-test     PartitionCount:1       ReplicationFactor:3     Configs:
Topic: multi-test    Partition: 0    Leader: 0       Replicas: 2,0,1 Isr: 0,1
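The failover shown above follows a simple rule: the new leader is the first broker in the replica assignment list that is still in the ISR. A small sketch of reading a per-partition describe line and applying that rule (a simplification — in a real cluster the controller performs the election):

```python
def parse_partition_line(line: str) -> dict:
    """Parse one per-partition line of `kafka-topics.sh --describe` output,
    in the whitespace-separated "Key: value" format shown above."""
    tokens = line.split()
    d = dict(zip(tokens[::2], tokens[1::2]))  # pair up "Key:" / value tokens
    return {
        "topic": d["Topic:"],
        "partition": int(d["Partition:"]),
        "leader": int(d["Leader:"]),
        "replicas": [int(x) for x in d["Replicas:"].split(",")],
        "isr": [int(x) for x in d["Isr:"].split(",")],
    }

def elect_leader(replicas, isr):
    """Preferred-leader rule: the first broker in the replica
    assignment list that is still in the in-sync set."""
    for b in replicas:
        if b in isr:
            return b
    return None  # no in-sync replica left

# The describe line printed after broker 2 was stopped:
info = parse_partition_line(
    "Topic: multi-test   Partition: 0   Leader: 0   Replicas: 2,0,1 Isr: 0,1")
print(elect_leader(info["replicas"], info["isr"]))  # 0, matching the output above
```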

Publish Messages to Topic

To test the topic, publish messages to it by running the command below.

bin/kafka-console-producer.sh --broker-list localhost:9092 --topic multi-test

Input Messages:
Hi Dear
How r u doing?
Where are u these days?

After these messages are published to the topic, they are retained according to the log retention configured for the server, whether or not a consumer has read them. For information about retention policy configuration, follow the link Kafka Server Properties Configuration.
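For reference, the retention-related settings as shipped in the default server.properties look like this (values shown are the defaults):

```
# how long to keep a log segment before deleting it
log.retention.hours=168
# maximum size of a single log segment file
log.segment.bytes=1073741824
# how often to check segments for deletion, in ms
log.retention.check.interval.ms=300000
```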

Consume Messages from the Topic

Run the command below to get all published messages from the multi-test topic. It returns all messages from the beginning.

bin/kafka-console-consumer.sh --zookeeper localhost:2181 --from-beginning --topic multi-test

Output Messages:
Hi Dear
How r u doing?
Where are u these days?

Read More on Kafka

Integration

Integrate Filebeat, Kafka, Logstash, Elasticsearch and Kibana

About Saurabh Gupta

My name is Saurabh Gupta. I have approximately 10 years of experience in the Information Technology world, mainly in Java/J2EE. During this time I have worked with multiple organizations and different clients, and with many technologies, frameworks, etc.