
Setup Kafka Cluster for Multi/Distributed Servers/Brokers

To set up a Kafka cluster with multiple brokers/servers on a single machine, follow the steps below.

In the example below we create a Kafka cluster with three brokers on a single machine. The steps are the same as for a Kafka cluster with a single server on the same machine; in addition, we create two more configuration files for the extra brokers and run them in the same cluster.

Download and Installation

Download the latest version of Kafka from the download link, copy it to the installation directory, and run the command below to extract it:

tar -zxvf kafka_2.11-0.10.0.0.tgz

Configuration Changes for Zookeeper and Server

Make the changes below in the zookeeper.properties configuration file in the config directory.

config/zookeeper.properties

clientPort=2181

clientPort is the port on which clients connect. By default it is 2181; if you change the port in zookeeper.properties, you must also update zookeeper.connect in server.properties below.
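For example, if ZooKeeper is moved to another port (2185 here is only an illustrative value), the two files must agree:

config/zookeeper.properties
# illustrative port; the default is 2181
clientPort=2185

config/server.properties
zookeeper.connect=localhost:2185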

Make the changes below in the server.properties configuration file in the config directory.

config/server.properties:

broker.id=0
listeners=PLAINTEXT://:9092
log.dir=/tmp/kafka-logs
zookeeper.connect=localhost:2181

By default, the server.properties file contains the fields above with their default values.

broker.id: A unique id by which ZooKeeper recognizes each broker in the Kafka cluster. If the cluster has multiple servers, assign broker ids in incremental order.

listeners: Each broker runs on a different port. The default broker port is 9092 and can be changed.

log.dir: The path where Kafka stores its stream records (log segments). By default it points to /tmp/kafka-logs.

For more server.properties settings, follow the link Kafka Server Properties Configuration.

Multi Server/Broker

To create three brokers, make two more copies of the server.properties configuration file, as server1.properties and server2.properties, and make the changes below so that the configuration is ready for three brokers in the Kafka cluster.

Create copies of the server.properties file:

cp config/server.properties config/server1.properties
cp config/server.properties config/server2.properties

Make the changes below in each corresponding configuration file.

config/server1.properties:

broker.id=1
listeners=PLAINTEXT://:9093
log.dir=/tmp/kafka-logs-1
zookeeper.connect=localhost:2181

config/server2.properties:

broker.id=2
listeners=PLAINTEXT://:9094
log.dir=/tmp/kafka-logs-2
zookeeper.connect=localhost:2181

Start Zookeeper and Servers

Run the commands below from the Kafka directory:

screen -d -m bin/zookeeper-server-start.sh config/zookeeper.properties
screen -d -m bin/kafka-server-start.sh config/server.properties
screen -d -m bin/kafka-server-start.sh config/server1.properties
screen -d -m bin/kafka-server-start.sh config/server2.properties
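Each process above starts in a detached screen session. To list the sessions and reattach to one (for example, to watch its console output), you can use the commands below; the session id is the one printed by screen -ls:

screen -ls
screen -r <session-id>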

Check status of Zookeeper & Servers

The commands below return the process ids of ZooKeeper and the broker servers:

ps aux | grep zookeeper.properties
ps aux | grep server.properties
ps aux | grep server1.properties
ps aux | grep server2.properties

Kafka is now ready to create topics and to publish and subscribe messages.

Create a Topic and Check Status

Create a topic with a user-defined name by passing the replication factor and the number of partitions for the topic. For more about how partitions are stored in a Kafka cluster environment, follow the link Kafka Introduction and Architecture.

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic multi-test

Result:
Created topic "multi-test".

The command above creates the topic multi-test with 1 partition and a replication factor of 3.
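If more partitions are needed later, the partition count can be increased (it cannot be decreased) with the same script. A quick sketch against the topic created above:

bin/kafka-topics.sh --alter --zookeeper localhost:2181 --topic multi-test --partitions 3

Note that the tool warns about doing this on keyed topics, because adding partitions changes how keys map to partitions.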

List of available Topics in Zookeeper

Run the command below to get the list of topics:

bin/kafka-topics.sh --list --zookeeper localhost:2181

Result:
test
multi-test

Description of Topic

bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic multi-test

Result:
Topic:multi-test    PartitionCount:1   ReplicationFactor:3     Configs:
Topic: multi-test   Partition: 0    Leader: 2       Replicas: 2,0,1 Isr: 2,0,1

In the response above, the first line gives a summary of all the partitions; each additional line provides information about one partition. There is only one such line for this topic because it has one partition.

  • “leader” is the broker responsible for all reads and writes for the given partition. Each broker will be the leader for a randomly selected portion of the partitions.
  • “replicas” is the list of brokers that replicate the log for this partition regardless of whether they are the leader or even if they are currently alive.
  • “isr” is the set of “in-sync” replicas. This is the subset of the replicas list that is currently alive and caught-up to the leader.

In the example above, broker 2 is the leader for the topic's single partition, and replicas of this partition are stored on brokers 0 and 1. Any message published to the topic is written to the leader on broker 2 first and then replicated to brokers 0 and 1.

All requests for this topic are handled by broker 2; if that broker fails for some reason, such as a shutdown, the next in-sync replica, broker 0 here, becomes the leader. In the example below I stopped broker 2 and ran the command again; the leader is now shown as 0.

bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic multi-test
Topic:multi-test     PartitionCount:1       ReplicationFactor:3     Configs:
Topic: multi-test    Partition: 0    Leader: 0       Replicas: 2,0,1 Isr: 0,1
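To bring broker 2 back, start it again and describe the topic once more; the broker rejoins the in-sync replica set, and with automatic leader rebalancing enabled (the default) leadership should eventually move back to broker 2, the preferred leader:

screen -d -m bin/kafka-server-start.sh config/server2.properties
bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic multi-test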

Publish Messages to Topic

To test the topic, publish messages to it by running the command below:

bin/kafka-console-producer.sh --broker-list localhost:9092 --topic multi-test

Input Messages:
Hi Dear
How r u doing?
Where are u these days?

After they are published to the topic, these messages are retained according to the log retention configured for the server, whether or not a consumer has read them. For information on retention policy configuration, follow the link Kafka Server Properties Configuration.
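As a sketch, retention is controlled in config/server.properties by properties like the ones below; 168 hours (7 days) is the stock default, and the size-based value shown is only illustrative:

# keep log segments for 7 days (default)
log.retention.hours=168
# optional per-partition size limit (illustrative; unlimited if unset)
log.retention.bytes=1073741824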

Subscribe Messages by Consumer from Topic

Run the command below to get all published messages from the multi-test topic. It returns all messages from the beginning.

bin/kafka-console-consumer.sh --zookeeper localhost:2181 --from-beginning --topic multi-test

Output Messages:
Hi Dear
How r u doing?
Where are u these days?
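The command above uses the older ZooKeeper-based console consumer. On Kafka 0.10 you should also be able to read through the brokers directly with the new consumer:

bin/kafka-console-consumer.sh --new-consumer --bootstrap-server localhost:9092 --from-beginning --topic multi-test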

Read More on Kafka

Integration

Integrate Filebeat, Kafka, Logstash, Elasticsearch and Kibana


Setup Kafka Cluster for Single Server/Broker

To set up a Kafka cluster with a single broker, follow the steps below.

Download and Installation

Download the latest version of Kafka from the download link, copy it to the installation directory, and run the command below to extract it:

tar -zxvf kafka_2.11-0.10.0.0.tgz

Configuration Changes for Zookeeper and Server

Make the changes below in the zookeeper.properties configuration file in the config directory.

config/zookeeper.properties

clientPort=2181

clientPort is the port on which clients connect. By default it is 2181; if you change the port in zookeeper.properties, you must also update zookeeper.connect in server.properties below.

Make the changes below in the server.properties configuration file in the config directory.

config/server.properties:

broker.id=0
listeners=PLAINTEXT://:9092
log.dir=/tmp/kafka-logs
zookeeper.connect=localhost:2181

By default, the server.properties file contains the fields above with their default values.

broker.id: A unique id by which ZooKeeper recognizes each broker in the Kafka cluster. If the cluster has multiple servers, assign broker ids in incremental order.

listeners: Each broker runs on a different port. The default broker port is 9092 and can be changed.

log.dir: The path where Kafka stores its stream records (log segments). By default it points to /tmp/kafka-logs.

For more server.properties settings, follow the link Kafka Server Properties Configuration.

Start Zookeeper and Server

Run the commands below from the Kafka directory:

screen -d -m bin/zookeeper-server-start.sh config/zookeeper.properties
screen -d -m bin/kafka-server-start.sh config/server.properties

Check status of Zookeeper & Server

The commands below return the process ids of ZooKeeper and the broker server:

ps aux | grep zookeeper.properties
ps aux | grep server.properties

Kafka is now ready to create topics and to publish and subscribe messages.

Create a Topic and Check Status

Create a topic with a user-defined name by passing the replication factor and the number of partitions for the topic. For more about how partitions are stored in a Kafka cluster environment, follow the link Kafka Introduction and Architecture.

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test

Result:
Created topic "test".

The command above creates the topic test with 1 partition and a replication factor of 1.

List of available Topics in Zookeeper

Run the command below to get the list of topics:

bin/kafka-topics.sh --list --zookeeper localhost:2181

Result:
test

Description of Topic

bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test

Result:
Topic:test      PartitionCount:1        ReplicationFactor:1     Configs:
Topic: test     Partition: 0    Leader: 0       Replicas: 0     Isr: 0

In the response above, the first line gives a summary of all the partitions; each additional line provides information about one partition. There is only one such line for this topic because it has one partition.

  • “leader” is the broker responsible for all reads and writes for the given partition. Each broker will be the leader for a randomly selected portion of the partitions.
  • “replicas” is the list of brokers that replicate the log for this partition regardless of whether they are the leader or even if they are currently alive.
  • “isr” is the set of “in-sync” replicas. This is the subset of the replicas list that is currently alive and caught-up to the leader.

In the example above, broker 0 is the leader for the topic's single partition. The topic has no additional replicas and lives on broker 0 only, because there is a single server in the cluster.

Publish Messages to Topic

To test the topic, publish messages to it by running the command below:

bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test

Input Messages:
Hi Dear
How r u doing?
Where are u these days?

After they are published to the topic, these messages are retained according to the log retention configured for the server, whether or not a consumer has read them. For information on retention policy configuration, follow the link Kafka Server Properties Configuration.

Subscribe Messages by Consumer from Topic

Run the command below to get all published messages from the test topic. It returns all messages from the beginning.

bin/kafka-console-consumer.sh --zookeeper localhost:2181 --from-beginning --topic test

Output Messages:

Hi Dear
How r u doing?
Where are u these days?

Read More on Kafka

Integration

Integrate Filebeat, Kafka, Logstash, Elasticsearch and Kibana

Kafka Introduction and Architecture

Kafka is an open source, distributed stream-processing and message broker platform written in Java and Scala, developed by the Apache Software Foundation.

Kafka is widely used in enterprise infrastructure to process streaming data and transaction logs in real time. It provides a unified, fault-tolerant, high-throughput, low-latency platform for handling real-time data feeds.

Important Points about Kafka:

  • Publish/subscribe messaging system.
  • Robust queue able to handle high volumes of data.
  • Works for online and offline message consumption.
  • In a Kafka cluster each server/node acts as a broker; each broker is responsible for published records and may hold zero or more partitions per topic.
  • Each record in a partition consists of a key, a value, and a timestamp.
  • Kafka uses a TCP protocol for communication between clients and servers.
  • Kafka provides Producer, Consumer, Streams, and Connector Java APIs to publish to and consume from topics.

Initial Release: January, 2011

Current Release: 0.10.2.0

Kafka Cluster Architecture

Before discussing the Kafka cluster architecture, let's introduce the Kafka terminology; it will make the architecture and flow easier to understand.

Broker

A broker is a stateless instance of a Kafka server in the cluster. We identify a broker by giving each server instance a unique id. A Kafka cluster can have multiple broker instances, and each broker can handle hundreds of thousands of read and write requests, or terabytes of messages per second, without performance impact.

Zookeeper

A Kafka cluster uses ZooKeeper to manage and coordinate brokers. Producers and consumers are notified when a new broker is added to the cluster or when one fails, so they can decide which available broker to point to.
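You can inspect the broker ids ZooKeeper is tracking with the zookeeper-shell tool that ships with Kafka; on the three-broker setup from the first post this should list ids 0, 1, and 2:

bin/zookeeper-shell.sh localhost:2181 ls /brokers/ids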

Topic

A topic is a category that keeps streams of records published to it. A topic can have zero, one, or many consumers reading its data. Topics can be created by an application or manually.

A topic stores its data in partitions, which are distributed over the servers according to the number of partitions configured per topic and the available brokers.

Partition

A partition stores records in sequential order and is continually appended to. Each record in a partition has a sequential id number called the offset. An individual log partition can scale up to the capacity of a single server.

How is a topic partitioned across brokers/servers/nodes?

Suppose we need to create a topic with N partitions on a Kafka cluster having M brokers; see the example after this list.

If (M==N): Each broker will have one partition.

If (M>N): The first N available brokers will take one partition each.

If (M<N): Some brokers will have more than one partition.
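For example, on the three-broker cluster from the first post (M=3), creating a topic with three partitions (N=3, so M==N) should place one partition on each broker; the topic name part-demo is only illustrative:

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 3 --topic part-demo
bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic part-demo

The describe output should show three partition lines, each with a different broker id as Leader.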

The Kafka cluster retains the records in these partitions according to the retention policy configured in the server.properties file, whether or not they have been consumed; by default retention is seven days (log.retention.hours=168), and it can be adjusted based on storage capacity. Kafka's performance is not affected by data size, because it reads and writes data by offset.

Kafka Cluster Architecture with Multiple Distributed Servers

Details of the Kafka cluster with multiple/distributed servers:

Kafka Cluster: Has three servers, each running a broker with id 1, 2, or 3.

Zookeeper: Runs alongside the Kafka cluster, keeps track of broker availability, and updates producers and consumers.

Brokers: Brokers 1, 2, and 3, which hold topics T1, T2, and T3 stored in partitions.

Topics: Topics T1 and T2 have 3 partitions each, distributed over servers 1, 2, and 3, while topic T3 has 1 partition stored on server 3 only.

Partitions: Each topic partition holds a different number of records, from offset 0 up to some value, where offset 0 holds the oldest record.

Producers: APP1, APP2, and APP3 write to topics T1, T2, and T3, which are created by applications or manually.

Consumers: Topic T3 is consumed by applications APP5 and APP6, while topic T1 is consumed by APP4 and T2 by APP5 only. One topic can be consumed by multiple applications.

How does the Kafka cluster flow work for producers and consumers?

I will divide the architecture above into two parts, “Producer to Kafka Cluster” and “Kafka Cluster to Consumer”, because producers and consumers run in parallel and independently of each other.

Producer to Kafka Cluster

  • Create a topic manually or by application, with the desired partition and replica configuration.
  • The producer connects to the Kafka cluster with a topic name. The cluster checks ZooKeeper for available brokers and sends broker ids back to the producer.
  • The producer publishes messages to the available broker, which stores them in sequential order in a partition. If anything changes among the Kafka cluster servers, such as a server being added or failing, ZooKeeper updates the producer.
  • If replication is configured for the topic, a copy of each partition is kept on another server for fault tolerance.

Kafka Cluster to Consumer

  • The consumer points to a topic on the Kafka cluster, as required by the application.
  • The consumer subscribes to records from the topic based on the required offset (such as from the beginning, from now, or from the last offset).
  • If the consumer wants records from now, ZooKeeper sends the offset value from which to start reading records from the brokers' partitions.
  • If the required offset does not exist in the broker partition the consumer was reading from, ZooKeeper returns an available broker id with partition details to the consumer.
  • If a broker goes down while the consumer is reading records from it, ZooKeeper sends an available broker id with partition details to the consumer.

Kafka Cluster with a Single Server: All partitions for each topic are created on the same server.

Kafka Cluster with Multiple/Distributed Servers: Topic partition logs are distributed across all the servers in the Kafka cluster, and each server handles data and requests for its share of partitions. If replication is configured, the servers keep the configured number of copies of each partition log, distributed across servers for fault tolerance.

How does the Kafka cluster load-balance across multiple or distributed servers?

For each topic partition log, one server/broker acts as the “leader” while the others are followers (in a multi-server/distributed setup). The leader handles all read and write requests from producers and consumers, while the followers replicate the leader's partition. If the leader fails or its machine goes down, one of the followers becomes the leader and the remaining servers stay followers. For more detail, see Kafka Cluster with multiple servers on the same machine.

Read More on Kafka

Integration

Integrate Filebeat, Kafka, Logstash, Elasticsearch and Kibana