
Kafka Server Properties Configuration

Kafka provides server-level properties for configuring the broker, socket server, Zookeeper, buffering, retention, etc.

broker.id: The broker id, a unique integer value for each broker in the Kafka cluster.

broker.id=0

Socket Server Settings:

listeners: The address the socket server listens on; the default value is PLAINTEXT://:9092. If the host name is not configured, it will take the value returned from java.net.InetAddress.getCanonicalHostName().

Format: security_protocol://host_name:port

listeners=PLAINTEXT://:9092

advertised.listeners: The listener the broker will advertise to producers and consumers. If this value is not set, it falls back to the value of listeners.

Example: PLAINTEXT://your.host.name:9092

advertised.listeners=PLAINTEXT://:9092

num.network.threads: The number of threads handling network requests.

num.network.threads=3

num.io.threads: The number of threads handling disk I/O.

num.io.threads=8

socket.send.buffer.bytes: The send buffer size used by the socket server.

socket.send.buffer.bytes=102400

socket.receive.buffer.bytes: The receive buffer size used by the socket server.

socket.receive.buffer.bytes=102400

socket.request.max.bytes: The maximum size of a request that the socket server will accept.

socket.request.max.bytes=104857600

Log Basics

log.dirs: A comma-separated list of directories under which to store log files.

log.dirs=/tmp/kafka-logs

num.partitions: The default number of log partitions per topic. More partitions allow greater parallelism for consumption, but this also results in more files across the brokers.

num.partitions=1

num.recovery.threads.per.data.dir: The number of threads per data directory to be used for log recovery at startup and flushing at shutdown. It is recommended to increase this value for installations with data directories located in a RAID array.

num.recovery.threads.per.data.dir=1

Log Flush Policy

Messages are immediately written to the filesystem, but by default we only fsync() to sync the OS cache lazily. The following configurations control the flush of data to disk. There are a few important trade-offs here:

  1. Durability: Unflushed data may be lost if you are not using replication.
  2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur, as there will be a lot of data to flush.
  3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.

The settings below allow one to configure the flush policy to flush data after a period of time or every N messages (or both). This can be done globally and overridden on a per-topic basis.

The number of messages to accept before forcing a flush of data to disk.

log.flush.interval.messages=10000

The maximum amount of time a message can sit in a log before we force a flush.

log.flush.interval.ms=1000
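
These flush settings can also be overridden for a single topic. A minimal sketch, assuming a topic named test, Zookeeper at localhost:2181, and that this Kafka version's kafka-configs.sh tool is used for the topic-level overrides:

# topic-level override: flush the "test" topic every 10000 messages or every 1000 ms
bin/kafka-configs.sh --zookeeper localhost:2181 --alter --entity-type topics --entity-name test --add-config flush.messages=10000,flush.ms=1000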

Log Retention Policy

The following configurations control the disposal of log segments. The policy can be set to delete segments after a period of time, or after a given size has accumulated. A segment will be deleted whenever either of these criteria are met. Deletion always happens from the end of the log.

log.retention.hours: The minimum age of a log file to be eligible for deletion.

log.retention.hours=168

log.retention.bytes: A size-based retention policy for logs. Segments are pruned from the log as long as the remaining segments don’t drop below log.retention.bytes.

log.retention.bytes=1073741824

log.segment.bytes: The maximum size of a log segment file. When this size is reached a new log segment will be created.

log.segment.bytes=1073741824

log.retention.check.interval.ms: The interval at which log segments are checked to see if they can be deleted according to the retention policies.

log.retention.check.interval.ms=300000
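
Retention can likewise be overridden per topic. A hedged sketch, again assuming a topic named test and the kafka-configs.sh tool, that keeps records on that one topic for only 24 hours:

# topic-level override: retain records on the "test" topic for 86400000 ms (24 hours)
bin/kafka-configs.sh --zookeeper localhost:2181 --alter --entity-type topics --entity-name test --add-config retention.ms=86400000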

Zookeeper

zookeeper.connect: The Zookeeper connection string, a comma-separated list of host:port pairs, each corresponding to a Zookeeper server, e.g. “127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002”. An optional chroot string can also be appended to the URLs to specify the root directory for all Kafka znodes.

zookeeper.connect=localhost:2181
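
For example, to keep all Kafka znodes under a /kafka chroot (an illustrative path, not a default), append it to the connection string; the same chroot must be used by every broker in the cluster:

zookeeper.connect=localhost:2181/kafka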

zookeeper.connection.timeout.ms: Timeout in ms for connecting to zookeeper.

zookeeper.connection.timeout.ms=6000


Setup Kafka Cluster for Single Server/Broker

To set up a Kafka cluster with a single broker, follow the steps below:

Download and Installation

Download the latest version of Kafka from the download link, copy it to the installation directory, and run the below command to extract it.

tar -zxvf kafka_2.11-0.10.0.0.tgz

Configuration Changes for Zookeeper and Server

Make the below changes in the zookeeper.properties configuration file in the config directory.

config/zookeeper.properties

clientPort=2181

clientPort is the port where clients will connect. By default the port is 2181; if the port is changed in zookeeper.properties, it also has to be updated in the server.properties below (in zookeeper.connect).
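
For example, if the Zookeeper port were changed to 2182 (an illustrative value), both files would need to agree:

# config/zookeeper.properties
clientPort=2182

# config/server.properties
zookeeper.connect=localhost:2182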

Make the below changes in the server.properties configuration file in the config directory.

config/server.properties:

broker.id=0
listeners=PLAINTEXT://:9092
log.dirs=/tmp/kafka-logs
zookeeper.connect=localhost:2181

By default, the server.properties file has the above fields with these default values.

broker.id: Represents the broker's unique id, by which Zookeeper recognizes brokers in the Kafka cluster. If the Kafka cluster has multiple servers, the broker id is incremented for each additional server.

listeners: Each broker runs on a different port; the default port for a broker is 9092 and it can be changed.

log.dirs: The path where Kafka will store stream records (log segments). By default it points to /tmp/kafka-logs.

For more properties in the server.properties file, follow the link Kafka Server Properties Configuration.
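
If this single-broker setup is later extended to more brokers on the same machine, each additional broker needs its own copy of server.properties with a unique broker id, port, and log directory. A minimal sketch for a hypothetical second broker (file name and values are illustrative):

# config/server-1.properties (a copy of config/server.properties)
broker.id=1
listeners=PLAINTEXT://:9093
log.dirs=/tmp/kafka-logs-1
zookeeper.connect=localhost:2181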

Start Zookeeper and Server

Run the below commands from the Kafka directory:

screen -d -m bin/zookeeper-server-start.sh config/zookeeper.properties
screen -d -m bin/kafka-server-start.sh config/server.properties

Check status of Zookeeper & Server

The below commands will show the running Zookeeper and Kafka server processes:

ps aux | grep zookeeper.properties
ps aux | grep server.properties
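
To confirm the listening ports themselves (2181 for Zookeeper and 9092 for Kafka by default), a quick check with netstat on Linux can be used as well, for example:

# list listening TCP ports and filter for the Zookeeper and Kafka defaults
netstat -tln | grep -E '2181|9092'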

Now Kafka is ready to create topics and to publish and subscribe messages.

Create a Topic and Check Status

Create a topic with a user-defined name by passing the replication factor and the number of partitions for the topic. For more info about how partitions are stored in a Kafka cluster environment, follow the link for Kafka Introduction and Architecture.

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test

Result:
Created topic "test".

The above command creates the topic test with 1 partition and a replication factor of 1.

List of available Topics in Zookeeper

Run the below command to get the list of topics:

bin/kafka-topics.sh --list --zookeeper localhost:2181

Result:
test

Description of Topic

bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test

Result:
Topic:test      PartitionCount:1        ReplicationFactor:1     Configs:
Topic: test     Partition: 0    Leader: 0       Replicas: 0     Isr: 0

In the above command's response, the first line gives a summary of all the partitions, and each additional line provides information about one partition. We have only one line for this topic because there is one partition.

  • “leader” is the broker responsible for all reads and writes for the given partition. Each broker will be the leader for a randomly selected portion of the partitions.
  • “replicas” is the list of brokers that replicate the log for this partition regardless of whether they are the leader or even if they are currently alive.
  • “isr” is the set of “in-sync” replicas. This is the subset of the replicas list that is currently alive and caught-up to the leader.

In the above example, broker 0 is the leader for the single partition of the topic, and that partition has a single replica on broker 0, because the cluster has only one server.

Publish Messages to Topic

To test the topic, publish messages to it by running the below command:

bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test

Input Messages:
Hi Dear
How r u doing?
Where are u these days?

After being published to the topic, these messages are retained as per the log retention configured for the server, whether or not they have been read by a consumer. For information about retention policy configuration, follow the link Kafka Server Properties Configuration.

Subscribe Messages by Consumer from Topic

Run the below command to get all published messages from the test topic. It will return all messages from the beginning.

bin/kafka-console-consumer.sh --zookeeper localhost:2181 --from-beginning --topic test

Output Messages:

Hi Dear
How r u doing?
Where are u these days?


Kafka Introduction and Architecture

Kafka is an open-source, distributed, stream-processing message broker platform written in Java and Scala and developed by the Apache Software Foundation.

Kafka is massively used in enterprise infrastructure to process stream data or transaction logs in real time. Kafka provides a unified, fault-tolerant, high-throughput, low-latency platform for handling real-time data feeds.

Important Points about Kafka:

  • Publish/subscribe messaging system.
  • Robust queue able to handle a high volume of data.
  • Works for online and offline message consumption.
  • In a Kafka cluster, each server/node works as a broker, and each broker is responsible for published records and may have zero or more partitions per topic.
  • Each record in a partition consists of a key, a value, and a timestamp.
  • Kafka uses the TCP protocol to communicate between clients and servers.
  • Kafka provides Producer, Consumer, Streams, and Connector Java APIs to publish to and consume from topics.

Initial Release: January, 2011

Current Release: 0.10.2.0

Kafka Cluster Architecture

Before discussing the Kafka cluster architecture, let's introduce the Kafka terminology, which will make it easier to understand the architecture and flow.

Broker

A broker is a stateless instance of a Kafka server in the cluster. We define a broker by giving each server instance a unique id. A Kafka cluster can have multiple broker instances, and each broker can handle hundreds of thousands of read and write requests, or terabytes of messages per second, without any performance impact.

Zookeeper

The Kafka cluster uses Zookeeper for managing and coordinating brokers. Producers and consumers are notified if a new broker is added to the cluster or if one fails, so that they can decide which available broker to point to.

Topic

A topic is a category that keeps streams of records which are published to it. A topic can have zero, one, or many consumers reading its data. We can create topics from an application or manually.

Topic data is stored in partitions and distributed over servers based on the number of partitions configured per topic and the available brokers.

Partition

A partition stores records in sequential order and is continually appended to. Each record in a partition has a sequential id number called the offset. An individual log partition is limited by the capacity of the single server that hosts it, which is why a topic scales by using multiple partitions.

How is a topic partitioned across brokers/servers/nodes?

Suppose we need to create a topic with N partitions for a Kafka cluster having M brokers.

If (M == N): Each broker will have one partition.

If (M > N): The first N available brokers will take one partition each.

If (M < N): Some brokers will have more than one partition.
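
For example, using the same kafka-topics.sh command shown earlier (with a hypothetical topic name), creating a topic with 6 partitions on a 3-broker cluster would leave each broker with 2 partitions:

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 6 --topic six-partition-topic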

The Kafka cluster retains records in these partitions according to the retention policy configured in the server.properties file, whether or not they have been consumed; by default retention is configured for 7 days (168 hours) and can be modified based on our storage capacity. Kafka's performance is not impacted by data size because it reads and writes data based on offset values.

Kafka Cluster Architecture with Multiple/Distributed Servers

Details of this Kafka cluster architecture with multiple/distributed servers:

Kafka Cluster: Has three servers, and each server has a corresponding broker with id 1, 2, or 3.

Zookeeper: Zookeeper runs alongside the Kafka cluster, keeps track of broker availability, and updates producers and consumers.

Brokers: Brokers 1, 2, and 3 hold the topics T1, T2, and T3, stored in partitions.

Topics: Topics T1 and T2 have 3 partitions each, distributed over servers 1, 2, and 3, while topic T3 has 1 partition, stored on server 3 only.

Partitions: Each partition of a topic holds a different number of records, from offset 0 up to some value, where 0 represents the oldest record.

Producers: APP1, APP2, and APP3 write to the different topics T1, T2, and T3, which are created by the applications or manually.

Consumers: Topic T3 is consumed by applications APP5 and APP6, while topic T1 is consumed by APP4 and T2 is consumed by APP5 only. One topic can be consumed by multiple applications.

How does the Kafka cluster flow work for producers and consumers?

I will divide the above architecture into two parts, "Producer to Kafka Cluster" and "Kafka Cluster to Consumer", because producers and consumers run in parallel and independently of each other.

Producer to Kafka Cluster

  • Create a topic manually or from an application, with configuration for partitions and replicas.
  • The producer connects to the Kafka cluster with a topic name. The Kafka cluster checks with Zookeeper for an available broker and sends the broker id to the producer.
  • The producer publishes messages to the available broker, which stores them in sequential order in a partition. If anything changes among the Kafka cluster's servers, such as a server being added or failing, Zookeeper updates the producer.
  • If replication is configured for the topic, a copy of each partition is kept on another server for fault tolerance (see the sketch below).
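
As a sketch of the replication point above, assuming a cluster with at least two brokers and an illustrative topic name, a topic can be created with a replication factor of 2 so that each partition has a copy on a second broker:

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 2 --partitions 3 --topic replicated-topic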

Kafka Cluster to Consumer

  • The consumer points to a topic on the Kafka cluster, as required by the application.
  • The consumer subscribes to records from the topic based on the required offset value (e.g. from the beginning, from now, or from the last read position); see the example after this list.
  • If the consumer wants records from now, Zookeeper sends the offset value to the consumer so that it can start reading records from the brokers' partitions.
  • If the required offset does not exist in the broker partition the consumer was reading from, Zookeeper returns an available broker id with partition details to the consumer.
  • If a broker goes down while the consumer is reading records from it, Zookeeper sends an available broker id with partition details to the consumer.
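
To illustrate the offset choice above with the same console consumer used earlier: omitting --from-beginning makes the consumer read only records published from now on, while including it replays the topic from the start.

# read only new records (from "now")
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test

# replay all records from the beginning
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --from-beginning --topic test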

Kafka Cluster with a Single Server: All the partitions of a topic are created on the same server.

Kafka Cluster with Multiple/Distributed Servers: Topic partition logs are distributed across all the servers in the Kafka cluster, and each server handles data and requests for its share of partitions. If replication is configured, the servers keep the configured number of copies of the partition logs distributed across servers for fault tolerance.

How does a Kafka cluster load balance across multiple/distributed servers?

For each topic's partition log, one server/broker acts as the "leader" while the others are followers (in a multi-server/distributed setup). The leader handles all read and write requests from producers and consumers, while the followers replicate the leader's partition. If the leader fails or its server goes down, one of the followers becomes the leader and the remaining servers stay followers. For more detail, go to Kafka Cluster with multiple servers on the same machine.

Read More on Kafka

Integration

Integrate Filebeat, Kafka, Logstash, Elasticsearch and Kibana