
Elasticsearch Installation in Linux


To install Elasticsearch on Linux, download the .tar.gz file from the download link below.

Latest Version : elasticsearch-5.4.0

Download Link : https://www.elastic.co/downloads/elasticsearch

Pre-Requisites:

  • Java 8+
  • Set the JAVA_HOME environment variable to your JDK directory, either by running the command below manually or by adding it to .bash_profile in your home directory.
export JAVA_HOME=/opt/app/FACING_ISSUE_IN_IT/JAVA/jdk1.8.0_66
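You can quickly verify the setting as below (the exact version output will vary with your JDK build):

# confirm the variable and the Java version it points to
echo $JAVA_HOME
$JAVA_HOME/bin/java -version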

How to Install Elasticsearch?

Untar the downloaded .tar.gz file in the directory where you want to install Elasticsearch, using the command below.

tar -zxvf elasticsearch-5.4.0.tar.gz

The extracted directory structure will look as below.

(Screenshot: Elasticsearch installation directory structure)
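If the screenshot does not render, a typical listing looks like the sketch below (the data and logs directories may appear only after the first run):

ls elasticsearch-5.4.0
bin  config  data  lib  logs  modules  plugins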

Below is a description of each of these directories.

  • bin: scripts to start Elasticsearch, install plugins, etc.
  • config: holds elasticsearch.yml (the Elasticsearch configuration) and jvm.options (JVM settings such as heap size).
  • data: the default directory where a node keeps its data; configurable by changing path.data in elasticsearch.yml.
  • lib: all JAR files required by Elasticsearch.
  • logs: the default directory for Elasticsearch logs; configurable by changing path.logs in elasticsearch.yml.
  • modules: built-in functionality and processors required for handling data.
  • plugins: all installed plugins are stored here.

Elasticsearch Configuration

Before starting Elasticsearch, make some basic changes in the config/elasticsearch.yml file.

cluster.name: FACING-ISSUE-IN-IT
node.name: TEST-NODE-1
#network.host: 0.0.0.0
http.port: 9200

cluster.name: A unique name within the network; all nodes sharing this name join the same cluster.

node.name: Each node in the cluster has a unique name by which the cluster identifies it.

http.port: The default HTTP port for Elasticsearch is 9200. You can change it if needed.

network.host: Set this property when Elasticsearch needs to be accessible from other machines or by IP. For more on network.host, follow the link Why network.host?

Now Elasticsearch is ready to start.

How to start Elasticsearch?

There are multiple ways to start Elasticsearch, as below.

  • Run in foreground
  • Run in background
  • Run in background with commandline arguments

Run in Foreground

To run Elasticsearch on a Linux server as a foreground process, so that sysout appears in the console, use the command below from the Elasticsearch home directory. When you see "started" as in the screen below, Elasticsearch has started successfully.

./bin/elasticsearch

(Screenshot: Elasticsearch startup console output)

In the above screen, Elasticsearch has started on port 9200.

Note: To stop Elasticsearch, use Ctrl+C.

Run in Background

Use screen -d -m to run Elasticsearch in the background, or pass the -d flag:

screen -d -m ./bin/elasticsearch
or
./bin/elasticsearch -d

Run in Background with Command-line Arguments

Instead of hard-coding values in the elasticsearch.yml file, we can pass all of the configurable fields above from the command line, as given below. For more detail, follow the link Elasticsearch Cluster with multi node on same machine.

./bin/elasticsearch -d -Ecluster.name=FACING-ISSUE-IN-IT \
  -Enode.name=TEST-NODE-1 -Ehttp.port=9200

Here -E passes the name and value of a setting on the command line.

-d runs Elasticsearch as a daemon in the background.

Note: To stop Elasticsearch running in the background, find the process as below and kill its process id.
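For example (the process id will differ on your machine):

# find the Elasticsearch process id
ps aux | grep elasticsearch
# stop it by passing that process id to kill
kill <processId>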

Now Elasticsearch is running, and it is time to test.

To test the Elasticsearch cluster status, copy the link below into your browser address bar. You will get a result like the one below.

http://localhost:9200/_cluster/health?pretty

or, as below, if network.host is configured:

http://<elasticsearch-server-ip>:9200/_cluster/health?pretty
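The same check also works from the command line with curl, assuming the default port:

curl 'http://localhost:9200/_cluster/health?pretty'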

Result :

{
  "cluster_name" : "FACING-ISSUE-IN-IT",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}

To learn more about clusters, follow the link Cluster Configuration and Health.

In the same way, you can get details about your configured node from the URLs below.

http://localhost:9200/_nodes?pretty

or, as below, if network.host is configured:

http://<elasticsearch-server-ip>:9200/_nodes?pretty

To learn more about nodes, follow the link Node Type, Configuration and Status. Follow the link for Elasticsearch Tutorials.

Read More

To read more on Elasticsearch configuration, sample Elasticsearch REST clients, and query type configurations with examples, follow the links Elasticsearch Tutorial and Elasticsearch Issues.

Hope this blog was helpful for you.

Leave your feedback to enhance this topic further and make it more helpful for others.


Elasticsearch Installation in Windows


To install Elasticsearch on Windows, download the .zip file from the link below.

Latest Version : elasticsearch-5.4.0

Download Link : https://www.elastic.co/downloads/elasticsearch

Pre-Requisite: Java 8+

How to Install Elasticsearch?

Unzip the downloaded .zip file in the directory where you want to install Elasticsearch; your unzipped directory structure will look as below.

(Screenshot: Elasticsearch installation directory structure)

Below is a description of each of these directories.

  • bin: scripts to start Elasticsearch, install plugins, etc.
  • config: holds elasticsearch.yml (the Elasticsearch configuration) and jvm.options (JVM settings such as heap size).
  • data: the default directory where a node keeps its data; configurable by changing path.data in elasticsearch.yml.
  • lib: all JAR files required by Elasticsearch.
  • logs: the default directory for Elasticsearch logs; configurable by changing path.logs in elasticsearch.yml.
  • modules: built-in functionality and processors required for handling data.
  • plugins: all installed plugins are stored here.

Elasticsearch Configuration

Before starting Elasticsearch, make some basic changes in the config/elasticsearch.yml file.

cluster.name: FACING-ISSUE-IN-IT
node.name: TEST-NODE-1
#network.host: 0.0.0.0
http.port: 9200

cluster.name: A unique name within the network; all nodes sharing this name join the same cluster.

node.name: Each node in the cluster has a unique name by which the cluster identifies it.

http.port: The default HTTP port for Elasticsearch is 9200. You can change it if needed.

network.host: Set this property when Elasticsearch needs to be accessible from other machines or by IP. For more on network.host, follow the link Why network.host?

Now Elasticsearch is ready to start.

How to start Elasticsearch?

Run the elasticsearch.bat file inside the bin directory; you will see console sysout like the screen below when it has started successfully.
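You can also start it from a Command Prompt; the install path below is only an example, so adjust it to your unzip location:

cd C:\elasticsearch-5.4.0\bin
elasticsearch.bat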

(Screenshot: Elasticsearch startup console output)

As you can see from the above screen, Elasticsearch has started successfully on port 9200.

To test the Elasticsearch cluster status, copy the link below into your browser address bar. You will get a result like the one below.

http://localhost:9200/_cluster/health?pretty

or, as below, if network.host is configured:

http://<elasticsearch-server-ip>:9200/_cluster/health?pretty

Result :

{
  "cluster_name" : "FACING-ISSUE-IN-IT",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}

To learn more about clusters, follow the link Cluster Configuration and Health.

In the same way, you can get details about your configured node from the URLs below.

http://localhost:9200/_nodes?pretty

or, as below, if network.host is configured:

http://<elasticsearch-server-ip>:9200/_nodes?pretty

To learn more about nodes, follow the link Node Type, Configuration and Status. Go to the link below for Elasticsearch Tutorials.

Read More

To read more on Elasticsearch configuration, sample Elasticsearch REST clients, and query type configurations with examples, follow the links Elasticsearch Tutorial and Elasticsearch Issues.

Hope this blog was helpful for you.

Leave your feedback to enhance this topic further and make it more helpful for others.

Setup Kafka Cluster for Single Server/Broker


To set up a Kafka cluster with a single broker, follow the steps below:

Download and Installation

Download the latest version of Kafka from the download link, copy it to the installation directory, and run the command below to install it.

tar -zxvf kafka_2.11-0.10.0.0.tgz

Configuration Changes for Zookeeper and Server

Make the changes below in the zookeeper.properties configuration file in the config directory.

config/zookeeper.properties

clientPort=2181

clientPort is the port where clients connect; the default is 2181. If you change it in zookeeper.properties, you must update zookeeper.connect in server.properties below as well.

Make the changes below in the server.properties configuration file in the config directory.

config/server.properties:

broker.id=0
listeners=PLAINTEXT://:9092
log.dir=/tmp/kafka-logs
zookeeper.connect=localhost:2181

By default, the server.properties file has the above fields with their default values.

broker.id: A unique broker id by which ZooKeeper recognizes brokers in the Kafka cluster. If the cluster has multiple servers, broker ids are assigned in incremental order.

listeners: Each broker runs on a different port; the default broker port is 9092 and can be changed.

log.dir: The path where Kafka stores stream records. By default it points to /tmp/kafka-logs.

For more properties you can change in the server.properties file, follow the link Kafka Server Properties Configuration.

Start Zookeeper and Server

Run the scripts below from the Kafka directory:

screen -d -m bin/zookeeper-server-start.sh config/zookeeper.properties
screen -d -m bin/kafka-server-start.sh config/server.properties

Check status of Zookeeper & Server

The commands below will show the running ZooKeeper and Kafka server processes:

ps aux | grep zookeeper.properties
ps aux | grep server.properties
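To confirm the listening ports as well, a quick sketch (flags vary by distribution; ss is a common alternative to netstat):

# ZooKeeper listens on 2181 and the Kafka broker on 9092 by default
netstat -tlnp | grep -E '2181|9092'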

Now Kafka is ready to create topics and to publish and subscribe messages.

Create a Topic and Check Status

Create a topic with a user-defined name, passing the replication factor and number of partitions for the topic. For more on how partitions are stored in a Kafka cluster environment, follow the link Kafka Introduction and Architecture.

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test

Result:
Created topic "test".

The above command creates the topic test with 1 partition and a replication factor of 1.

List of Available Topics in Zookeeper

Run the command below to get the list of topics:

bin/kafka-topics.sh --list --zookeeper localhost:2181

Result:
test

Description of Topic

bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test

Result:
Topic:test      PartitionCount:1        ReplicationFactor:1     Configs:
Topic: test     Partition: 0    Leader: 0       Replicas: 0     Isr: 0

In the above command's response, the first line gives a summary of all the partitions, and each additional line provides information about one partition. We have only one line for this topic because there is one partition.

  • “leader” is the broker responsible for all reads and writes for the given partition. Each broker will be the leader for a randomly selected portion of the partitions.
  • “replicas” is the list of brokers that replicate the log for this partition regardless of whether they are the leader or even if they are currently alive.
  • “isr” is the set of “in-sync” replicas. This is the subset of the replicas list that is currently alive and caught-up to the leader.

In the above example, broker 0 is the leader for the topic's single partition. The topic has a single replica, on broker 0, because there is only one server in the cluster.

Publish Messages to Topic

To test the topic, publish messages to it by running the command below:

bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test

Input Messages:
Hi Dear
How r u doing?
Where are u these days?

After being published to the topic, these messages are retained according to the log retention configured for the server, whether or not a consumer has read them. For retention policy configuration, follow the link Kafka Server Properties Configuration.
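For reference, a minimal sketch of retention settings in config/server.properties (the values shown are Kafka's defaults):

# keep log segments for 7 days before deleting them
log.retention.hours=168
# maximum size of a single log segment file (1 GB)
log.segment.bytes=1073741824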

Subscribe Messages by Consumer from Topic

Run the command below to get all published messages from the test topic. It will return all messages from the beginning.

bin/kafka-console-consumer.sh --zookeeper localhost:2181 --from-beginning --topic test

Output Messages:

Hi Dear
How r u doing?
Where are u these days?

Read More on Kafka

Integration

Integrate Filebeat, Kafka, Logstash, Elasticsearch and Kibana

Filebeat Download, Installation and Start/Run


Filebeat Latest Version : 6.2.4

Filebeat Download Link :  https://www.elastic.co/downloads/beats/filebeat

Download Filebeat from the above link according to your operating system and copy it to the directory where you want to install it.

Installation on Linux: Go to the directory where the tar file was copied and use the command below to install it.


tar -zxvf filebeat-<version>-linux-x86.tar.gz

Installation on Windows: Go to the directory where the zip file was copied and unzip the file.


unzip filebeat-<version>-windows-xxx.zip

Before testing and running the Filebeat installation, make the configuration changes below in the filebeat.yml file for prospectors, output, logging, etc. The prospector changes are required; the rest are optional and should be decided based on application requirements.

Required Change:
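A minimal prospector sketch for Filebeat 6.x is given below; the log path is a hypothetical example, so point it at your application's log files:

filebeat.prospectors:
# read application log files line by line as events
- type: log
  enabled: true
  paths:
    - /var/log/app/*.log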

Optional Change: Based on your application requirements.
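As one sketch of optional settings, assuming events are shipped to a local Elasticsearch:

# send events to Elasticsearch (host is an assumption; use your server)
output.elasticsearch:
  hosts: ["localhost:9200"]
# verbosity of Filebeat's own log output
logging.level: info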

Run/Start Filebeat on Linux:


./filebeat -e -c filebeat.yml -d "publish"

To run Filebeat in the background, prefix the command with "screen -d -m" as given below:


screen -d -m ./filebeat -e -c filebeat.yml -d "publish"

To log Filebeat output to a log file, remove the -e option from the command as given below, and follow the link Filebeat Configuration Changes for Logging for more info.


./filebeat -c filebeat.yml -d "publish"

screen -d -m ./filebeat -c filebeat.yml -d "publish"

Filebeat 5 added the ability to pass command-line arguments while starting Filebeat. This is helpful because no server-specific change is required in the filebeat.yml configuration file; server-specific information is passed on the command line instead. If your servers scale in the future, or the output port or machine IP for Elasticsearch, Kafka, or Logstash changes, the configuration team only needs to update the command-line arguments, with no change to the configuration file.

Run/Start Filebeat with Command-line Arguments in Foreground:


./filebeat -c filebeat.yml -d publish -E server= -E file=app1.log -E tz=CDT -E kafkaHost=IP:PORT

Run in Background:


screen -d -m ./filebeat -c filebeat.yml -d publish -E server= -E file=app1.log -E tz=CDT -E kafkaHost=IP:PORT

Here, the -E option passes argument values from the command line, which are set at the respective positions in the filebeat.yml configuration file. Follow the link Filebeat Command-line Arguments setting in configuration file.
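As a sketch of the idea: the names server, file, tz, and kafkaHost are this blog's custom keys rather than built-in Filebeat settings, and assuming Beats-style ${} setting expansion they can be referenced in filebeat.yml like this:

# values arrive from the command line via -E file=... -E kafkaHost=...
filebeat.prospectors:
- type: log
  paths:
    - /var/log/${file}
output.kafka:
  # Kafka broker address supplied at start-up
  hosts: ["${kafkaHost}"]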

Integration

Complete Integration Example Filebeat, Kafka, Logstash, Elasticsearch and Kibana

Read More

To read more on Filebeat topics, sample configuration files, and integration with other systems with examples, follow the links Filebeat Tutorial and Filebeat Issues. To know more about YAML/YML, follow YAML Tutorials.

Leave your feedback to enhance this topic further and make it more helpful for others.