
Spring Web and REST Annotations List


Here you will learn about the Spring Web & REST annotations most commonly used while developing applications:


Spring Web & REST Annotations

@RequestMapping

@RequestMapping can be configured using path or its aliases, name and value. @RequestMapping can be applied at class level and at method level on request handler methods inside @Controller classes.
@RequestMapping parameters (a combined sketch follows this list):

  • path (or its alias value) : the URL the method is mapped to
  • method : the compatible HTTP methods
  • params : filters requests based on the presence, absence or value of HTTP parameters
  • headers : filters requests based on the presence, absence or value of HTTP headers
  • consumes : which media types the method can consume from the HTTP request body
  • produces : which media types the method can produce in the HTTP response body
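
As a hedged illustration of several of these attributes together (the /cars/report path, the type parameter, the X-API-Version header and the response body below are illustrative assumptions):

@Controller
class CarReportController {

    // Matched only for GET requests to /cars/report that carry the query
    // parameter type=summary and the header X-API-Version=1, and that
    // accept a JSON response
    @RequestMapping(
        value = "/cars/report",
        method = RequestMethod.GET,
        params = "type=summary",
        headers = "X-API-Version=1",
        produces = MediaType.APPLICATION_JSON_VALUE)
    @ResponseBody
    String carReport() {
        return "{\"report\":\"summary\"}";
    }
}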

@RequestMapping variants for HTTP methods:

  • @GetMapping : for GET method
  • @PostMapping : for POST method
  • @PutMapping : for PUT method
  • @DeleteMapping : for DELETE method
  • @PatchMapping : for PATCH method

Example : Method level

@Controller
class CarController {

    @RequestMapping(value = "/cars/welcome", method = RequestMethod.GET)
    String welcome() {
        return "Facing Issues On IT";
    }
}

Example : Class level (equivalent to the method-level example above)
@RequestMapping at class level provides default settings for all handler methods in a @Controller class; the URL of a handler method is the combination of the class-level and method-level paths.

@Controller
@RequestMapping(value = "/cars", method = RequestMethod.GET)
class CarController {

    @RequestMapping("/welcome")
    String welcome() {
        return "Facing Issues on IT";
    }
}

@RequestBody

@RequestBody maps the body of the HTTP request to an object. Deserialization is automatic and depends on the content type of the request.
Example

@RequestMapping(value="/car/create", method=RequestMethod.POST)
public ResponseEntity createCar(@RequestBody Car , UriComponentsBuilder ucBuilder){
    System.out.println("Creating Car "+car.getName());

    if(userService.isUserExist(user)){
        System.out.println("A Car with name "+car.getName()+" already exist");
        return new ResponseEntity(HttpStatus.CONFLICT);
    }

    carService.saveCar(car);

    HttpHeaders headers = new HttpHeaders();
    headers.setLocation(ucBuilder.path("/user/{id}").buildAndExpand(car.getId()).toUri());
    return new ResponseEntity(headers, HttpStatus.CREATED);
}

@PathVariable

@PathVariable indicates that a method argument is bound to a URI template variable. We can specify the URI template with the @RequestMapping annotation and bind a method argument to one of the template parts with @PathVariable. We can achieve this with the name or its alias, the value argument.
Example : Path variable with different name

@RequestMapping("/{id}")
Car getCar(@PathVariable("id") long carId) {
    // ...
}

Example : Path variable with same name
If the name of the template part matches the name of the method argument, there is no need to specify the name in the @PathVariable annotation.

@RequestMapping("/{carId}")
Car getCar(@PathVariable long carId) {
    // ...
}

Example : Path variable as optional
We can make a @PathVariable optional by setting its required argument to false.

@RequestMapping("/{id}")
Car getCar(@PathVariable(required = false) Long id) {
    // ...
}

@RequestParam

@RequestParam is used to access HTTP request parameters. It has the same configuration options as the @PathVariable annotation. Similarly, HTTP cookies and headers can be accessed with the @CookieValue and @RequestHeader annotations respectively.

Example

@RequestMapping
Car getCarDetailByParam(@RequestParam("id") long id) {
    // ...
}

Example : Set defaultValue to make required false
With @RequestParam we can inject a default value by setting defaultValue (which implicitly makes required false); the default is used when Spring finds no value, or an empty value, in the request.

@RequestMapping("/price")
Car buyCar(@RequestParam(defaultValue = "6") int seatCount) {
    // ...
}

@ResponseBody

If we mark a request handler method with @ResponseBody, Spring treats the result of the method as the response body itself.

If we annotate a @Controller class with the @ResponseBody annotation, all its request handler methods will use it.

Example

@ResponseBody
@RequestMapping("/hello")
String hello() {
    return "Facing Issues on IT";
}

@ExceptionHandler

@ExceptionHandler can be used to declare a custom error handler method. Spring calls this method when a request handler method throws any of the specified exceptions. The caught exception can be passed to the method as an argument:
Example


@ExceptionHandler(IllegalAccessException.class)
void onIllegalAccessException(IllegalAccessException exception) {
    // ...
}

@ResponseStatus

The @ResponseStatus annotation is used to specify the desired HTTP status of the response when we annotate a request handler method with it. We can declare the status code with the code argument or its alias, the value argument. Also, we can provide a reason using the reason argument, and we can use it along with @ExceptionHandler:
Example

@ExceptionHandler(IllegalAccessException.class)
@ResponseStatus(HttpStatus.BAD_REQUEST)
void onIllegalAccessException(IllegalAccessException exception) {
    // ...
}

@Controller

We can define a Spring MVC controller with @Controller.
Example
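A minimal sketch of a view-based controller; the /cars/page path, the welcome view name and the message attribute are illustrative assumptions:

@Controller
class CarPageController {

    // Returns the logical view name "welcome"; the "message" attribute
    // is exposed to that view for rendering
    @RequestMapping("/cars/page")
    String carPage(Model model) {
        model.addAttribute("message", "Facing Issues On IT");
        return "welcome";
    }
}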

@RestController

The @RestController annotation combines @Controller and @ResponseBody.
Example
Both of the following are equivalent ways to define a REST controller.

@Controller
@ResponseBody
class VehicleRestController {
    // ...
}

or

@RestController
class VehicleRestController {
    // ...
}

@ModelAttribute

The @ModelAttribute annotation can access elements that are already in the model of an MVC @Controller by providing the model key.

  • Like @PathVariable and @RequestParam, there is no need to specify the model key if the argument has the same name.
  • If we annotate a method with @ModelAttribute, Spring automatically adds the method’s return value to the model.

Example

@PostMapping("/assemble")
void assembleCar(@ModelAttribute("car") Car carModel) {
    // ...
}

or, when the model attribute name is the same:

@PostMapping("/assemble")
void assembleCar(@ModelAttribute Car car) {
    // ...
}

Before Spring calls a request handler method, it invokes all @ModelAttribute annotated methods in the class. Either of the following can be used to provide the value of the car model attribute; a combined sketch follows these examples.

@ModelAttribute("car")
Car getCar() {
    // ...
}

or

@ModelAttribute
Car car() {
    // ...
}
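
Putting the two together, a hedged sketch (the class name and /cars/assemble path are assumptions): the @ModelAttribute method runs before each handler and places a Car in the model, and the handler then receives that attribute.

@Controller
class CarAssemblyController {

    // Runs before every request handler in this class; its return value
    // is added to the model under the key "car"
    @ModelAttribute("car")
    Car defaultCar() {
        return new Car();
    }

    // Receives the "car" attribute populated by the method above,
    // then returns the logical view name "assembled"
    @PostMapping("/cars/assemble")
    String assemble(@ModelAttribute("car") Car car) {
        return "assembled";
    }
}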

@CrossOrigin

The @CrossOrigin annotation enables cross-domain communication for the annotated request handler methods. If we mark a class with @CrossOrigin, it applies to all request handler methods in it.
Example

@CrossOrigin
@RequestMapping("/welcome")
String welcome() {
    return "Facing Issues On IT";
}

Elasticsearch Overview


“Elasticsearch is an open-source, cross-platform search engine developed completely in Java. It is built on top of Lucene, which provides full-text search on high volumes of data quickly, and it makes analysis easy based on indexing. It is schema-free and provides NRT (Near Real Time) search results.”

Advantages of Elasticsearch

Full Text Search 

Elasticsearch is built on top of Lucene, which provides a full-featured open-source library for full-text search.

Schema Free 

Elasticsearch stores documents in JSON format and, based on it, detects field values and types to make them searchable.

Restful API 

Elasticsearch is easily accessible from a browser by using a URL and also supports a RESTful API to perform operations. To read more on Elasticsearch REST, follow the link Elasticsearch REST JAVA API Overview.

Operation Persistence

An Elasticsearch cluster keeps a record of all transaction-level changes for an index whenever its data changes, and it tracks the availability of nodes in the cluster, so that data remains easily available on fail-over of any node.

Where is Elasticsearch used?

  • It is useful in applications that need to do analysis and statistics and to find anomalies in data based on patterns.
  • It is useful where alerts need to be sent when a particular condition is matched, such as stock market movements or exceptions from logs.
  • It is useful in applications for log analysis and issue resolution, because it can do full-text search over billions of records in milliseconds.
  • It is compatible with applications like Filebeat, Logstash and Kibana for storing high volumes of data for analysis and visualizing them in the form of charts and dashboards.

Basic Concepts and Terminology

Cluster

A cluster is a collection of one or more nodes which provides the capability to search text across data scattered over the nodes. It is identified by a unique name within the network so that all associated nodes join together under that cluster name.

For more info on Cluster configuration and query follow link Elasticsearch Cluster.

Elasticsearch Cluster

In the above screen, the Elasticsearch cluster “FACING_ISSUE_IN_IT” has three master nodes and four data nodes.

Node

A node is an Elasticsearch server associated with a cluster. It stores data and helps the cluster with indexing data and search queries. It is identified by a unique name in the cluster; if a name is not provided, Elasticsearch generates a random Universally Unique Identifier (UUID) at server start time.

A cluster can have one or more nodes. The first node to start forms a cluster with a single node, and other nodes that start later join that cluster.

For more info on Node Configuration, Master Node, Data Node, Ingest node follow link Elasticsearch Node.

Data Node Documents Storage

The above screen represents the data of two indexes, I1 and I2. Index I1 has two document types, T1 and T2, while index I2 has only type T2, and these shards are distributed over all nodes in the cluster. This data node holds documents of shard S1 for index I1 and shard S3 for index I2. It also keeps replicas of documents of shard S2 of indexes I1 and I2, which are stored on other nodes in the cluster.

Index

An index is a collection of documents with the same characteristics, stored on nodes in a distributed fashion and identified by a unique name, against which operations like search queries, updates and deletes are performed. A cluster can have as many indexes as needed, each with a unique name.

A document is stored in an index and assigned a type, and an index can have multiple types of documents.

For more info on Index Creation, Mapping Template , CRUD follow link Elasticsearch Index.

Shards

Shards are partitions of an index scattered over the nodes. They provide the capability to store a large number (billions) of documents for the same index in a cluster, even when no single node’s disk is capable of storing it all.

Replica

A replica is a copy of a shard which is stored on a different node. A shard can have zero or more replicas. If a shard is on one node, its replica is stored on another node.

Benefits of Shards and Replica

  • Shards split an index into horizontal partitions to handle high volumes of data.
  • Operations run in parallel on each shard or replica of an index across multiple nodes, which increases system performance and throughput.
  • Data is recovered easily on fail-over of a node, because a replica of the data exists on another node: replicas are always stored on a different node than their shards.

Note

When we create an index, by default Elasticsearch configures it with 5 shards and 1 replica, but we can configure this in the config/elasticsearch.yml file or by passing shard and replica values in the mapping when the index is created.

Once an index is created we can’t change its shard configuration, but we can modify the number of replicas. If the shards need to change, the only option is re-indexing.

Each shard is itself a Lucene index and can hold at most 2,147,483,519 (= Integer.MAX_VALUE – 128) documents. Merging of search results and fail-over are taken care of by the Elasticsearch cluster.
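
As a hedged sketch, shard and replica counts can also be set when an index is created through the low-level REST client used later in this post; the host, index name and counts below are assumptions.

import java.util.Collections;

import org.apache.http.HttpHost;
import org.apache.http.entity.ContentType;
import org.apache.http.nio.entity.NStringEntity;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

public class CreateIndexWithSettings {

	public static void main(String[] args) throws Exception {
		RestClient client = RestClient.builder(new HttpHost("elasticHost", 9200)).build();
		// Create index "app1-logs-2017.06.10" with 3 shards and 2 replicas
		String settings = "{\"settings\":{\"number_of_shards\":3,\"number_of_replicas\":2}}";
		NStringEntity entity = new NStringEntity(settings, ContentType.APPLICATION_JSON);
		Response response = client.performRequest("PUT", "/app1-logs-2017.06.10",
				Collections.<String, String>emptyMap(), entity);
		System.out.println(response.getStatusLine());
		client.close();
	}
}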

For more info on elasticsearch Shards and Replica follow Elasticsearch Shards and Replica configuration.

Document

Each record stored in an index is called a document and is stored as a JSON object. JSON data exchange is fast over the internet and easy to handle for display on the browser side.

Read More

To read more on Elasticsearch configuration, sample Elasticsearch REST clients, and search query types with examples, follow the links Elasticsearch Tutorial and Elasticsearch Issues.

Hope this blog was helpful for you.

Leave your feedback to enhance this topic further and make it more helpful for others.

Elasticsearch REST Index Manager Auto Client for CRUD


The Elasticsearch 5 REST Java Index Manager Auto Client helps manage the index lifecycle from the client side by configuring when to keep indexes open, close them, or delete them, with no third-party tool required.

The steps below for automatic index management will save you the time of managing indexes manually and will take care of the index lifecycle based on the configured times.

Pre-requisite

  • Java 8 or higher is required.
  • Add the dependencies for the Elasticsearch REST client and JSON mapping in your pom.xml, or add them to your classpath.
  • The index name format should be like IndexName-2017.06.10, for example app1-logs-2017.06.08; if you have a different date format, change the code below accordingly.

We will follow the below steps to create this client and make it run automatically:

  • Create the Java Maven project ElasticsearchAutoIndexManager.
  • Add the ElasticSearchIndexManagerClient class to the project.
  • Test it.
  • Create an auto-run jar for the project.
  • Create a script file to run the jar.
  • Create a crontab configuration to schedule it and receive email alerts.

Create Java Maven Project ElasticsearchAutoIndexManager

Create a console-based Java Maven project named ElasticsearchAutoIndexManager, as in the below screenshot. To know more about console-based Java Maven projects, follow the link How to create Maven Java Console Project?

Elasticsearch REST Auto Client

Add the below dependencies in the pom.xml file:

<!--Elasticsearch REST jar-->
<dependency>
			<groupId>org.elasticsearch.client</groupId>
			<artifactId>rest</artifactId>
			<version>5.1.2</version>
</dependency>
<!--Jackson jar for mapping json to Java -->
<dependency>
			<groupId>com.fasterxml.jackson.core</groupId>
			<artifactId>jackson-databind</artifactId>
			<version>2.8.5</version>
</dependency>

Add the below ElasticSearchIndexManagerClient class in the com.facingissuesonit.es package and change the constant fields below as per your server info and requirements.

Set INDEX_NO_ACTION_TIME so that no action is taken until the index is older than this number of days. For example, a value of 2 means an index remains searchable in the system for two days.

private static final int INDEX_NO_ACTION_TIME = 2; 

Set INDEX_CLOSE_TIME to control when indexes move to the closed status, meaning they still exist on the Elasticsearch server but are not searchable. For example, a value of 5 means indexes older than five days are closed and kept that way until the index delete time is reached.

private static final int INDEX_CLOSE_TIME = 5; 

Set INDEX_DELETE_TIME to decide when to delete indexes that match this criterion. For example, a value of 15 means all indexes with an index life of more than 15 days are deleted.

private static final int INDEX_DELETE_TIME = 15;

private static final String ELASTICSEARCH_SERVER = "ServerHost";

private static final int ELASTICSEARCH_SERVER_PORT = 9200;

Note : Set the proxy server and login credential information if required; otherwise comment those lines out.

package com.facingissuesonit.es;

import java.io.IOException;
import java.io.InputStream;
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.time.temporal.ChronoUnit;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

import org.apache.http.HttpEntity;
import org.apache.http.HttpHost;
import org.apache.http.auth.AuthScope;
import org.apache.http.auth.UsernamePasswordCredentials;
import org.apache.http.client.CredentialsProvider;
import org.apache.http.impl.client.BasicCredentialsProvider;
import org.apache.http.impl.nio.client.HttpAsyncClientBuilder;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestClientBuilder;

import com.fasterxml.jackson.databind.ObjectMapper;

public class ElasticSearchIndexManagerClient {
	private static final int INDEX_NO_ACTION_TIME = 2;
	private static final int INDEX_CLOSE_TIME = 5;
	private static final int INDEX_DELETE_TIME = 15;
	private static final String ELASTICSEARCH_SERVER = "ServerHost";
	private static final int ELASTICSEARCH_SERVER_PORT = 9200;
	public static void main(String[] args) {
		RestClient client;
		String indexName = "", indexDateStr = "";
		LocalDate indexDate = null;
		long days = 0;
		final DateTimeFormatter formatter = DateTimeFormatter.ofPattern("yyyy-MM-dd");
		final LocalDate todayLocalDate = LocalDate.now();

		try {
			ElasticSearchIndexManagerClient esManager=new ElasticSearchIndexManagerClient();
			//Get Connection from Elasticsearch
			client=esManager.getElasticsearchConnectionClient();
			if(client!=null)
			{
				IndexDetail[] indexList = esManager.getIndexDetailList(client);

				if (indexList != null && indexList.length > 0) {
					for (IndexDetail indexDetail : indexList) {
						indexName = indexDetail.getIndexName();
						System.out.println(indexName);
						indexDateStr = indexName.substring(indexName.lastIndexOf("-") + 1);
						//Below code gets the number of days difference between index creation and the current date
						try {
							indexDate = LocalDate.parse(indexDateStr.replace('.', '-'), formatter);
							days = ChronoUnit.DAYS.between(indexDate, todayLocalDate);
							esManager.performAction(indexDetail, days,client);
						} catch (Exception ex) {
							System.out.println("Index is not having formatted date as required : yyyy.MM.dd :"+indexName);
						}
					}
				}
			}
		} catch (Exception ex) {
			System.out.println("Exception found while index management");
			ex.printStackTrace();
			System.exit(1);
		} finally {
			System.out.println("Index Management successfully completed");
			System.exit(0);
		}
	}
	//Get Elasticsearch Connection
	private RestClient getElasticsearchConnectionClient() {
		final CredentialsProvider credentialsProvider = new BasicCredentialsProvider();
		credentialsProvider.setCredentials(AuthScope.ANY, new UsernamePasswordCredentials("userid", "password"));

		RestClient client = RestClient
				.builder(new HttpHost(ELASTICSEARCH_SERVER,ELASTICSEARCH_SERVER_PORT))
				.setHttpClientConfigCallback(new RestClientBuilder.HttpClientConfigCallback() {

					public HttpAsyncClientBuilder customizeHttpClient(HttpAsyncClientBuilder httpClientBuilder) {
						return httpClientBuilder.setDefaultCredentialsProvider(credentialsProvider)
								.setProxy(new HttpHost("ProxyHost", 8080)); // proxy host and port (HttpHost requires an int port)

					}
				}).setMaxRetryTimeoutMillis(60000).build();
		return client;
	}
	//Get list of indexes from the Elasticsearch server
	public IndexDetail[] getIndexDetailList(RestClient client)
	{
		IndexDetail[] indexDetails=null;
		HttpEntity in=null;
		try
		{
		ObjectMapper jacksonObjectMapper = new ObjectMapper();
		Response response = client.performRequest("GET", "/_cat/indices?format=json&pretty", Collections.singletonMap("pretty", "true"));
		in =response.getEntity();
		indexDetails=jacksonObjectMapper.readValue(in.getContent(), IndexDetail[].class);
		System.out.println("Index found :"+indexDetails.length);
		}
		catch(IOException ex)
		{
			ex.printStackTrace();
		}

		return indexDetails;
	}
	//This method decides what action to take based on the index creation date and the configured thresholds for no action, close and delete
	private  void performAction(IndexDetail indexDetail, long days,RestClient client) {
		String indexName = indexDetail.getIndexName();
		if (days >= INDEX_NO_ACTION_TIME) {
			if (!(indexDetail.getStatus() != null && indexDetail.getStatus().equalsIgnoreCase("close"))) {
				// Close index condition
				if (days >= INDEX_CLOSE_TIME) {
					System.out.println("Close Index :" + indexName);
					closeIndex(indexName,client);
				}
			}
			// Delete index condition
			if (days >= INDEX_DELETE_TIME) {
				if (!(indexDetail.getStatus() != null && indexDetail.getStatus().equalsIgnoreCase("close"))) {
					System.out.println("Delete Index :" + indexName);
					deleteIndex(indexName,client);
				} else {
					System.out.println("Delete Close Index :" + indexName);
					deleteCloseIndex(indexName,client);
				}
			}
		}
	}

	// Operation on Indexes
		private  void closeIndex(String indexName,RestClient client) {

			flushIndex(indexName,client);
			postDocuments(indexName + "/_close", client);
			System.out.println("Close Index :" + indexName);
		}

		private  void deleteIndex(String indexName,RestClient client) {
			flushIndex(indexName,client);
			deleteDocument(indexName,client);
			System.out.println("Delete Index :" + indexName);
		}

		private  void deleteCloseIndex(String indexName,RestClient client) {
			openIndex(indexName,client);
			flushIndex(indexName,client);
			deleteDocument(indexName, client);
			System.out.println("Delete Close Index :" + indexName);
		}

		private  void openIndex(String indexName,RestClient client) {
			postDocuments(indexName + "/_open", client);
			System.out.println("Open Index :" + indexName);
		}

		private  void flushIndex(String indexName,RestClient client) {
			postDocuments(indexName + "/_flush", client);
			System.out.println("Flush Index :" + indexName);
			try {
				Thread.sleep(3000);
			} catch (InterruptedException ex) {
				ex.printStackTrace();
			}
		}
		//POST action used for operations such as open, close and flush on indexes
		public InputStream postDocuments(String endPoint,RestClient client)
		{
			InputStream in=null;
			Response response=null;
			try
			{
				response = client.performRequest("POST", endPoint, Collections.<String, String>emptyMap());
				in =response.getEntity().getContent();
			}
			catch(IOException ex)
			{
				System.out.println("Exception in post Documents :");
				ex.printStackTrace();
			}
			return in;
		}
		//DELETE action used for deleting an index
		public InputStream deleteDocument(String endPoint,RestClient client)
		{
			InputStream in=null;
			try
			{

		    Response response = client.performRequest("DELETE", endPoint, Collections.singletonMap("pretty", "true"));
			in =response.getEntity().getContent();
			}
			catch(IOException ex)
			{
				System.out.println("Exception in delete Documents :");
				ex.printStackTrace();
			}
			return in;
		}

}
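
The client above assumes an IndexDetail POJO that maps the JSON returned by the /_cat/indices endpoint; a minimal sketch is given below (only the index name and status fields used by the manager are mapped, and unknown JSON properties are ignored).

package com.facingissuesonit.es;

import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.annotation.JsonProperty;

@JsonIgnoreProperties(ignoreUnknown = true)
public class IndexDetail {

	// Index name as returned by /_cat/indices?format=json (property "index")
	@JsonProperty(value = "index")
	private String indexName;
	// Index status: "open" or "close"
	@JsonProperty(value = "status")
	private String status;

	public String getIndexName() {
		return indexName;
	}
	public void setIndexName(String indexName) {
		this.indexName = indexName;
	}
	public String getStatus() {
		return status;
	}
	public void setStatus(String status) {
		this.status = status;
	}
}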

The above code is pretty straightforward and step by step. Let me explain the operations below.

Open : An index in open status is available for searching, and we can perform operations like close and delete on it while it is open. To open an index we can use the below curl command.

curl -XPOST 'http://elasticsearchHost:9200/indexName-2017.06.10/_open'

Flush : This operation is required before executing close and delete operations on an index, so that all running transactions on the index complete.

curl -XPOST 'http://elasticsearchHost:9200/indexName-2017.06.10/_flush'

Close : A closed index persists on the Elasticsearch server but is not available for searching. To close an index we can use the below curl command.

curl -XPOST 'http://elasticsearchHost:9200/indexName-2017.06.10/_close'

Delete : The delete operation removes the index completely from the server.

curl -XDELETE 'http://elasticsearchHost:9200/indexName-2017.06.10'

Now our code is ready to take care of indexes based on the configured times; we can test it by running the code above.

The steps below make the index manager code run automatically in a Linux environment.

Create Auto Runnable Jar

Export the ElasticsearchAutoIndexManager project as a runnable jar, selecting ElasticSearchIndexManagerClient as the launch class. To learn the steps for creating an auto-runnable jar, follow the link How to make and auto run /executable jar with dependencies?

Create Script File to Execute  Autorun Jar

Create a script file named IndexManager.sh as below and save it.

#!/bin/bash
~/JAVA/jdk1.8.0_66/bin/java  -jar /opt/app/facingissuesonit/automate/IndexManagerClient.jar

Create Crontab Configuration to Schedule and Receive Email Alerts

Linux provides crontab for executing scheduled jobs/scripts. Using crontab, we will execute the runnable jar through the above script file.

  • Use the command crontab -e to create and edit entries in the crontab.
  • Make the below cron entry in this editor to execute the IndexManager.sh script every night at 1 AM (a sample entry follows this list).
  • If you want execution alerts with console logs sent to you and your team, also add your email id as below.
  • Save the crontab with ESC then :wq
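
For reference, a sample crontab entry; the script location below is an assumption, so adjust the path and email id to your environment.

MAILTO=yourname@yourcompany.com
0 1 * * * /opt/app/facingissuesonit/automate/IndexManager.sh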

Below are some more examples of crontab expressions.

0 * * * *      : Run at the start of every hour
* * * * *      : Run every minute
30 4 * * *     : Run at 4:30 AM every day
5 10,22 * * *  : Run twice a day, at 10:05 and 22:05
5 0 * * *      : Run at 00:05, just after midnight

Read More

To read more on Elasticsearch REST, sample clients and configurations with examples, follow the links Elasticsearch REST Tutorial and Elasticsearch Issues.

Hope this blog was helpful for you.

Leave your feedback to enhance this topic further and make it more helpful for others.

Elasticsearch REST JAVA Client for Cluster Detail


Below is an example of getting cluster detail into a Java object by using the Elasticsearch REST Java client. Here the client calls the endpoint “/_cluster/health” to retrieve the cluster health details. It is the same as using GET with curl:

GET http://elasticsearchHost:9200/_cluster/health

Pre-requisite

  • Java 7 or higher is required.
  • Add the below dependencies for the Elasticsearch REST client and JSON mapping in your pom.xml, or add them to your classpath.

Dependency

<!--Elasticsearch REST jar-->
<dependency>
			<groupId>org.elasticsearch.client</groupId>
			<artifactId>rest</artifactId>
			<version>5.1.2</version>
</dependency>
<!--Jackson jar for mapping json to Java -->
<dependency>
			<groupId>com.fasterxml.jackson.core</groupId>
			<artifactId>jackson-databind</artifactId>
			<version>2.8.5</version>
</dependency>

Sample Code

import java.io.IOException;
import java.util.Collections;

import org.apache.http.HttpEntity;
import org.apache.http.HttpHost;
import org.apache.http.auth.AuthScope;
import org.apache.http.auth.UsernamePasswordCredentials;
import org.apache.http.client.CredentialsProvider;
import org.apache.http.impl.client.BasicCredentialsProvider;
import org.apache.http.impl.nio.client.HttpAsyncClientBuilder;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestClientBuilder;

import com.fasterxml.jackson.databind.ObjectMapper;

public class ElasticsearchRESTClusterClient {

	public static void main(String[] args) {
		ClusterInfo clusterHealth = null;
		RestClient client = null;
		try {
			client = openConnection();
			if (client != null) {
				// performRequest GET method will retrieve all cluster health
				// information from elastic server
				Response response = client.performRequest("GET", "/_cluster/health",
						Collections.singletonMap("pretty", "true"));
				// GetEntity api will return content of response in form of json
				// in Http Entity
				HttpEntity entity = response.getEntity();
				ObjectMapper jacksonObjectMapper = new ObjectMapper();
				// Map json response to Java object in ClusterInfo
				// Cluster Info
				clusterHealth = jacksonObjectMapper.readValue(entity.getContent(), ClusterInfo.class);
				System.out.println(clusterHealth);
			}

		} catch (Exception ex) {
			System.out.println("Exception found while getting cluster detail");
			ex.printStackTrace();
		} finally {
			closeConnnection(client);
		}

	}

	// Get Rest client connection
	private static RestClient openConnection() {
		RestClient client = null;
		try {
			final CredentialsProvider credentialsProvider = new BasicCredentialsProvider();
			credentialsProvider.setCredentials(AuthScope.ANY, new UsernamePasswordCredentials("userId", "password"));
			client = RestClient.builder(new HttpHost("elasticHost", Integer.parseInt("9200")))
					.setHttpClientConfigCallback(new RestClientBuilder.HttpClientConfigCallback() {
						// Customize connection as per requirement
						public HttpAsyncClientBuilder customizeHttpClient(HttpAsyncClientBuilder httpClientBuilder) {
							return httpClientBuilder
									// Credentials
									.setDefaultCredentialsProvider(credentialsProvider)
									// Proxy
									.setProxy(new HttpHost("ProxyServer", 8080));

						}
					}).setMaxRetryTimeoutMillis(60000).build();

		} catch (Exception ex) {
			ex.printStackTrace();
		}
		return client;
	}

	// Close Open connection
	private static void closeConnnection(RestClient client) {
		if (client != null) {
			try {
				client.close();
			} catch (IOException ex) {
				ex.printStackTrace();
			}
		}
	}

}

ClusterInfo is the Java object to which the retrieved JSON response is mapped.

import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.annotation.JsonProperty;

@JsonIgnoreProperties(ignoreUnknown = true)
public class ClusterInfo {

@JsonProperty(value = "cluster_name")
private String clusterName;
@JsonProperty(value = "status")
private String clusterStatus;
@JsonProperty(value = "active_primary_shards")
private int primaryActiveShards;
@JsonProperty(value = "active_shards")
private int activeShards;
@JsonProperty(value = "delayed_unassigned_shards")
private int delayedUnAssignedShards;
@JsonProperty(value = "unassigned_shards")
private int unAssignedShards;
@JsonProperty(value = "initializing_shards")
private int initializingShards;
@JsonProperty(value = "relocating_shards")
private int relocatingShards;
@JsonProperty(value = "number_of_nodes")
private int totalNodeCount;
@JsonProperty(value = "number_of_data_nodes")
private int dataNodeCount;

@Override
public String toString()
{
	StringBuffer str=new StringBuffer(60);
	str.append("{\n");
	str.append("    \"").append("clusterName").append("\":\"").append(clusterName).append("\",\n");
	str.append("    \"").append("clusterStatus").append("\":\"").append(clusterStatus).append("\",\n");
	str.append("    \"").append("primaryActiveShards").append("\":\"").append(primaryActiveShards).append("\",\n");
	str.append("    \"").append("activeShards").append("\":\"").append(activeShards).append("\",\n");
	str.append("    \"").append("delayedUnAssignedShards").append("\":\"").append(delayedUnAssignedShards).append("\",\n");
	str.append("    \"").append("unAssignedShards").append("\":\"").append(unAssignedShards).append("\",\n");
	str.append("    \"").append("initializingShards").append("\":\"").append(initializingShards).append("\",\n");
	str.append("    \"").append("relocatingShards").append("\":\"").append(relocatingShards).append("\",\n");
	str.append("    \"").append("totalNodeCount").append("\":\"").append(totalNodeCount).append("\",\n");
	str.append("    \"").append("dataNode").append("\":\"").append(dataNodeCount).append("\"");
	str.append("    \"");
	return str.toString();
}

public String getClusterName() {
	return clusterName;
}
public void setClusterName(String clusterName) {
	this.clusterName = clusterName;
}
public String getClusterStatus() {
	return clusterStatus;
}
public void setClusterStatus(String clusterStatus) {
	this.clusterStatus = clusterStatus;
}
public int getPrimaryActiveShards() {
	return primaryActiveShards;
}
public void setPrimaryActiveShards(int primaryActiveShards) {
	this.primaryActiveShards = primaryActiveShards;
}
public int getActiveShards() {
	return activeShards;
}
public void setActiveShards(int activeShards) {
	this.activeShards = activeShards;
}
public int getDelayedUnAssignedShards() {
	return delayedUnAssignedShards;
}
public void setDelayedUnAssignedShards(int delayedUnAssignedShards) {
	this.delayedUnAssignedShards = delayedUnAssignedShards;
}
public int getUnAssignedShards() {
	return unAssignedShards;
}
public void setUnAssignedShards(int unAssignedShards) {
	this.unAssignedShards = unAssignedShards;
}
public int getInitializingShards() {
	return initializingShards;
}
public void setInitializingShards(int initializingShards) {
	this.initializingShards = initializingShards;
}
public int getRelocatingShards() {
	return relocatingShards;
}
public void setRelocatingShards(int relocatingShards) {
	this.relocatingShards = relocatingShards;
}
public int getDataNodeCount() {
	return dataNodeCount;
}
public void setDataNodeCount(int dataNodeCount) {
	this.dataNodeCount = dataNodeCount;
}
public int getTotalNodeCount() {
	return totalNodeCount;
}
public void setTotalNodeCount(int totalNodeCount) {
	this.totalNodeCount = totalNodeCount;
}
}

Read More on Elasticsearch REST

Integration

Integrate Filebeat, Kafka, Logstash, Elasticsearch and Kibana

Elasticsearch REST Response Handling


Successful Response

The Elasticsearch REST performRequest API always returns a response: for synchronous calls as a Response object, and for asynchronous calls through a ResponseListener which receives the Response object. The Response object contains the fields given below:

Host

The getHost() API returns host information.

requestLine

The getRequestLine() API returns information about the performed request.

statusLine

The response status code is returned by calling getStatusLine().

headers

All response header information is provided by the getHeaders() API; if a specific header is needed, it can be obtained with getHeader(String).

Entity

The entity holds all the content of the response returned by the query or filters. We can get it by calling getEntity().

Failure Response or Exception

IOException

Thrown for communication problems like a socket timeout.

ResponseException

Thrown when a response is received with a status code other than 200 OK. A ResponseException does not occur for code 404, which indicates the resource is not available. A handling sketch for both the success and failure cases follows.
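
A hedged sketch showing how these response fields and exceptions can be handled with the low-level client; the host and endpoint are assumptions.

import java.io.IOException;
import java.util.Arrays;
import java.util.Collections;

import org.apache.http.HttpHost;
import org.apache.http.util.EntityUtils;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.ResponseException;
import org.elasticsearch.client.RestClient;

public class ResponseHandlingExample {

	public static void main(String[] args) {
		try (RestClient client = RestClient.builder(new HttpHost("elasticHost", 9200)).build()) {
			Response response = client.performRequest("GET", "/_cluster/health",
					Collections.singletonMap("pretty", "true"));
			// Successful response: inspect host, request line, status, headers and entity
			System.out.println("Host         : " + response.getHost());
			System.out.println("Request line : " + response.getRequestLine());
			System.out.println("Status code  : " + response.getStatusLine().getStatusCode());
			System.out.println("Headers      : " + Arrays.toString(response.getHeaders()));
			System.out.println("Entity       : " + EntityUtils.toString(response.getEntity()));
		} catch (ResponseException ex) {
			// A response was received but its status code did not indicate success
			System.out.println("Error status : " + ex.getResponse().getStatusLine());
		} catch (IOException ex) {
			// Communication problem such as a socket timeout
			ex.printStackTrace();
		}
	}
}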

Read More on Elasticsearch REST

Integration

Integrate Filebeat, Kafka, Logstash, Elasticsearch and Kibana

Elasticsearch REST Synchronous and Asynchronous performRequest APIs


The REST client can perform both synchronous and asynchronous requests. Synchronous APIs return the Response with a response code, while asynchronous APIs return void and accept an extra ResponseListener argument as a callback which is invoked on completion or failure; a usage sketch follows the parameter details below.

Synchronous  performRequest

Response performRequest(String method, String endpoint, Header... headers) throws IOException;

Response performRequest(String method, String endpoint, Map<String, String> params, Header... headers) throws IOException;

Response performRequest(String method, String endpoint, Map<String, String> params, HttpEntity entity, Header... headers) throws IOException;

Response performRequest(String method, String endpoint, Map<String, String> params, HttpEntity entity, HttpAsyncResponseConsumerFactory responseConsumerFactory, Header... headers) throws IOException;

Asynchronous  performRequest

void performRequestAsync(String method, String endpoint, ResponseListener responseListener, Header... headers);

void performRequestAsync(String method, String endpoint, Map<String, String> params, ResponseListener responseListener, Header... headers);

void performRequestAsync(String method, String endpoint, Map<String, String> params, HttpEntity entity, ResponseListener responseListener, Header... headers);

void performRequestAsync(String method, String endpoint, Map<String, String> params, HttpEntity entity, HttpAsyncResponseConsumerFactory responseConsumerFactory, ResponseListener responseListener, Header... headers);

Details about Parameters:

Method : Elasticsearch supports all REST operations like GET, POST, PUT and DELETE.
Endpoint : the URL of the Elasticsearch API to call (for example /_cat/indices to get the list of indexes on Elasticsearch).
Params : optional; passed as query string parameters.
Entity : optional; passed as the request body for method types like POST and PUT, or for filter query requests.
Headers : optional; passed if the request needs header parameters.

Additional parameters for asynchronous calls:

ResponseConsumerFactory : optional; used to create the HttpAsyncResponseConsumer that consumes the response for the request.

ResponseListener : the listener whose callback is invoked when the request completes successfully or fails.
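
A hedged usage sketch of both variants; the host, endpoints and parameters are assumptions.

import java.util.Collections;

import org.apache.http.HttpHost;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.ResponseListener;
import org.elasticsearch.client.RestClient;

public class PerformRequestExample {

	public static void main(String[] args) throws Exception {
		RestClient client = RestClient.builder(new HttpHost("elasticHost", 9200)).build();

		// Synchronous call: blocks until the Response is available
		Response response = client.performRequest("GET", "/_cat/indices",
				Collections.singletonMap("format", "json"));
		System.out.println("Sync status  : " + response.getStatusLine());

		// Asynchronous call: returns immediately, the listener is invoked on completion or failure
		client.performRequestAsync("GET", "/_cluster/health",
				Collections.singletonMap("pretty", "true"),
				new ResponseListener() {
					public void onSuccess(Response response) {
						System.out.println("Async status : " + response.getStatusLine());
					}
					public void onFailure(Exception exception) {
						exception.printStackTrace();
					}
				});

		Thread.sleep(5000); // give the asynchronous call time to complete before closing
		client.close();
	}
}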

Read More on Elasticsearch REST

Integration

Integrate Filebeat, Kafka, Logstash, Elasticsearch and Kibana