To ship server log lines directly to Elasticsearch over HTTP with Filebeat, set the following fields for the Elasticsearch output according to your Elasticsearch server configuration and follow the steps below.
- Uncomment the output.elasticsearch section in the filebeat.yml file.
- Set the host and port in the hosts line.
- Set the index name as desired. If it is not set, Filebeat creates the default index "filebeat-%{+yyyy.MM.dd}".
output.elasticsearch:
  enabled: true
  hosts: ["localhost:9200"]
  index: "app1-logs-%{+yyyy.MM.dd}"
Elasticsearch Server Credentials Configuration (if any):
- Set the username and password.
- Set the protocol to https if required; the default protocol is http.
username: "userid"
password: "pwd"
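As a sketch, with credentials and HTTPS in place the output section could look like the following (the host, username, and password are placeholders to replace with your own values):

output.elasticsearch:
  enabled: true
  protocol: "https"
  hosts: ["localhost:9200"]
  username: "userid"
  password: "pwd"
  index: "app1-logs-%{+yyyy.MM.dd}"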
Elasticsearch Index Template Configuration: We can manage the Elasticsearch index template from Filebeat; the template defines the settings and mappings that determine how fields are analyzed.
Auto Index Template Loading: If no template configuration is provided, the Filebeat package loads its default template, filebeat.template.json, to Elasticsearch and does not overwrite an existing template.
Customized Index Template Loading: We can upload a user-defined template, and also update its version, by using the configuration below.
# if set to false, the template must be uploaded manually
template.enabled: true
# default value is "filebeat"
template.name: "app1"
# default value is "filebeat.template.json"
template.path: "app1.template.json"
# default value is false
template.overwrite: false
By default, template.overwrite is false, so an index template that already exists on Elasticsearch will not be overwritten. To overwrite the index template, set this flag to true in the filebeat.yml configuration file.
Latest Template Version Loading from Filebeat: Set template.overwrite to true, and if you need to load the template for Elasticsearch 2.x, set the path of the latest template file with the configuration below.
template.overwrite: true
template.versions.2x.enabled: true
template.versions.2x.path: "${path.config}/app1.template-es2x.json"
Manual Index Template Loading: for loading the index template manually, please refer to Elasticsearch Index Template Management.
Compress Elasticsearch Output: Filebeat provides gzip compression levels from 1 to 9. As the compression level increases, processing speed decreases but less data is sent over the network. By default compression is disabled and the value is 0.
compression_level: 0
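For example, to enable moderate compression you could set a mid-range level (the value 3 here is only an illustrative choice):

compression_level: 3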
Other Configuration Options:
bulk_max_size: The default value is 50. If Filebeat generates more events than the configured batch max size, it splits the events into batches of the configured size and sends them to Elasticsearch. Increasing the batch size can improve performance but requires more buffering, and it can cause other issues such as connection errors or request timeouts.
Never set the bulk size to 0, because then there is no buffering of events and Filebeat sends each event directly to Elasticsearch.
timeout: The default value is 90 seconds. If no response is received within this time, the HTTP request times out.
flush_interval: The time to wait for new events between two bulk requests. If bulk_max_size is reached before this interval expires, an additional bulk index request is sent.
max_retries: The default value is 3. When the retry count reaches the specified limit and the events are still not published, they are dropped. Filebeat also provides the option to retry until all events are published, by setting the value to less than 0.
worker: The number of workers per configured host publishing events to Elasticsearch, which provides load balancing. A sketch combining these options is shown below.
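The snippet below is only a sketch showing where these options sit in the output.elasticsearch section; the values are examples, not recommendations, and should be tuned for your environment:

output.elasticsearch:
  hosts: ["localhost:9200"]
  bulk_max_size: 50    # split events into batches of this size
  timeout: 90          # seconds to wait for an HTTP response
  flush_interval: 1    # seconds to wait for new events between bulk requests (example value)
  max_retries: 3       # drop events after this many failed attempts
  worker: 2            # workers per host, for load balancing (example value)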
Sample Filebeat Configuration file:
Sample filebeat.yml file to Integrate Filebeat with Elasticsearch
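As a rough sketch, a minimal filebeat.yml combining the pieces above could look like the following; the log path, index name, and credentials are placeholders, and the exact layout (for example, the prospector section) depends on your Filebeat version:

filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/app1/*.log

output.elasticsearch:
  enabled: true
  hosts: ["localhost:9200"]
  username: "userid"
  password: "pwd"
  index: "app1-logs-%{+yyyy.MM.dd}"
  template.name: "app1"
  template.path: "app1.template.json"
  template.overwrite: false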
Integration
Complete Integration Example Filebeat, Kafka, Logstash, Elasticsearch and Kibana
Read More
To read more on Filebeat topics, sample configuration files, and integration with other systems with examples, follow the links Filebeat Tutorial and Filebeat Issues. To learn more about YAML, follow the link YAML Tutorials.
Leave your feedback to enhance this topic further and make it more helpful for others.