Filebeat.yml File with Prospectors, Multiline, Elasticsearch Output and Logging Configuration
You can copy this sample into your filebeat.yml and run it after making the changes below for your environment's directory structure, then follow the steps in Filebeat Download, Installation and Start/Run. A quick way to validate your edits is shown just after this list.
- Update the prospectors section with your log files' directory and file names.
- Configure the multiline pattern to match your log format; the pattern below is generic and should work with most common log-level formats.
- Update the Elasticsearch output section with your host, port and other settings if required.
- Update the logging directory to match your machine's directory structure.
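Before starting Filebeat, it is worth checking that the edited YAML parses cleanly. A minimal sketch, assuming a Filebeat 5.x binary in the installation directory (the -configtest flag shown here was replaced by the test config subcommand in later releases):

# Validate filebeat.yml syntax without shipping any events (Filebeat 5.x)
./filebeat -configtest -c filebeat.yml

If the configuration is invalid, Filebeat reports the error and exits instead of starting the shipper.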
Sample filebeat.yml file
#============= Filebeat Prospectors ===============
filebeat.prospectors:
# Multiple prospectors, shipping methods and rules can be defined here as per
# requirement. To read logs from multiple files in the same directory, glob
# patterns can be used in the paths below.
# Filebeat supports only two input types: log and stdin.
############## input type log configuration #####################
- input_type: log
  # Paths of the files the logs will be read from; use glob patterns to read
  # from multiple files.
  paths:
    - /opt/app/app1/logs/app1-debug*.log*
  # Set fields_under_root to true to store custom fields as top-level fields
  # in the output JSON.
  fields_under_root: true
  ### Multiline configuration for handling stack traces, objects, XML etc.
  # When enabled, matching groups of lines are shipped as one multiline event.
  # The regexp pattern that has to be matched. The example pattern matches all
  # lines starting with a bracketed log level such as [DEBUG, [ALERT, [TRACE
  # or [WARNING, and can be customized for your log line format.
  #multiline.pattern: '^\[([Aa]lert|ALERT|[Tt]race|TRACE|[Dd]ebug|DEBUG|[Nn]otice|NOTICE|[Ii]nfo|INFO|[Ww]arn?(?:ing)?|WARN?(?:ING)?|[Ee]rr?(?:or)?|ERR?(?:OR)?|[Cc]rit?(?:ical)?|CRIT?(?:ICAL)?|[Ff]atal|FATAL|[Ss]evere|SEVERE|EMERG(?:ENCY)?|[Ee]merg(?:ency)?)'
  # Default is false. Defines whether the pattern match should be negated.
  #multiline.negate: true
  # Defines where lines that do not match the pattern are appended. Possible
  # values are "after" or "before".
  #multiline.match: after
  # Maximum number of lines combined into one multiline event; lines beyond
  # this limit are ignored.
  #multiline.max_lines: 50

#========== Elasticsearch Output Configuration ==========
output.elasticsearch:
  # This flag enables or disables the output module.
  #enabled: true
  # Elasticsearch HTTP client server host and port. The default port for
  # Elasticsearch is 9200.
  hosts: ["elasticsearver:9200"]
  # Filebeat provides a gzip compression level that varies from 1 to 9. As the
  # compression level increases, processing speed decreases but less network
  # bandwidth is used. By default compression is disabled (value 0).
  compression_level: 0
  # Optional protocol, HTTP by default. Set https and basic auth credentials
  # if required.
  #protocol: "https"
  #username: "userid"
  #password: "pwd"
  # Number of workers per configured host publishing events to Elasticsearch,
  # used for load balancing.
  #worker: 1
  # Optional index name. The default is "filebeat" plus date, which generates
  # filebeat-{YYYY.MM.DD} keys.
  index: "app1-%{+yyyy.MM.dd}"
  # Optional ingest node pipeline. By default no pipeline is used.
  #pipeline: ""
  # Optional HTTP path.
  #path: "/elasticsearch"
  # Proxy server URL.
  #proxy_url: http://proxy:3128
  # Default is 3. When the retry limit is reached, events that could not be
  # published are dropped. Set a value less than 0 to retry until all events
  # are published.
  #max_retries: 3
  # Default is 50. If Filebeat generates more events than the configured batch
  # size, it splits them into batches of this size. Increasing the batch size
  # can improve performance but requires more buffering and can cause
  # connection errors or request timeouts.
  #bulk_max_size: 50
  # Default is 90 seconds. The HTTP request times out if there is no response.
  #timeout: 90
  # Waiting time for new events for bulk requests. If a bulk request reaches
  # max size before this time, a new bulk index request is created.
  #flush_interval: 1s
  # Filebeat can load the Elasticsearch index template, which defines the
  # settings and mappings that determine field analysis.
  # Set to false to disable template loading.
  #template.enabled: true
  # Template name. By default the template name is filebeat.
  #template.name: "app1"
  # Path to the template file.
  #template.path: "${path.config}/app1.template.json"
  # Set template.overwrite to true and point to the 2.x template file below if
  # the template needs to be updated for Elasticsearch 2.x.
  #template.overwrite: false
  #template.versions.2x.enabled: true
  #template.versions.2x.path: "${path.config}/filebeat.template-es2x.json"
  # Optional SSL configuration; SSL is off by default. It is required for
  # server verification when connecting over HTTPS.
  #ssl.enabled: true
  # List of root certificates for HTTPS server verification.
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
  # Default is full. Verification mode used when SSL is configured. The value
  # 'none' can be used for testing, but in that mode any certificate is
  # accepted.
  #ssl.verification_mode: full
  # List of supported/valid TLS versions. By default all TLS versions from 1.0
  # up to 1.2 are enabled.
  #ssl.supported_protocols: [TLSv1.0, TLSv1.1, TLSv1.2]
  # Path to the certificate for SSL client authentication.
  #ssl.certificate: "/etc/pki/client/cert.pem"
  # Path to the client certificate key.
  #ssl.key: "/etc/pki/client/cert.key"
  # Passphrase for decrypting the certificate key if the key is encrypted;
  # otherwise optional.
  #ssl.key_passphrase: ''
  # Encryption cipher suites to be used for SSL connections.
  #ssl.cipher_suites: []
  # Encryption curve types for ECDHE-based cipher suites.
  #ssl.curve_types: []

#==================== Logging ==============================
# Available log levels are: critical, error, warning, info, debug. The default
# is info; messages at the configured level and above are recorded.
logging.level: debug
# Possible selector values are "beat", "publish" and "service"; use "*" to
# enable all. Selectors can also be set on the command line when starting
# Filebeat.
logging.selectors: ["*"]
# Default is false. If set to true, output is sent to syslog.
logging.to_syslog: false
# Default is true. All non-zero metric readings are logged on shutdown.
logging.metrics.enabled: true
# Period at which internal metrics (such as counts of lines read from log
# files) are logged; a complete report is emitted when Filebeat shuts down.
logging.metrics.period: 30s
# Set this flag to true to enable logging to files; if not set, file logging
# is disabled.
logging.to_files: true
logging.files:
  # Directory where log files are written; if not set, the default is the home
  # directory.
  path: /tmp
  # Name of the file the logs are written to.
  name: filebeat-app.log
  # The log file is rotated when it reaches this size and a new file is
  # created. The default value is 10MB.
  rotateeverybytes: 10485760 # = 10MB
  # Maximum number of recent log files to keep for rotation; the oldest files
  # are removed.
  keepfiles: 7
  # Logging level for the file output only. Available log levels are:
  # critical, error, warning, info, debug.
  level: debug
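To see what the commented multiline options do once enabled, consider a hypothetical excerpt from app1-debug.log containing a Java stack trace (the log lines and class names here are made up for illustration). With multiline.negate: true and multiline.match: after, any line that does not start with a bracketed log level is appended to the nearest preceding line that does, so all three lines below are shipped to Elasticsearch as a single event instead of three separate ones:

[ERROR] 2017-05-28 10:15:32 OrderService - Payment lookup failed
java.lang.NullPointerException
    at com.app1.OrderService.lookup(OrderService.java:42)

Once the changes are in place, Filebeat can be started in the foreground with console logging, and the generated index can be checked against the host and index prefix used in the sample above (elasticsearver and app1):

# Start Filebeat in the foreground, logging to the console
./filebeat -e -c filebeat.yml
# Confirm documents are arriving in the daily app1-* index
curl 'http://elasticsearver:9200/app1-*/_search?pretty&size=1'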
Read More on Filebeat
- Filebeat Overview
- Filebeat Download, Installation and Start/Run
- Filebeat Prospectors Configuration Changes for Reading Log Files
- Sample filebeat.yml file for Prospectors Configuration
- Filebeat Multiline Configuration Changes for Object, StackTrace and XML
- Filebeat, Logging Configuration
- Filebeat, Elasticsearch Output Configuration
- Filebeat, Logstash Output Configuration
- Filebeat, Kafka Output Configuration
- Filebeat, Commandline Arguments Configuration
To know more about YAML, follow the link YAML Tutorials.
Sample filebeat.yml File
- Sample filebeat.yml file for Prospectors Configuration
- Sample filebeat.yml file with Multiline Configuration
- Sample filebeat.yml file with Logging Configuration
- Sample filebeat.yml file with Logstash Output Configuration
- Sample filebeat.yml file with Kafka Output Configuration
Integration
Integrate Filebeat, Kafka, Logstash, Elasticsearch and Kibana