
Spring Boot: Data Configuration Properties and Default Values


These are Spring Boot Data properties that can be configured with any Spring Boot application. Spring Boot already configures these properties with the default values shown below.

Note: You don’t need to add all these values to your application.properties/application.yaml file. You only need to add the values you want to change or override.
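
For example, if you only want SQL statement logging and the H2 console (both disabled by default in the tables below), your application.properties would contain just those two overrides (a minimal illustration, not a recommended setup):

```properties
# Only the values that differ from the defaults need to be listed.
spring.jpa.show-sql=true
spring.h2.console.enabled=true
```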

DAO Configuration Properties

Spring Boot loads these properties in the PersistenceExceptionTranslationAutoConfiguration class.

Name Default Value Description
spring.dao.exceptiontranslation.enabled true Enable the PersistenceExceptionTranslationPostProcessor.

DataSource Configuration Properties

Spring Boot loads these properties in the DataSourceAutoConfiguration and DataSourceProperties classes.

Name Default Value Description
spring.datasource.continue-on-error false Do not stop if an error occurs while initializing the database.
spring.datasource.data Data (DML) script resource reference.
spring.datasource.data-username Database username to execute DML scripts (if different).
spring.datasource.data-password Database password to execute DML scripts (if different).
spring.datasource.dbcp.* Commons DBCP specific settings
spring.datasource.dbcp2.* Commons DBCP2 specific settings
spring.datasource.driver-class-name Fully qualified name of the JDBC driver. Auto-detected from the URL by default if not set.
spring.datasource.generate-unique-name false Generate a random datasource name.
spring.datasource.hikari.* Hikari specific settings
spring.datasource.initialize true Populate the database using ‘data.sql’.
spring.datasource.jmx-enabled false Enable JMX support (if underlying pool provided).
spring.datasource.jndi-name JNDI location of the datasource. Class, url, username and password are ignored when set.
spring.datasource.name testdb Name of the datasource.
spring.datasource.password database login password.
spring.datasource.platform all Platform to use in the schema resource (schema-${platform}.sql).
spring.datasource.schema Schema (DDL) script resource reference.
spring.datasource.schema-username database user to execute DDL scripts (if different).
spring.datasource.schema-password database password to execute DDL scripts (if different).
spring.datasource.separator ; Statement separator in SQL initialization scripts.
spring.datasource.sql-script-encoding SQL scripts encoding.
spring.datasource.tomcat.* Tomcat datasource specific settings
spring.datasource.type Fully qualified name of the connection pool implementation to use. Detected from the classpath by default if not set.
spring.datasource.url JDBC database url.
spring.datasource.username database user.
spring.datasource.xa.data-source-class-name Fully qualified name of the XA data source.
spring.datasource.xa.properties Properties to pass to the XA data source.
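
As a sketch of overriding the datasource defaults above, assuming a MySQL database (the URL, schema and credentials are placeholder values for illustration):

```properties
# Hypothetical MySQL datasource; replace host, schema and credentials with your own.
spring.datasource.url=jdbc:mysql://localhost:3306/sampledb
spring.datasource.username=dbuser
spring.datasource.password=dbpass
# Optional: usually detected from the URL automatically.
spring.datasource.driver-class-name=com.mysql.jdbc.Driver
```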

JPA Configuration Properties

Spring Boot loads these properties in the JpaBaseConfiguration and HibernateJpaAutoConfiguration classes.

Name Default Value Description
spring.data.jpa.repositories.enabled true Enable JPA repositories.
spring.jpa.database Target database to operate on. Auto-detected by default, or can alternatively be set using the “databasePlatform” property.
spring.jpa.database-platform Name of the target database to operate on. Auto-detected by default, or can alternatively be set using the “Database” enum.
spring.jpa.generate-ddl false Initialize the schema on startup.
spring.jpa.hibernate.ddl-auto DDL mode for the “hibernate.hbm2ddl.auto” property. Defaults to “create-drop” when using an embedded database, “none” otherwise.
spring.jpa.hibernate.naming.implicit-strategy Fully qualified name of the Hibernate 5 implicit naming strategy.
spring.jpa.hibernate.naming.physical-strategy Fully qualified name of the Hibernate 5 physical naming strategy.
spring.jpa.hibernate.naming.strategy Fully qualified name of the Hibernate 4 naming strategy. Not supported with Hibernate 5.
spring.jpa.hibernate.use-new-id-generator-mappings Use Hibernate’s IdentifierGenerator for AUTO, TABLE and SEQUENCE.
spring.jpa.open-in-view true Register OpenEntityManagerInViewInterceptor. Binds a JPA EntityManager to the thread for the entire processing of the request.
spring.jpa.properties.* Additional native properties set on JPA provider.
spring.jpa.show-sql false Enable SQL statements logging.
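
A typical set of JPA overrides might look like the sketch below; hibernate.format_sql is a native Hibernate property passed through via spring.jpa.properties.*:

```properties
spring.jpa.hibernate.ddl-auto=update
spring.jpa.show-sql=true
# Native Hibernate property, passed through to the provider as-is.
spring.jpa.properties.hibernate.format_sql=true
```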

JTA Configuration Properties

Spring Boot loads these properties in the JtaAutoConfiguration class.

Name Default Value Description
spring.jta.enabled true Enable JTA support.
spring.jta.log-dir Transaction logs directory.
spring.jta.transaction-manager-id Transaction manager unique identifier.

Data REST Configuration Properties

Spring Boot loads these properties in the RepositoryRestProperties class.

Name Default Value Description
spring.data.rest.base-path Base path for Spring Data REST to expose repository resources under.
spring.data.rest.default-page-size Default page size.
spring.data.rest.enable-enum-translation Enable enum value translation through the Spring Data REST default resource bundle.
spring.data.rest.limit-param-name Name of the URL query string parameter that indicates how many results to return at once.
spring.data.rest.max-page-size Maximum size of pages.
spring.data.rest.page-param-name Name of the URL query string parameter that indicates what page to return.
spring.data.rest.return-body-on-create Return a response body after creating an entity.
spring.data.rest.return-body-on-update Return a response body after updating an entity.
spring.data.rest.sort-param-name Name of the URL query string parameter that indicates what direction to sort results.

H2 Web Console Configuration Properties

Spring Boot loads these properties in the H2ConsoleProperties class.

Name Default Value Description
spring.h2.console.enabled false Enable the console.
spring.h2.console.path /h2-console Path at which the console will be available.
spring.h2.console.settings.trace false Enable trace output.
spring.h2.console.settings.web-allow-others false Enable remote access.
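
For instance, to turn the console on and serve it at a custom path (the path here is just an example):

```properties
spring.h2.console.enabled=true
spring.h2.console.path=/h2
```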

Data Redis Configuration Properties

Spring Boot loads this property in the RedisRepositoriesAutoConfiguration class.

Name Default Value Description
spring.data.redis.repositories.enabled true Enable Redis repositories.

Redis Configuration Properties

Spring Boot loads these properties in the RedisProperties class.

Name Default Value Description
spring.redis.cluster.max-redirects Maximum number of redirects to follow when executing commands across the cluster.
spring.redis.cluster.nodes Comma-separated list of “host:port” pairs to bootstrap from.
spring.redis.database 0 Database index used by the connection factory.
spring.redis.host localhost Redis server host.
spring.redis.password Login password of the redis server.
spring.redis.pool.max-active 8 Max number of connections that can be allocated by the pool at a given time. Use a negative value for no limit.
spring.redis.pool.max-idle 8 Max number of “idle” connections in the pool. Use a negative value to indicate an unlimited number of idle connections.
spring.redis.pool.max-wait -1 Maximum amount of time a connection allocation should block before throwing an exception when the pool is exhausted. Use -1 to block indefinitely.
spring.redis.pool.min-idle 0 Minimum number of idle connections in the pool. Only takes effect when positive.
spring.redis.port 6379 Redis server port.
spring.redis.sentinel.master Name of Redis server.
spring.redis.sentinel.nodes Comma-separated host:port pairs.
spring.redis.timeout 0 Connection timeout (milliseconds).
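
A minimal Redis override might look like this sketch (the host and password are placeholders):

```properties
spring.redis.host=redis.example.com
spring.redis.port=6379
spring.redis.password=secret
spring.redis.pool.max-active=16
```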

Flyway Configuration Properties

Spring Boot loads these properties in the FlywayProperties class.

Name Default Value Description
flyway.baseline-description
flyway.baseline-version 1 Version to start migration with.
flyway.baseline-on-migrate
flyway.check-location false Check that migration scripts location exists.
flyway.clean-on-validation-error
flyway.enabled true Enable flyway.
flyway.encoding
flyway.ignore-failed-future-migration
flyway.init-sqls SQL statements to execute to initialize a connection immediately after obtaining it.
flyway.locations classpath:db/migration Locations of migration scripts.
flyway.out-of-order
flyway.password JDBC password to use if you want Flyway to create its own DataSource.
flyway.placeholder-prefix
flyway.placeholder-replacement
flyway.placeholder-suffix
flyway.placeholders.*
flyway.schemas Schemas to update.
flyway.sql-migration-prefix V
flyway.sql-migration-separator
flyway.sql-migration-suffix .sql
flyway.table
flyway.url JDBC url of the database to migrate. If not set, the primary configured data source is used.
flyway.user Login user of the database to migrate.
flyway.validate-on-migrate
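
As a sketch, enabling Flyway against the primary data source with the default script location:

```properties
flyway.enabled=true
flyway.locations=classpath:db/migration
# Useful when introducing Flyway to an existing schema.
flyway.baseline-on-migrate=true
```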

Liquibase Configuration Properties

Spring Boot loads these properties in the LiquibaseProperties class.

Name Default Value Description
liquibase.change-log classpath:/db/changelog/db.changelog-master.yaml Change log configuration path.
liquibase.check-change-log-location true Check the change log location exists.
liquibase.contexts Comma-separated list of runtime contexts to use.
liquibase.default-schema Default database schema.
liquibase.drop-first false Drop the database schema first.
liquibase.enabled true Enable liquibase support.
liquibase.labels Comma-separated list of runtime labels to use.
liquibase.parameters.* Change log parameters.
liquibase.password Login password of the database to migrate.
liquibase.rollback-file File to which rollback SQL statements will be written when an update is performed.
liquibase.url JDBC url of the database to migrate. If not set, the primary configured data source is used.
liquibase.user Login user of the database to migrate.
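
For example, pointing Liquibase at the default change log and a specific runtime context (the context name is illustrative):

```properties
liquibase.change-log=classpath:/db/changelog/db.changelog-master.yaml
liquibase.contexts=dev
```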

Couchbase Configuration Properties

Spring Boot loads these properties in the CouchbaseProperties class.

Name Default Value Description
spring.couchbase.bootstrap-hosts Couchbase nodes (host or IP address) to bootstrap from.
spring.couchbase.bucket.name default Name of the bucket to connect to.
spring.couchbase.bucket.password Password of the bucket.
spring.couchbase.env.endpoints.key-value 1 Number of sockets per node for each Key/value service.
spring.couchbase.env.endpoints.query 1 Number of sockets per node for each Query (N1QL) service.
spring.couchbase.env.endpoints.view 1 Number of sockets per node for each view service.
spring.couchbase.env.ssl.enabled Enable SSL support. Enabled automatically if a “keyStore” is provided, unless specified otherwise.
spring.couchbase.env.ssl.key-store Path to JVM key store which holds the certificates.
spring.couchbase.env.ssl.key-store-password Password used to access the key store.
spring.couchbase.env.timeouts.connect 5000 Bucket connections timeout (in milliseconds).
spring.couchbase.env.timeouts.key-value 2500 Blocking operations performed on a specific key timeout (in milliseconds).
spring.couchbase.env.timeouts.query 7500 N1QL query operations timeout (in milliseconds).
spring.couchbase.env.timeouts.socket-connect 1000 Socket connect connections timeout (in milliseconds).
spring.couchbase.env.timeouts.view 7500 Regular and geospatial view operations timeout (in milliseconds).

Cassandra Configuration Properties

Spring Boot loads these properties in the CassandraProperties class.

Name Default Value Description
spring.data.cassandra.cluster-name Name of the Cassandra cluster.
spring.data.cassandra.compression Compression supported by the Cassandra binary protocol.
spring.data.cassandra.connect-timeout-millis Socket option: connection timeout.
spring.data.cassandra.consistency-level Queries consistency level.
spring.data.cassandra.contact-points localhost Comma-separated cluster node addresses.
spring.data.cassandra.fetch-size Queries default fetch size.
spring.data.cassandra.keyspace-name Keyspace name to use.
spring.data.cassandra.load-balancing-policy Class name of the load balancing policy.
spring.data.cassandra.port Port of the Cassandra server.
spring.data.cassandra.password Login password of the server.
spring.data.cassandra.read-timeout-millis Socket option: read timeout.
spring.data.cassandra.reconnection-policy Reconnection policy class.
spring.data.cassandra.repositories.enabled Enable Cassandra repositories.
spring.data.cassandra.retry-policy Class name of the retry policy.
spring.data.cassandra.serial-consistency-level Queries serial consistency level.
spring.data.cassandra.schema-action none Schema action to take at startup.
spring.data.cassandra.ssl false Enable SSL support.
spring.data.cassandra.username Login user of the server.

Data Couchbase Configuration Properties

Spring Boot loads these properties in the CouchbaseDataProperties class.

Name Default Value Description
spring.data.couchbase.auto-index false Create views and indexes automatically.
spring.data.couchbase.consistency read-your-own-writes Default consistency to apply to generated queries.
spring.data.couchbase.repositories.enabled true Enable Couchbase repositories.

SOLR Configuration Properties

Spring Boot loads these properties in the SolrProperties class.

Name Default Value Description
spring.data.solr.host http://127.0.0.1:8983/solr Solr host. Ignored if “zk-host” is set.
spring.data.solr.repositories.enabled true Enable Solr repositories.
spring.data.solr.zk-host ZooKeeper host address, e.g. HOST:PORT.

ElasticSearch Configuration Properties

Spring Boot loads these properties in the ElasticsearchProperties class.

Name Default Value Description
spring.data.elasticsearch.cluster-name elasticsearch Elasticsearch cluster name.
spring.data.elasticsearch.cluster-nodes Comma-separated cluster node addresses. If not specified, starts a client node.
spring.data.elasticsearch.properties.* Additional properties used to configure the client.
spring.data.elasticsearch.repositories.enabled true Enable Elasticsearch repositories.
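
A sketch of connecting to an external cluster instead of starting a client node (the cluster name and node addresses are placeholders):

```properties
spring.data.elasticsearch.cluster-name=my-cluster
spring.data.elasticsearch.cluster-nodes=es-node1:9300,es-node2:9300
```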

JEST (Elasticsearch HTTP client) Configuration Properties

Spring Boot loads these properties in the JestProperties class.

Name Default Value Description
spring.elasticsearch.jest.connection-timeout 3000 Connection timeout in milliseconds.
spring.elasticsearch.jest.password Login password.
spring.elasticsearch.jest.proxy.host Proxy host the HTTP client should use.
spring.elasticsearch.jest.proxy.port Proxy port the HTTP client should use.
spring.elasticsearch.jest.read-timeout 3000 Read timeout (in milliseconds).
spring.elasticsearch.jest.uris http://localhost:9200 Comma-separated Elasticsearch instances to use.
spring.elasticsearch.jest.username Login user.

Embedded MongoDB Configuration Properties

Spring Boot loads these properties in the EmbeddedMongoProperties class.

Name Default Value Description
spring.mongodb.embedded.features SYNC_DELAY Comma-separated features to enable.
spring.mongodb.embedded.storage.database-dir Directory used for data storage.
spring.mongodb.embedded.storage.oplog-size Maximum size of the oplog in megabytes.
spring.mongodb.embedded.storage.repl-set-name Name of the replica set.
spring.mongodb.embedded.version 2.6.10 Version of Mongo to use.

MongoDB Configuration Properties

Spring Boot loads these properties in the MongoProperties class.

Name Default Value Description
spring.data.mongodb.authentication-database Authentication database name.
spring.data.mongodb.database test Database name.
spring.data.mongodb.field-naming-strategy Fully qualified name of the FieldNamingStrategy to use.
spring.data.mongodb.grid-fs-database GridFS database name.
spring.data.mongodb.host localhost Mongo server host.
spring.data.mongodb.password Login password of the mongo server.
spring.data.mongodb.port 27017 Mongo server port.
spring.data.mongodb.repositories.enabled true Enable Mongo repositories.
spring.data.mongodb.uri mongodb://localhost/test Mongo database URI. Host and port are ignored when it is set.
spring.data.mongodb.username Login user of the mongo server.
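
Since host and port are ignored once the URI is set, a single-line override is usually enough (the credentials, host and database name below are placeholders):

```properties
spring.data.mongodb.uri=mongodb://dbuser:dbpass@mongohost:27017/appdb
```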

Neo4j Configuration Properties

Spring Boot loads these properties in the Neo4jProperties class.

Name Default Value Description
spring.data.neo4j.compiler Compiler to use.
spring.data.neo4j.embedded.enabled true Enable embedded mode when embedded driver is available.
spring.data.neo4j.password Login password of the server.
spring.data.neo4j.repositories.enabled true Enable Neo4j repositories.
spring.data.neo4j.session.scope singleton Scope (lifetime) of the session.
spring.data.neo4j.uri URI used by the driver. Auto-detected by default.
spring.data.neo4j.username Login user of the server.

JOOQ Configuration Properties

Spring Boot loads these properties in the JooqAutoConfiguration class.

Name Default Value Description
spring.jooq.sql-dialect SQL dialect that JOOQ uses when communicating with the configured datasource, for example `POSTGRES`.

Atomikos Configuration Properties

Spring Boot loads these properties in the AtomikosProperties class.

Name Default Value Description
spring.jta.atomikos.connectionfactory.borrow-connection-timeout 30 Timeout for borrowing connections from the pool (in seconds).
spring.jta.atomikos.connectionfactory.ignore-session-transacted-flag true Whether to ignore the transacted flag when creating sessions.
spring.jta.atomikos.connectionfactory.local-transaction-mode false Whether local transactions are desired.
spring.jta.atomikos.connectionfactory.maintenance-interval 60 The time between runs of the pool’s maintenance thread (in seconds).
spring.jta.atomikos.connectionfactory.max-idle-time 60 The time after which connections are cleaned up from the pool (in seconds).
spring.jta.atomikos.connectionfactory.max-lifetime 0 The time that a connection can be pooled for before being destroyed (in seconds). 0 denotes no limit.
spring.jta.atomikos.connectionfactory.max-pool-size 1 The maximum pool size.
spring.jta.atomikos.connectionfactory.min-pool-size 1 The minimum pool size.
spring.jta.atomikos.connectionfactory.reap-timeout 0 The reap timeout for borrowed connections (in seconds). 0 denotes no limit.
spring.jta.atomikos.connectionfactory.unique-resource-name jmsConnectionFactory The unique name used to identify the resource during recovery.
spring.jta.atomikos.datasource.borrow-connection-timeout 30 Timeout for borrowing connections from the pool (in seconds).
spring.jta.atomikos.datasource.default-isolation-level Default isolation level of connections provided by the pool.
spring.jta.atomikos.datasource.login-timeout Timeout for establishing a database connection (in seconds).
spring.jta.atomikos.datasource.maintenance-interval 60 The time between runs of the pool’s maintenance thread (in seconds).
spring.jta.atomikos.datasource.max-idle-time 60 The time after which connections are cleaned up from the pool (in seconds).
spring.jta.atomikos.datasource.max-lifetime 0 The time that a connection can be pooled for before being destroyed (in seconds). 0 denotes no limit.
spring.jta.atomikos.datasource.max-pool-size 1 The maximum pool size.
spring.jta.atomikos.datasource.min-pool-size 1 The minimum pool size.
spring.jta.atomikos.datasource.reap-timeout 0 The reap timeout for borrowed connections (in seconds). 0 denotes no limit.
spring.jta.atomikos.datasource.test-query SQL query or statement used to validate a connection before returning it.
spring.jta.atomikos.datasource.unique-resource-name dataSource The unique name used to identify the resource during recovery.
spring.jta.atomikos.properties.checkpoint-interval 500 Interval between checkpoints.
spring.jta.atomikos.properties.default-jta-timeout 10000 Default timeout for JTA transactions.
spring.jta.atomikos.properties.enable-logging true Enable disk logging.
spring.jta.atomikos.properties.force-shutdown-on-vm-exit false Specify if a VM shutdown should trigger forced shutdown of the transaction core.
spring.jta.atomikos.properties.log-base-dir Directory in which the log files should be stored.
spring.jta.atomikos.properties.log-base-name tmlog Transactions log file base name.
spring.jta.atomikos.properties.max-actives 50 Maximum active transactions.
spring.jta.atomikos.properties.max-timeout 300000 Maximum timeout that can be allowed for transactions. (in milliseconds)
spring.jta.atomikos.properties.serial-jta-transactions true Specify if sub-transactions should be joined when possible.
spring.jta.atomikos.properties.service Transaction manager implementation that should be started.
spring.jta.atomikos.properties.threaded-two-phase-commit true Use different (and concurrent) threads for two-phase commit on the resources.
spring.jta.atomikos.properties.transaction-manager-unique-name Transaction manager’s unique name.

Bitronix Configuration Properties

 

Name Default Value Description
spring.jta.bitronix.connectionfactory.acquire-increment 1 Number of connections to create when growing the pool.
spring.jta.bitronix.connectionfactory.acquisition-interval 1 Time to wait before trying to acquire a connection again after an invalid connection was acquired (in seconds).
spring.jta.bitronix.connectionfactory.acquisition-timeout 30 Timeout for acquiring connections from the pool (in seconds).
spring.jta.bitronix.connectionfactory.allow-local-transactions true Whether the transaction manager should allow mixing XA and non-XA transactions.
spring.jta.bitronix.connectionfactory.apply-transaction-timeout false Whether the transaction timeout should be set on the XAResource when it is enlisted.
spring.jta.bitronix.connectionfactory.automatic-enlisting-enabled true Whether resources should be enlisted and delisted automatically.
spring.jta.bitronix.connectionfactory.cache-producers-consumers true Whether producers and consumers should be cached.
spring.jta.bitronix.connectionfactory.defer-connection-release true Whether the provider can run many transactions on the same connection and supports transaction interleaving.
spring.jta.bitronix.connectionfactory.ignore-recovery-failures false Whether recovery failures should be ignored.
spring.jta.bitronix.connectionfactory.max-idle-time 60 The time after which connections are cleaned up from the pool (in seconds).
spring.jta.bitronix.connectionfactory.max-pool-size 10 The maximum pool size. 0 denotes no limit.
spring.jta.bitronix.connectionfactory.min-pool-size 0 The minimum pool size.
spring.jta.bitronix.connectionfactory.password The password to use to connect to the JMS provider.
spring.jta.bitronix.connectionfactory.share-transaction-connections false Whether connections in the ACCESSIBLE state can be shared within the context of a transaction.
spring.jta.bitronix.connectionfactory.test-connections true Whether connections should be tested when acquired from the pool.
spring.jta.bitronix.connectionfactory.two-pc-ordering-position 1 The position that this resource should take during two-phase commit (always first is Integer.MIN_VALUE, always last is Integer.MAX_VALUE).
spring.jta.bitronix.connectionfactory.unique-name jmsConnectionFactory The unique name used to identify the resource during recovery.
spring.jta.bitronix.connectionfactory.use-tm-join true Whether TMJOIN should be used when starting XAResources.
spring.jta.bitronix.connectionfactory.user The user to use to connect to the JMS provider.
spring.jta.bitronix.datasource.acquire-increment 1 Number of connections to create when growing the pool.
spring.jta.bitronix.datasource.acquisition-interval 1 Time to wait before trying to acquire a connection again after an invalid connection was acquired (in seconds).
spring.jta.bitronix.datasource.acquisition-timeout 30 Timeout for acquiring connections from the pool (in seconds).
spring.jta.bitronix.datasource.allow-local-transactions true Whether the transaction manager should allow mixing XA and non-XA transactions.
spring.jta.bitronix.datasource.apply-transaction-timeout false Whether the transaction timeout should be set on the XAResource when it is enlisted.
spring.jta.bitronix.datasource.automatic-enlisting-enabled true Whether resources should be enlisted and delisted automatically.
spring.jta.bitronix.datasource.cursor-holdability The default cursor holdability for connections.
spring.jta.bitronix.datasource.defer-connection-release true Whether the database can run many transactions on the same connection and supports transaction interleaving.
spring.jta.bitronix.datasource.enable-jdbc4-connection-test Whether Connection.isValid() is called when acquiring a connection from the pool.
spring.jta.bitronix.datasource.ignore-recovery-failures false Whether recovery failures should be ignored.
spring.jta.bitronix.datasource.isolation-level The default isolation level for connections.
spring.jta.bitronix.datasource.local-auto-commit The default auto-commit mode for local transactions.
spring.jta.bitronix.datasource.login-timeout Timeout for establishing a database connection (in seconds).
spring.jta.bitronix.datasource.max-idle-time 60 The time after which connections are cleaned up from the pool (in seconds).
spring.jta.bitronix.datasource.max-pool-size 10 The maximum pool size. 0 denotes no limit.
spring.jta.bitronix.datasource.min-pool-size 0 The minimum pool size.
spring.jta.bitronix.datasource.prepared-statement-cache-size 0 The target size of the prepared statement cache. 0 disables the cache.
spring.jta.bitronix.datasource.share-transaction-connections false Whether connections in the ACCESSIBLE state can be shared within the context of a transaction.
spring.jta.bitronix.datasource.test-query SQL query or statement used to validate a connection before returning it.
spring.jta.bitronix.datasource.two-pc-ordering-position 1 The position that this resource should take during two-phase commit (always first is Integer.MIN_VALUE, always last is Integer.MAX_VALUE).
spring.jta.bitronix.datasource.unique-name dataSource The unique name used to identify the resource during recovery.
spring.jta.bitronix.datasource.use-tm-join true Whether TMJOIN should be used when starting XAResources.
spring.jta.bitronix.properties.allow-multiple-lrc false Allow multiple LRC resources to be enlisted into the same transaction.
spring.jta.bitronix.properties.asynchronous2-pc false Enable asynchronous execution of two-phase commit.
spring.jta.bitronix.properties.background-recovery-interval-seconds 60 Interval at which to run the recovery process in the background (in seconds).
spring.jta.bitronix.properties.current-node-only-recovery true Recover only the current node.
spring.jta.bitronix.properties.debug-zero-resource-transaction false Log the creation and commit call stacks of transactions executed without a single enlisted resource.
spring.jta.bitronix.properties.default-transaction-timeout 60 Default transaction timeout (in seconds).
spring.jta.bitronix.properties.disable-jmx false Enable JMX support.
spring.jta.bitronix.properties.exception-analyzer Set the fully qualified name of the exception analyzer implementation to use.
spring.jta.bitronix.properties.filter-log-status false Enable filtering of logs so that only mandatory logs are written.
spring.jta.bitronix.properties.force-batching-enabled true Set if disk forces are batched.
spring.jta.bitronix.properties.forced-write-enabled true Set if logs are forced to disk.
spring.jta.bitronix.properties.graceful-shutdown-interval 60 Maximum amount of seconds the TM will wait for transactions to get done before aborting them at shutdown time.
spring.jta.bitronix.properties.jndi-transaction-synchronization-registry-name JNDI name of the TransactionSynchronizationRegistry.
spring.jta.bitronix.properties.jndi-user-transaction-name JNDI name of the UserTransaction.
spring.jta.bitronix.properties.journal disk Name of the journal. Can be ‘disk’, ‘null’ or a class name.
spring.jta.bitronix.properties.log-part1-filename btm1.tlog Name of the first fragment of the journal.
spring.jta.bitronix.properties.log-part2-filename btm2.tlog Name of the second fragment of the journal.
spring.jta.bitronix.properties.max-log-size-in-mb 2 Maximum size in megabytes of the journal fragments.
spring.jta.bitronix.properties.resource-configuration-filename ResourceLoader configuration file name.
spring.jta.bitronix.properties.server-id ASCII ID that must uniquely identify this TM instance. Defaults to the machine’s IP address.
spring.jta.bitronix.properties.skip-corrupted-logs false Skip corrupted transactions log entries.
spring.jta.bitronix.properties.warn-about-zero-resource-transaction true Log a warning for transactions executed without a single enlisted resource.

 

Narayana Configuration Properties

Spring Boot loads these properties in the NarayanaProperties class.

Name Default Value Description
spring.jta.narayana.default-timeout 60 Transaction timeout (in seconds).
spring.jta.narayana.expiry-scanners com.arjuna.ats.internal.arjuna.recovery.ExpiredTransactionStatusManagerScanner Comma-separated list of expiry scanners.
spring.jta.narayana.log-dir Transaction object store directory.
spring.jta.narayana.one-phase-commit true Enable one phase commit optimisation.
spring.jta.narayana.periodic-recovery-period 120 Interval in which periodic recovery scans are performed (in seconds).
spring.jta.narayana.recovery-backoff-period 10 Back off period between first and second phases of the recovery scan (in seconds).
spring.jta.narayana.recovery-db-pass Database password for recovery manager.
spring.jta.narayana.recovery-db-user Database username for recovery manager.
spring.jta.narayana.recovery-jms-pass JMS password for recovery manager.
spring.jta.narayana.recovery-jms-user JMS username for recovery manager.
spring.jta.narayana.recovery-modules Comma-separated recovery modules.
spring.jta.narayana.transaction-manager-id 1 Unique transaction manager id.
spring.jta.narayana.xa-resource-orphan-filters Comma-separated orphan filters.

References

https://docs.spring.io/spring-boot/docs/1.4.x/reference/html/common-application-properties.html

Logstash Connection Refused by Elasticsearch Over Proxy or AIC


We faced a connection refused issue when trying to send Logstash output data to Elasticsearch over HTTP; this happens because of proxy configuration, or when Elasticsearch is in a cloud environment.

Generally we see an exception like the one below:

[2017-04-24T10:45:32,933][WARN ][logstash.outputs.elasticsearch] UNEXPECTED POOL ERROR {:e=>#}
[2017-04-24T10:45:32,934][ERROR][logstash.outputs.elasticsearch] Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? {:error_message=>"No Available connections", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_second
Logstash allows a proxy declaration in the configuration file for the Elasticsearch output, in the proxy field shown below. If the userid or password contains any special symbol, you have to use its percent-encoded ASCII value. For example, if my password is music@123, then @ converts to %40 and the password becomes music%40123. Refer to this link ASCII CODE for the value corresponding to each character.

proxy => "http://userid:password@proxyhost:8080"

For example, if my userid and password are “smart” and “music@123”, the proxy configuration looks like:

proxy => "http://smart:music%40123@proxyhost:8080"
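
If you don’t want to look up the codes by hand, any URL-encoding routine gives the same result; for example, this quick Python check (not part of the Logstash config itself) encodes the password used above:

```python
from urllib.parse import quote

# Percent-encode every special character in the proxy password.
password = "music@123"
encoded = quote(password, safe="")
print(encoded)  # music%40123
```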

How to set Proxy in Logstash Configuration for Elasticsearch Output?

output {
    elasticsearch {
        index => "app1-logs-%{+YYYY.MM.dd}"
        proxy => "http://smart:music%40123@proxyhost:8080"
        hosts => ["elasticServerHost:elasticServerPort"]
    }
}

Issue Solutions

For solutions to more Logstash issues, follow the link Common Logstash Issues.

Sample filebeat.yml file for Prospectors, Elasticsearch Output and Logging Configuration


Filebeat.yml file with Prospectors, Multiline, Elasticsearch Output and Logging Configuration

You can copy this into your filebeat.yml and run it after making the changes below as per your environment's directory structure, following the steps mentioned for Filebeat Download, Installation and Start/Run.

  • Change the prospectors section for your log file directory and file name
  • Configure the multiline pattern as per your log format; it is currently set to a generic pattern that should hopefully work with most formats
  • Change the Elasticsearch output section for host, port and other settings if required
  • Change the logging directory as per your machine's directory structure.

Sample filebeat.yml file

#=============Filebeat prospectors ===============

filebeat.prospectors:

# Here we can define multiple prospectors and shipping method and rules  as per #requirement and if need to read logs from multiple file from same patter directory #location can use regular pattern also.

# Filebeat supports only two input types: log and stdin

##############input type logs configuration#####################

- input_type: log

# Paths of the files the logs will be read from; use a regular expression if logs
# need to be read from multiple files
paths:
- /opt/app/app1/logs/app1-debug*.log*
# Set fields_under_root to true if you want custom fields stored at the root of the
# Filebeat JSON output.
fields_under_root: true

### Multiline configuration for handling stack traces, objects, XML etc. When
### multiline is enabled with the configuration below, such lines are shipped as one
### multiline event.

# The regexp pattern that has to be matched. The example pattern matches all lines
# starting with a [DEBUG, ALERT, TRACE, WARNING ... log level and can be customized
# for your log line format.
#multiline.pattern: '^\[([Aa]lert|ALERT|[Tt]race|TRACE|[Dd]ebug|DEBUG|[Nn]otice|NOTICE|[Ii]nfo|INFO|[Ww]arn?(?:ing)?|WARN?(?:ING)?|[Ee]rr?(?:or)?|ERR?(?:OR)?|[Cc]rit?(?:ical)?|CRIT?(?:ICAL)?|[Ff]atal|FATAL|[Ss]evere|SEVERE|EMERG(?:ENCY)?|[Ee]merg(?:ency)?)'

# Defines whether the pattern match should be negated or not. Default is false.
#multiline.negate: true

# multiline.match defines where lines that do not match the pattern above should be
# appended. Possible values are "after" or "before".

#multiline.match: after

# Maximum number of lines combined into one multiline event; lines beyond this
# limit are ignored.
#multiline.max_lines: 50

#==========Elasticsearch Output Configuration=======================

output.elasticsearch:
# Flag to enable or disable this output module.
#enabled: true

# Elasticsearch HTTP client server host and port. The default port for
# Elasticsearch is 9200.
hosts: ["elasticsearver:9200"]

# Filebeat provides gzip compression levels from 1 to 9. Higher compression uses
# more CPU but less network bandwidth. By default compression is disabled and the
# value is 0.
compression_level: 0

# Optional protocol, HTTP by default. If HTTPS is required, set it here along with
# basic auth credentials, if any.
#protocol: "https"
#username: "userid"
#password: "pwd"

# Number of workers per host publishing events to Elasticsearch; multiple workers
# provide load balancing.
#worker: 1

# Optional index name. The default is "filebeat" plus the date, generating
# filebeat-{YYYY.MM.DD} keys.
index: "app1-%{+yyyy.MM.dd}"

# Optional ingest node pipeline. By default no pipeline will be used.
#pipeline: ""

# Optional HTTP Path
#path: "/elasticsearch"

# Proxy server url
#proxy_url: http://proxy:3128

# Default value is 3. When the retry limit is reached and events are still not
# published, all remaining events are dropped. Filebeat can also retry until all
# events are published, by setting a value less than 0.
#max_retries: 3

# Default value is 50. If Filebeat generates more events than the configured batch
# size, it splits them into batches of that size and sends them to Elasticsearch.
# A larger batch size improves performance but requires more buffering, and can
# cause issues such as connection errors or request timeouts.
#bulk_max_size: 50

# Default value is 90 seconds. HTTP requests time out if there is no response.
#timeout: 90

# Waiting time for new events for bulk requests. If a max-size bulk request is sent
# before this time elapses, a new bulk index request is created.
#flush_interval: 1s

# We can update the Elasticsearch index template from Filebeat, defining the
# settings and mappings that determine field analysis.

# Set to false to disable template loading.
#template.enabled: true

# Template name. By default the template name is filebeat.
#template.name: "app1"

# Path to template file
#template.path: "${path.config}/app1.template.json"

# To update the template file to version 2.x, set template.overwrite to true and
# set the path of the latest template file with the configuration below.
#template.overwrite: false
#template.versions.2x.enabled: true
#template.versions.2x.path: "${path.config}/filebeat.template-es2x.json"

# Configure SSL settings if required
#ssl.enabled: true

# Optional SSL configuration; SSL is off by default. A list of root certificates
# is required for HTTPS server verification.
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

# Default value is full. The SSL verification mode is required if SSL is
# configured. The value 'none' can be used for testing, but in that mode any
# certificate is accepted.
#ssl.verification_mode: full

# List of supported/valid TLS versions. By default all TLS versions from 1.0 up to
# 1.2 are enabled; specific versions can also be listed in the array below.
#ssl.supported_protocols: [TLSv1.0, TLSv1.1, TLSv1.2]

# Define path for certificate for SSL
#ssl.certificate: "/etc/pki/client/cert.pem"

# Define path for Client Certificate Key
#ssl.key: "/etc/pki/client/cert.key"

# If data is shipped in encrypted form, add the passphrase for decrypting the
# client certificate key; otherwise this is optional.
#ssl.key_passphrase: ''

# Configure encryption cipher suites to be used for SSL connections
#ssl.cipher_suites: []

# Configure encryption curve types for ECDHE based cipher suites
#ssl.curve_types: []
#====================Logging ==============================

# The default log level is info. Setting a level also records everything above it
# in the hierarchy automatically. Available log levels are: critical, error,
# warning, info, debug

logging.level: debug
# Possible values for selectors are "beat", "publish" and "service"; to enable all,
# use "*". The selector can also be set on the command line when starting Filebeat.
logging.selectors: ["*"]

# The default value is false. If true, output is also sent to syslog.
logging.to_syslog: false
# The default is true. All non-zero metric readings are output on shutdown.
logging.metrics.enabled: true

# Period of metrics for log reading counts from log files; a complete report is
# sent when Filebeat shuts down.
logging.metrics.period: 30s
# Set this flag to true to enable logging to files; if not set, file logging is
# disabled.
logging.to_files: true
logging.files:
# Path of the directory where log files are written. If not set, the default is
# the home directory.
path: /tmp

# Name of the file logs are written to
name: filebeat-app.log
# The log file rotates when it reaches the max size and a new file is created.
# Default value is 10MB
rotateeverybytes: 10485760 # = 10MB

# Keeps this many recent log files in the directory for rotation and removes the
# oldest files.
keepfiles: 7
# Enables logging at this level. Available log levels are: critical, error,
# warning, info, debug
level: debug

Read More on Filebeat

To Know more about YAML follow link YAML Tutorials.

Sample filebeat.yml File

Integration

Integrate Filebeat, Kafka, Logstash, Elasticsearch and Kibana

Filebeat,Elasticsearch Output Configuration


To ship server log lines directly to Elasticsearch over HTTP with Filebeat, set the fields below in the Elasticsearch output section according to your Elasticsearch server configuration and follow these steps.

  1.  Uncomment the output.elasticsearch section in the filebeat.yml file
  2.  Set the host and port in the hosts line
  3.  Set the index name as you want. If not set, Filebeat will create the default index “filebeat-%{+yyyy.MM.dd}”.
output.elasticsearch:

   enabled: true
   hosts: ["localhost:9200"]
   index: "app1-logs-%{+yyyy.MM.dd}"
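The %{+yyyy.MM.dd} placeholder in the index name expands to the event date, producing one index per day. A quick Python sketch of that expansion (the date and format string are illustrative; Filebeat does this internally):

```python
from datetime import date

# Example event date; in practice Filebeat uses each event's timestamp.
today = date(2017, 4, 24)

# "%{+yyyy.MM.dd}" corresponds to a year.month.day date format.
index = "app1-logs-" + today.strftime("%Y.%m.%d")
print(index)  # app1-logs-2017.04.24
```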

Elasticsearch server credentials configuration, if any

  1.  Set the user name and password
  2.  Set the protocol to https if needed; the default protocol is http
    username: userid
    password: pwd

Elasticsearch Index Template Configuration: We can update the Elasticsearch index template from Filebeat, defining the settings and mappings that determine field analysis.

Auto Index Template Loading: The Filebeat package loads the default template filebeat.template.json to Elasticsearch if no template is configured, and will not overwrite an existing template.

Customize Index Template Loading: We can upload a user-defined template, and also update its version, by using the configuration below.

#(if set as false, the template needs to be uploaded manually)
template.enabled: true
#default value is filebeat
template.name: "app1"
#default value is filebeat.template.json
template.path: "app1.template.json"
#default value is false
template.overwrite: false

By default, template.overwrite is false, and an index template that already exists on Elasticsearch will not be overwritten. To overwrite the index template, set this flag to true in the filebeat.yml configuration file.

Latest Template Version Loading from Filebeat: To update the template file to version 2.x, set template.overwrite to true and set the path of the latest template file with the configuration below.


template.overwrite: true
template.versions.2x.enabled: true
template.versions.2x.path: "${path.config}/app1.template-es2x.json"

Manual Index Template Loading: To load the index template manually, please refer to Elasticsearch Index Template Management.
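A manual template upload is, at bottom, just an HTTP PUT to the _template endpoint. This sketch builds such a request with Python's standard library; the host name and template body here are placeholders, not values from any real cluster:

```python
import urllib.request


def template_put_request(host, name, body):
    """Build a PUT request that uploads an index template to Elasticsearch."""
    return urllib.request.Request(
        url=f"{host}/_template/{name}",
        data=body.encode(),
        method="PUT",
        headers={"Content-Type": "application/json"},
    )


# Placeholder host and an empty template body for illustration.
req = template_put_request("http://elasticsearver:9200", "app1", "{}")
# urllib.request.urlopen(req) would perform the upload against a live cluster.
```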

Compress Elasticsearch Output: Filebeat provides gzip compression levels from 1 to 9. Higher compression uses more CPU but less network bandwidth. By default compression is disabled and the value is 0.


compression_level: 0
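The trade-off that compression_level controls can be seen with Python's gzip module on a repetitive JSON payload of the kind log shipping produces (exact sizes will vary; this is only an illustration, not Filebeat's code):

```python
import gzip
import json

# A repetitive payload, similar in character to a batch of log events.
payload = json.dumps([{"message": "sample log line"}] * 200).encode()

fast = gzip.compress(payload, compresslevel=1)  # less CPU, larger output
best = gzip.compress(payload, compresslevel=9)  # more CPU, smaller output

print(len(payload), len(fast), len(best))
```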

Other configuration Options:

bulk_max_size: The default value is 50. If Filebeat generates more events than the configured batch size, it splits them into batches of that size and sends them to Elasticsearch. A larger batch size improves performance but requires more buffering, and can cause issues such as connection errors or request timeouts.

Never set the bulk size to 0: there would be no buffering of events, and Filebeat would send each event directly to Elasticsearch.
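The batching behaviour described above can be sketched as follows (illustrative Python, not Filebeat's actual implementation):

```python
def split_batches(events, bulk_max_size=50):
    """Split events into bulk-request-sized batches, mirroring how Filebeat
    chunks events when more than bulk_max_size are pending."""
    if bulk_max_size <= 0:
        # No buffering: every event is sent as its own request.
        return [[e] for e in events]
    return [events[i:i + bulk_max_size]
            for i in range(0, len(events), bulk_max_size)]


# 120 pending events with the default batch size of 50 yield three requests.
batches = split_batches(list(range(120)), bulk_max_size=50)
print([len(b) for b in batches])  # [50, 50, 20]
```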

timeout: The default value is 90 seconds. HTTP requests time out if there is no response.

flush_interval: Waiting time for new events for bulk requests. If a max-size bulk request is sent before this time elapses, a new bulk index request is created.

max_retries: The default value is 3. When the retry limit is reached and events are still not published, all remaining events are dropped. Filebeat can also retry until all events are published, by setting a value less than 0.

worker: Number of workers per host publishing events to Elasticsearch; multiple workers provide load balancing.

Sample Filebeat Configuration file:

Sample filebeat.yml file to Integrate Filebeat with Elasticsearch

Integration

Complete Integration Example Filebeat, Kafka, Logstash, Elasticsearch and Kibana

Read More

To read more on Filebeat topics, sample configuration files and integration with other systems with examples, follow the links Filebeat Tutorial and Filebeat Issues. To know more about YAML, follow the link YAML Tutorials.

Leave your feedback to enhance this topic and make it more helpful for others.