<!doctype html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Consumer Configuration</title>
</head>
<body>
	<table class="data-table">
		<tbody>
			<tr>
				<th>Name</th>
				<th>Description</th>
				<th>Type</th>
				<th>Default</th>
				<th>Valid Values</th>
				<th>Importance</th>
			</tr>
			<tr>
				<td>key.deserializer</td>
				<td>Deserializer class for key that implements the <code>org.apache.kafka.common.serialization.Deserializer</code> interface.</td>
				<td>class</td>
				<td></td>
				<td></td>
				<td>high</td>
			</tr>
			<tr>
				<td>value.deserializer</td>
				<td>Deserializer class for value that implements the <code>org.apache.kafka.common.serialization.Deserializer</code> interface.</td>
				<td>class</td>
				<td></td>
				<td></td>
				<td>high</td>
			</tr>
			<tr>
				<td>bootstrap.servers</td>
				<td>A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping&mdash;this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form <code>host1:port1,host2:port2,...</code>. Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down).</td>
				<td>list</td>
				<td>""</td>
				<td>non-null string</td>
				<td>high</td>
			</tr>
			<tr>
				<td>fetch.min.bytes</td>
				<td>The minimum amount of data the server should return for a fetch request. If insufficient data is available the request will wait for that much data to accumulate before answering the request. The default setting of 1 byte means that fetch requests are answered as soon as a single byte of data is available or the fetch request times out waiting for data to arrive. Setting this to something greater than 1 will cause the server to wait for larger amounts of data to accumulate, which can improve server throughput a bit at the cost of some additional latency.</td>
				<td>int</td>
				<td>1</td>
				<td>[0,...]</td>
				<td>high</td>
			</tr>
			<tr>
				<td>group.id</td>
				<td>A unique string that identifies the consumer group this consumer belongs to. This property is required if the consumer uses either the group management functionality by using <code>subscribe(topic)</code> or the Kafka-based offset management strategy.</td>
				<td>string</td>
				<td>""</td>
				<td></td>
				<td>high</td>
			</tr>
			<tr>
				<td>heartbeat.interval.ms</td>
				<td>The expected time between heartbeats to the consumer coordinator when using Kafka's group management facilities. Heartbeats are used to ensure that the consumer's session stays active and to facilitate rebalancing when new consumers join or leave the group. The value must be set lower than <code>session.timeout.ms</code>, but typically should be set no higher than 1/3 of that value. It can be adjusted even lower to control the expected time for normal rebalances.</td>
				<td>int</td>
				<td>3000</td>
				<td></td>
				<td>high</td>
			</tr>
			<tr>
				<td>max.partition.fetch.bytes</td>
				<td>The maximum amount of data per-partition the server will return. Records are fetched in batches by the consumer. If the first record batch in the first non-empty partition of the fetch is larger than this limit, the batch will still be returned to ensure that the consumer can make progress. The maximum record batch size accepted by the broker is defined via <code>message.max.bytes</code> (broker config) or <code>max.message.bytes</code> (topic config). See <code>fetch.max.bytes</code> for limiting the consumer request size.</td>
				<td>int</td>
				<td>1048576</td>
				<td>[0,...]</td>
				<td>high</td>
			</tr>
			<tr>
				<td>session.timeout.ms</td>
				<td>The timeout used to detect consumer failures when using Kafka's group management facility. The consumer sends periodic heartbeats to indicate its liveness to the broker. If no heartbeats are received by the broker before the expiration of this session timeout, then the broker will remove this consumer from the group and initiate a rebalance. Note that the value must be in the allowable range as configured in the broker configuration by <code>group.min.session.timeout.ms</code> and <code>group.max.session.timeout.ms</code>.</td>
				<td>int</td>
				<td>10000</td>
				<td></td>
				<td>high</td>
			</tr>
			<tr>
				<td>ssl.key.password</td>
				<td>The password of the private key in the key store file. This is optional for the client.</td>
				<td>password</td>
				<td>null</td>
				<td></td>
				<td>high</td>
			</tr>
			<tr>
				<td>ssl.keystore.location</td>
				<td>The location of the key store file. This is optional for the client and can be used for two-way authentication of the client.</td>
				<td>string</td>
				<td>null</td>
				<td></td>
				<td>high</td>
			</tr>
			<tr>
				<td>ssl.keystore.password</td>
				<td>The store password for the key store file. This is optional for the client and only needed if <code>ssl.keystore.location</code> is configured.</td>
				<td>password</td>
				<td>null</td>
				<td></td>
				<td>high</td>
			</tr>
			<tr>
				<td>ssl.truststore.location</td>
				<td>The location of the trust store file.</td>
				<td>string</td>
				<td>null</td>
				<td></td>
				<td>high</td>
			</tr>
			<tr>
				<td>ssl.truststore.password</td>
				<td>The password for the trust store file. If a password is not set, access to the truststore is still available, but integrity checking is disabled.</td>
				<td>password</td>
				<td>null</td>
				<td></td>
				<td>high</td>
			</tr>
			<tr>
				<td>auto.offset.reset</td>
				<td>What to do when there is no initial offset in Kafka or if the current offset does not exist any more on the server (e.g. because that data has been deleted):
					<ul>
						<li>earliest: automatically reset the offset to the earliest offset</li>
						<li>latest: automatically reset the offset to the latest offset</li>
						<li>none: throw exception to the consumer if no previous offset is found for the consumer's group</li>
						<li>anything else: throw exception to the consumer.</li>
					</ul>
				</td>
				<td>string</td>
				<td>latest</td>
				<td>[latest, earliest, none]</td>
				<td>medium</td>
			</tr>
			<tr>
				<td>client.dns.lookup</td>
				<td><p>Controls how the client uses DNS lookups.</p>
					<p>If set to <code>use_all_dns_ips</code>, then, when the lookup returns multiple IP addresses for a hostname, the client will attempt to connect to each of them before failing the connection. Applies to both bootstrap and advertised servers.</p>
					<p>If the value is <code>resolve_canonical_bootstrap_servers_only</code>, each entry will be resolved and expanded into a list of canonical names.</p></td>
				<td>string</td>
				<td>default</td>
				<td>[default, use_all_dns_ips, resolve_canonical_bootstrap_servers_only]</td>
				<td>medium</td>
			</tr>
			<tr>
				<td>connections.max.idle.ms</td>
				<td>Close idle connections after the number of milliseconds specified by this config.</td>
				<td>long</td>
				<td>540000</td>
				<td></td>
				<td>medium</td>
			</tr>
			<tr>
				<td>default.api.timeout.ms</td>
				<td>Specifies the timeout (in milliseconds) for consumer APIs that could block. This configuration is used as the default timeout for all consumer operations that do not explicitly accept a <code>timeout</code> parameter.</td>
				<td>int</td>
				<td>60000</td>
				<td>[0,...]</td>
				<td>medium</td>
			</tr>
			<tr>
				<td>enable.auto.commit</td>
				<td>If true the consumer's offset will be periodically committed in the background.</td>
				<td>boolean</td>
				<td>true</td>
				<td></td>
				<td>medium</td>
			</tr>
			<tr>
				<td>exclude.internal.topics</td>
				<td>Whether records from internal topics (such as offsets) should be exposed to the consumer. If set to <code>true</code> the only way to receive records from an internal topic is subscribing to it.</td>
				<td>boolean</td>
				<td>true</td>
				<td></td>
				<td>medium</td>
			</tr>
			<tr>
				<td>fetch.max.bytes</td>
				<td>The maximum amount of data the server should return for a fetch request. Records are fetched in batches by the consumer, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that the consumer can make progress. As such, this is not an absolute maximum. The maximum record batch size accepted by the broker is defined via <code>message.max.bytes</code> (broker config) or <code>max.message.bytes</code> (topic config). Note that the consumer performs multiple fetches in parallel.</td>
				<td>int</td>
				<td>52428800</td>
				<td>[0,...]</td>
				<td>medium</td>
			</tr>
			<tr>
				<td>isolation.level</td>
				<td><p>Controls how to read messages written transactionally. If set to <code>read_committed</code>, consumer.poll() will only return transactional messages which have been committed. If set to <code>read_uncommitted</code> (the default), consumer.poll() will return all messages, even transactional messages which have been aborted. Non-transactional messages will be returned unconditionally in either mode.</p>
					<p>Messages will always be returned in offset order. Hence, in <code>read_committed</code> mode, consumer.poll() will only return messages up to the last stable offset (LSO), which is the one less than the offset of the first open transaction. In particular any messages appearing after messages belonging to ongoing transactions will be withheld until the relevant transaction has been completed. As a result, <code>read_committed</code> consumers will not be able to read up to the high watermark when there are in flight transactions.</p>
					<p>Further, when in <code>read_committed</code> the seekToEnd method will return the LSO.</p></td>
				<td>string</td>
				<td>read_uncommitted</td>
				<td>[read_committed, read_uncommitted]</td>
				<td>medium</td>
			</tr>
			<tr>
				<td>max.poll.interval.ms</td>
				<td>The maximum delay between invocations of poll() when using consumer group management. This places an upper bound on the amount of time that the consumer can be idle before fetching more records. If poll() is not called before expiration of this timeout, then the consumer is considered failed and the group will rebalance in order to reassign the partitions to another member.</td>
				<td>int</td>
				<td>300000</td>
				<td>[1,...]</td>
				<td>medium</td>
			</tr>
			<tr>
				<td>max.poll.records</td>
				<td>The maximum number of records returned in a single call to poll().</td>
				<td>int</td>
				<td>500</td>
				<td>[1,...]</td>
				<td>medium</td>
			</tr>
			<tr>
				<td>partition.assignment.strategy</td>
				<td>The class name of the partition assignment strategy that the client will use to distribute partition ownership amongst consumer instances when group management is used.</td>
				<td>list</td>
				<td>class org.apache.kafka.clients.consumer.RangeAssignor</td>
				<td>non-null string</td>
				<td>medium</td>
			</tr>
			<tr>
				<td>receive.buffer.bytes</td>
				<td>The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used.</td>
				<td>int</td>
				<td>65536</td>
				<td>[-1,...]</td>
				<td>medium</td>
			</tr>
			<tr>
				<td>request.timeout.ms</td>
				<td>The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted.</td>
				<td>int</td>
				<td>30000</td>
				<td>[0,...]</td>
				<td>medium</td>
			</tr>
			<tr>
				<td>sasl.client.callback.handler.class</td>
				<td>The fully qualified name of a SASL client callback handler class that implements the AuthenticateCallbackHandler interface.</td>
				<td>class</td>
				<td>null</td>
				<td></td>
				<td>medium</td>
			</tr>
			<tr>
				<td>sasl.jaas.config</td>
				<td>JAAS login context parameters for SASL connections in the format used by JAAS configuration files. JAAS configuration file format is described <a href="http://docs.oracle.com/javase/8/docs/technotes/guides/security/jgss/tutorials/LoginConfigFile.html">here</a>. The format for the value is: '<code>loginModuleClass controlFlag (optionName=optionValue)*;</code>'. For brokers, the config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, <code>listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=com.example.ScramLoginModule required;</code></td>
				<td>password</td>
				<td>null</td>
				<td></td>
				<td>medium</td>
			</tr>
			<tr>
				<td>sasl.kerberos.service.name</td>
				<td>The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config.</td>
				<td>string</td>
				<td>null</td>
				<td></td>
				<td>medium</td>
			</tr>
			<tr>
				<td>sasl.login.callback.handler.class</td>
				<td>The fully qualified name of a SASL login callback handler class that implements the AuthenticateCallbackHandler interface. For brokers, login callback handler config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, <code>listener.name.sasl_ssl.scram-sha-256.sasl.login.callback.handler.class=com.example.CustomScramLoginCallbackHandler</code></td>
				<td>class</td>
				<td>null</td>
				<td></td>
				<td>medium</td>
			</tr>
			<tr>
				<td>sasl.login.class</td>
				<td>The fully qualified name of a class that implements the Login interface. For brokers, login config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, <code>listener.name.sasl_ssl.scram-sha-256.sasl.login.class=com.example.CustomScramLogin</code></td>
				<td>class</td>
				<td>null</td>
				<td></td>
				<td>medium</td>
			</tr>
			<tr>
				<td>sasl.mechanism</td>
				<td>SASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism.</td>
				<td>string</td>
				<td>GSSAPI</td>
				<td></td>
				<td>medium</td>
			</tr>
			<tr>
				<td>security.protocol</td>
				<td>Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.</td>
				<td>string</td>
				<td>PLAINTEXT</td>
				<td></td>
				<td>medium</td>
			</tr>
			<tr>
				<td>send.buffer.bytes</td>
				<td>The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used.</td>
				<td>int</td>
				<td>131072</td>
				<td>[-1,...]</td>
				<td>medium</td>
			</tr>
			<tr>
				<td>ssl.enabled.protocols</td>
				<td>The list of protocols enabled for SSL connections.</td>
				<td>list</td>
				<td>TLSv1.2,TLSv1.1,TLSv1</td>
				<td></td>
				<td>medium</td>
			</tr>
			<tr>
				<td>ssl.keystore.type</td>
				<td>The file format of the key store file. This is optional for the client.</td>
				<td>string</td>
				<td>JKS</td>
				<td></td>
				<td>medium</td>
			</tr>
			<tr>
				<td>ssl.protocol</td>
				<td>The SSL protocol used to generate the SSLContext. Default setting is TLS, which is fine for most cases. Allowed values in recent JVMs are TLS, TLSv1.1 and TLSv1.2. SSL, SSLv2 and SSLv3 may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities.</td>
				<td>string</td>
				<td>TLS</td>
				<td></td>
				<td>medium</td>
			</tr>
			<tr>
				<td>ssl.provider</td>
				<td>The name of the security provider used for SSL connections. Default value is the default security provider of the JVM.</td>
				<td>string</td>
				<td>null</td>
				<td></td>
				<td>medium</td>
			</tr>
			<tr>
				<td>ssl.truststore.type</td>
				<td>The file format of the trust store file.</td>
				<td>string</td>
				<td>JKS</td>
				<td></td>
				<td>medium</td>
			</tr>
			<tr>
				<td>auto.commit.interval.ms</td>
				<td>The frequency in milliseconds that the consumer offsets are auto-committed to Kafka if <code>enable.auto.commit</code> is set to <code>true</code>.</td>
				<td>int</td>
				<td>5000</td>
				<td>[0,...]</td>
				<td>low</td>
			</tr>
			<tr>
				<td>check.crcs</td>
				<td>Automatically check the CRC32 of the records consumed. This ensures no on-the-wire or on-disk corruption to the messages occurred. This check adds some overhead, so it may be disabled in cases seeking extreme performance.</td>
				<td>boolean</td>
				<td>true</td>
				<td></td>
				<td>low</td>
			</tr>
			<tr>
				<td>client.id</td>
				<td>An id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging.</td>
				<td>string</td>
				<td>""</td>
				<td></td>
				<td>low</td>
			</tr>
			<tr>
				<td>fetch.max.wait.ms</td>
				<td>The maximum amount of time the server will block before answering the fetch request if there isn't sufficient data to immediately satisfy the requirement given by <code>fetch.min.bytes</code>.</td>
				<td>int</td>
				<td>500</td>
				<td>[0,...]</td>
				<td>low</td>
			</tr>
			<tr>
				<td>interceptor.classes</td>
				<td>A list of classes to use as interceptors. Implementing the <code>org.apache.kafka.clients.consumer.ConsumerInterceptor</code> interface allows you to intercept (and possibly mutate) records received by the consumer. By default, there are no interceptors.</td>
				<td>list</td>
				<td>""</td>
				<td>non-null string</td>
				<td>low</td>
			</tr>
			<tr>
				<td>metadata.max.age.ms</td>
				<td>The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions.</td>
				<td>long</td>
				<td>300000</td>
				<td>[0,...]</td>
				<td>low</td>
			</tr>
			<tr>
				<td>metric.reporters</td>
				<td>A list of classes to use as metrics reporters. Implementing the <code>org.apache.kafka.common.metrics.MetricsReporter</code> interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics.</td>
				<td>list</td>
				<td>""</td>
				<td>non-null string</td>
				<td>low</td>
			</tr>
			<tr>
				<td>metrics.num.samples</td>
				<td>The number of samples maintained to compute metrics.</td>
				<td>int</td>
				<td>2</td>
				<td>[1,...]</td>
				<td>low</td>
			</tr>
			<tr>
				<td>metrics.recording.level</td>
				<td>The highest recording level for metrics.</td>
				<td>string</td>
				<td>INFO</td>
				<td>[INFO, DEBUG]</td>
				<td>low</td>
			</tr>
			<tr>
				<td>metrics.sample.window.ms</td>
				<td>The window of time a metrics sample is computed over.</td>
				<td>long</td>
				<td>30000</td>
				<td>[0,...]</td>
				<td>low</td>
			</tr>
			<tr>
				<td>reconnect.backoff.max.ms</td>
				<td>The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. After calculating the backoff increase, 20% random jitter is added to avoid connection storms.</td>
				<td>long</td>
				<td>1000</td>
				<td>[0,...]</td>
				<td>low</td>
			</tr>
			<tr>
				<td>reconnect.backoff.ms</td>
				<td>The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker.</td>
				<td>long</td>
				<td>50</td>
				<td>[0,...]</td>
				<td>low</td>
			</tr>
			<tr>
				<td>retry.backoff.ms</td>
				<td>The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios.</td>
				<td>long</td>
				<td>100</td>
				<td>[0,...]</td>
				<td>low</td>
			</tr>
			<tr>
				<td>sasl.kerberos.kinit.cmd</td>
				<td>Kerberos kinit command path.</td>
				<td>string</td>
				<td>/usr/bin/kinit</td>
				<td></td>
				<td>low</td>
			</tr>
			<tr>
				<td>sasl.kerberos.min.time.before.relogin</td>
				<td>Login thread sleep time between refresh attempts.</td>
				<td>long</td>
				<td>60000</td>
				<td></td>
				<td>low</td>
			</tr>
			<tr>
				<td>sasl.kerberos.ticket.renew.jitter</td>
				<td>Percentage of random jitter added to the renewal time.</td>
				<td>double</td>
				<td>0.05</td>
				<td></td>
				<td>low</td>
			</tr>
			<tr>
				<td>sasl.kerberos.ticket.renew.window.factor</td>
				<td>Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket.</td>
				<td>double</td>
				<td>0.8</td>
				<td></td>
				<td>low</td>
			</tr>
			<tr>
				<td>sasl.login.refresh.buffer.seconds</td>
				<td>The amount of buffer time before credential expiration to maintain when refreshing a credential, in seconds. If a refresh would otherwise occur closer to expiration than the number of buffer seconds then the refresh will be moved up to maintain as much of the buffer time as possible. Legal values are between 0 and 3600 (1 hour); a default value of 300 (5 minutes) is used if no value is specified. This value and <code>sasl.login.refresh.min.period.seconds</code> are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER.</td>
				<td>short</td>
				<td>300</td>
				<td>[0,...,3600]</td>
				<td>low</td>
			</tr>
			<tr>
				<td>sasl.login.refresh.min.period.seconds</td>
				<td>The desired minimum time for the login refresh thread to wait before refreshing a credential, in seconds. Legal values are between 0 and 900 (15 minutes); a default value of 60 (1 minute) is used if no value is specified. This value and <code>sasl.login.refresh.buffer.seconds</code> are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER.</td>
				<td>short</td>
				<td>60</td>
				<td>[0,...,900]</td>
				<td>low</td>
			</tr>
			<tr>
				<td>sasl.login.refresh.window.factor</td>
				<td>Login refresh thread will sleep until the specified window factor relative to the credential's lifetime has been reached, at which time it will try to refresh the credential. Legal values are between 0.5 (50%) and 1.0 (100%) inclusive; a default value of 0.8 (80%) is used if no value is specified. Currently applies only to OAUTHBEARER.</td>
				<td>double</td>
				<td>0.8</td>
				<td>[0.5,...,1.0]</td>
				<td>low</td>
			</tr>
			<tr>
				<td>sasl.login.refresh.window.jitter</td>
				<td>The maximum amount of random jitter relative to the credential's lifetime that is added to the login refresh thread's sleep time. Legal values are between 0 and 0.25 (25%) inclusive; a default value of 0.05 (5%) is used if no value is specified. Currently applies only to OAUTHBEARER.</td>
				<td>double</td>
				<td>0.05</td>
				<td>[0.0,...,0.25]</td>
				<td>low</td>
			</tr>
			<tr>
				<td>ssl.cipher.suites</td>
				<td>A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites are supported.</td>
				<td>list</td>
				<td>null</td>
				<td></td>
				<td>low</td>
			</tr>
			<tr>
				<td>ssl.endpoint.identification.algorithm</td>
				<td>The endpoint identification algorithm to validate server hostname using server certificate.</td>
				<td>string</td>
				<td>https</td>
				<td></td>
				<td>low</td>
			</tr>
			<tr>
				<td>ssl.keymanager.algorithm</td>
				<td>The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine.</td>
				<td>string</td>
				<td>SunX509</td>
				<td></td>
				<td>low</td>
			</tr>
			<tr>
				<td>ssl.secure.random.implementation</td>
				<td>The SecureRandom PRNG implementation to use for SSL cryptography operations.</td>
				<td>string</td>
				<td>null</td>
				<td></td>
				<td>low</td>
			</tr>
			<tr>
				<td>ssl.trustmanager.algorithm</td>
				<td>The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine.</td>
				<td>string</td>
				<td>PKIX</td>
				<td></td>
				<td>low</td>
			</tr>
		</tbody>
	</table>
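	<p>As an illustrative sketch only (not taken from the table above), the following Java snippet shows how a few of
		these consumer properties might be wired together with the standard <code>KafkaConsumer</code> client. The
		broker addresses, group id, and topic name are placeholder values.</p>
	<pre><code>
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ConsumerConfigExample {
	public static void main(String[] args) {
		Properties props = new Properties();
		// Initial hosts used only to discover the full cluster (bootstrap.servers).
		props.put("bootstrap.servers", "host1:9092,host2:9092");
		// Required for group management via subscribe() (group.id).
		props.put("group.id", "example-group");
		// Deserializers must implement org.apache.kafka.common.serialization.Deserializer.
		props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
		props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
		// Start from the earliest offset when no committed offset exists (auto.offset.reset).
		props.put("auto.offset.reset", "earliest");
		// Auto-commit offsets every 5 seconds (enable.auto.commit, auto.commit.interval.ms).
		props.put("enable.auto.commit", "true");
		props.put("auto.commit.interval.ms", "5000");

		try (KafkaConsumer&lt;String, String&gt; consumer = new KafkaConsumer&lt;&gt;(props)) {
			consumer.subscribe(Collections.singletonList("example-topic"));
			while (true) {
				// poll() must be invoked at least every max.poll.interval.ms to stay in the group.
				ConsumerRecords&lt;String, String&gt; records = consumer.poll(Duration.ofMillis(500));
				for (ConsumerRecord&lt;String, String&gt; record : records) {
					System.out.printf("offset=%d key=%s value=%s%n", record.offset(), record.key(), record.value());
				}
			}
		}
	}
}
	</code></pre>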

</body>
</html>
