Kafka Documentation (12) -- 0.10.1 Document (4): Producer Configs

3.2 Producer Configs

Below is the configuration of the Java producer:


NAME DESCRIPTION TYPE DEFAULT VALID VALUES IMPORTANCE
bootstrap.servers

A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping—this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form host1:port1,host2:port2,.... Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down).

list     high
key.serializer

Serializer class for key that implements the Serializer interface.

class     high
value.serializer

Serializer class for value that implements the Serializer interface.

class     high
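The three settings above are the required producer configs. A minimal sketch of wiring them up with java.util.Properties (the broker hosts are hypothetical and the StringSerializer choice is illustrative; constructing an actual KafkaProducer additionally requires the kafka-clients jar on the classpath):

```java
import java.util.Properties;

// Sketch: the minimum required producer configuration.
// Broker hosts are hypothetical; the serializer classes are the stock
// String serializers shipped with kafka-clients.
public class BaseProducerConfig {
    static Properties build() {
        Properties props = new Properties();
        // Only a subset of brokers is needed; the client discovers the rest.
        props.put("bootstrap.servers", "broker1:9092,broker2:9092");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        return props;
    }

    public static void main(String[] args) {
        // These properties would be passed to: new KafkaProducer<>(props)
        build().forEach((k, v) -> System.out.println(k + "=" + v));
    }
}
```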
acks

The number of acknowledgments the producer requires the leader to have received before considering a request complete. This controls the durability of records that are sent. The following settings are allowed:



  • acks=0 If set to zero then the producer will not wait for any acknowledgment from the server at all. The record will be immediately added to the socket buffer and considered sent. No guarantee can be made that the server has received the record in this case, and the retries configuration will not take effect (as the client won't generally know of any failures). The offset given back for each record will always be set to -1.
  • acks=1 This will mean the leader will write the record to its local log but will respond without awaiting full acknowledgement from all followers. In this case should the leader fail immediately after acknowledging the record but before the followers have replicated it then the record will be lost.
  • acks=all This means the leader will wait for the full set of in-sync replicas to acknowledge the record. This guarantees that the record will not be lost as long as at least one in-sync replica remains alive. This is the strongest available guarantee. This is equivalent to the acks=-1 setting.
string 1 [all, -1, 0, 1] high
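To make the trade-off concrete, here is a small sketch mapping the acks levels onto config values (the helper method and its name are illustrative, not part of the Kafka API):

```java
import java.util.Properties;

// Sketch: choosing an acks level by durability requirement.
// "all" (equivalently "-1") gives the strongest guarantee; "0" the weakest.
public class AcksChoice {
    static Properties forDurability(boolean mustNotLose) {
        Properties props = new Properties();
        // acks=all: leader waits for every in-sync replica to acknowledge.
        // acks=1:   leader-local write only; a record is lost if the leader
        //           dies before followers replicate it.
        props.put("acks", mustNotLose ? "all" : "1");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(forDurability(true).getProperty("acks"));  // prints "all"
        System.out.println(forDurability(false).getProperty("acks")); // prints "1"
    }
}
```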
buffer.memory The total bytes of memory the producer can use to buffer records waiting to be sent to the server. If records are sent faster than they can be delivered to the server the producer will block for max.block.ms after which it will throw an exception.

This setting should correspond roughly to the total memory the producer will use, but is not a hard bound since not all memory the producer uses is used for buffering. Some additional memory will be used for compression (if compression is enabled) as well as for maintaining in-flight requests.



long 33554432 [0,...] high
compression.type

The compression type for all data generated by the producer. The default is none (i.e. no compression). Valid values are none, gzip, snappy, or lz4. Compression is of full batches of data, so the efficacy of batching will also impact the compression ratio (more batching means better compression).

string none   high
retries

Setting a value greater than zero will cause the client to resend any record whose send fails with a potentially transient error. Note that this retry is no different than if the client resent the record upon receiving the error. Allowing retries without setting max.in.flight.requests.per.connection to 1 will potentially change the ordering of records because if two batches are sent to a single partition, and the first fails and is retried but the second succeeds, then the records in the second batch may appear first.

int 0 [0,...,2147483647] high
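The ordering caveat above can be encoded directly in the config: if retried sends must not reorder records within a partition, cap the in-flight requests at 1. A hedged sketch (the retry count is an illustrative value, not a recommendation from the docs):

```java
import java.util.Properties;

// Sketch: enabling retries while preserving per-partition ordering.
public class RetrySafeConfig {
    static Properties build() {
        Properties props = new Properties();
        props.put("retries", "3"); // resend on transient errors
        // With >1 request in flight, a failed-then-retried first batch can
        // land after a successful second batch; pin to 1 to keep order.
        props.put("max.in.flight.requests.per.connection", "1");
        return props;
    }

    public static void main(String[] args) {
        build().forEach((k, v) -> System.out.println(k + "=" + v));
    }
}
```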
ssl.key.password

The password of the private key in the key store file. This is optional for client.



password null   high
ssl.keystore.location

The location of the key store file. This is optional for client and can be used for two-way authentication for client.



string null   high
ssl.keystore.password

The store password for the key store file. This is optional for client and only needed if ssl.keystore.location is configured.



password null   high
ssl.truststore.location

The location of the trust store file.



string null   high
ssl.truststore.password

The password for the trust store file.



password null   high
batch.size The producer will attempt to batch records together into fewer requests whenever multiple records are being sent to the same partition. This helps performance on both the client and the server. This configuration controls the default batch size in bytes.

No attempt will be made to batch records larger than this size.

Requests sent to brokers will contain multiple batches, one for each partition with data available to be sent.

A small batch size will make batching less common and may reduce throughput (a batch size of zero will disable batching entirely). A very large batch size may use memory a bit more wastefully as we will always allocate a buffer of the specified batch size in anticipation of additional records.


int 16384 [0,...] medium
client.id

An id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging.



string ""   medium
connections.max.idle.ms

Close idle connections after the number of milliseconds specified by this config.



long 540000   medium
linger.ms

The producer groups together any records that arrive in between request transmissions into a single batched request. Normally this occurs only under load when records arrive faster than they can be sent out. However in some circumstances the client may want to reduce the number of requests even under moderate load. This setting accomplishes this by adding a small amount of artificial delay—that is, rather than immediately sending out a record the producer will wait for up to the given delay to allow other records to be sent so that the sends can be batched together. This can be thought of as analogous to Nagle's algorithm in TCP. This setting gives the upper bound on the delay for batching: once we get batch.size worth of records for a partition it will be sent immediately regardless of this setting, however if we have fewer than this many bytes accumulated for this partition we will 'linger' for the specified time waiting for more records to show up. This setting defaults to 0 (i.e. no delay). Setting linger.ms=5, for example, would have the effect of reducing the number of requests sent but would add up to 5ms of latency to records sent in the absence of load.

long 0 [0,...] medium
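The batching-related settings (buffer.memory, compression.type, batch.size, and linger.ms) work together. A throughput-oriented sketch; the specific numbers are illustrative starting points, not recommendations from the docs:

```java
import java.util.Properties;

// Sketch: batching settings tuned for throughput. All values here are
// hypothetical; the defaults are buffer.memory=33554432, batch.size=16384,
// linger.ms=0, compression.type=none.
public class BatchingConfig {
    static Properties build() {
        Properties props = new Properties();
        props.put("batch.size", "32768");       // 32 KB per-partition batches
        props.put("linger.ms", "5");            // wait up to 5 ms to fill a batch
        props.put("buffer.memory", "67108864"); // 64 MB total send buffer
        props.put("compression.type", "lz4");   // whole batches compress together
        return props;
    }

    public static void main(String[] args) {
        build().forEach((k, v) -> System.out.println(k + "=" + v));
    }
}
```

Larger batches plus a small linger both increase the amount of data compressed per request, which is why the compression.type entry above notes that more batching means better compression.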
max.block.ms

The configuration controls how long KafkaProducer.send() and KafkaProducer.partitionsFor() will block. These methods can be blocked either because the buffer is full or metadata is unavailable. Blocking in the user-supplied serializers or partitioner will not be counted against this timeout.

long 60000 [0,...] medium
max.request.size

The maximum size of a request in bytes. This is also effectively a cap on the maximum record size. Note that the server has its own cap on record size which may be different from this. This setting will limit the number of record batches the producer will send in a single request to avoid sending huge requests.



int 1048576 [0,...] medium
partitioner.class

Partitioner class that implements the Partitioner interface.

class class org.apache.kafka.clients.producer.internals.DefaultPartitioner   medium
receive.buffer.bytes

The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used.

int 32768 [-1,...] medium
request.timeout.ms

The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted.



int 30000 [0,...] medium
sasl.kerberos.service.name

The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config.



string null   medium
sasl.mechanism

SASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism.



string GSSAPI   medium
security.protocol

Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.



string PLAINTEXT   medium
send.buffer.bytes

The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used.


int 131072 [-1,...] medium
ssl.enabled.protocols

The list of protocols enabled for SSL connections.



list [TLSv1.2, TLSv1.1, TLSv1]   medium
ssl.keystore.type

The file format of the key store file. This is optional for client.



string JKS   medium
ssl.protocol

The SSL protocol used to generate the SSLContext. Default setting is TLS, which is fine for most cases. Allowed values in recent JVMs are TLS, TLSv1.1 and TLSv1.2. SSL, SSLv2 and SSLv3 may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities.



string TLS   medium
ssl.provider

The name of the security provider used for SSL connections. Default value is the default security provider of the JVM.



string null   medium
ssl.truststore.type

The file format of the trust store file.


string JKS   medium
timeout.ms

The configuration controls the maximum amount of time the server will wait for acknowledgments from followers to meet the acknowledgment requirements the producer has specified with the acks configuration. If the requested number of acknowledgments are not met when the timeout elapses an error will be returned. This timeout is measured on the server side and does not include the network latency of the request.



int 30000 [0,...] medium
block.on.buffer.full When our memory buffer is exhausted we must either stop accepting new records (block) or throw errors. By default this setting is false and the producer will no longer throw a BufferExhaustedException but instead will use the max.block.ms value to block, after which it will throw a TimeoutException. Setting this property to true will set the max.block.ms to Long.MAX_VALUE. Also if this property is set to true, parameter metadata.fetch.timeout.ms is no longer honored.

This parameter is deprecated and will be removed in a future release. Parameter max.block.ms should be used instead.

boolean false   low
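Since block.on.buffer.full is deprecated, the same back-pressure behavior is expressed through max.block.ms alone. A minimal sketch (the 60 s value simply restates the documented default):

```java
import java.util.Properties;

// Sketch: the non-deprecated replacement for block.on.buffer.full.
// A bounded max.block.ms makes a full buffer (or missing metadata) surface
// as a TimeoutException after the wait, instead of blocking indefinitely.
public class BlockingConfig {
    static Properties build() {
        Properties props = new Properties();
        props.put("max.block.ms", "60000"); // block send() up to 60 s, then fail
        return props;
    }

    public static void main(String[] args) {
        System.out.println(build().getProperty("max.block.ms")); // prints "60000"
    }
}
```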
interceptor.classes

A list of classes to use as interceptors. Implementing the ProducerInterceptor interface allows you to intercept (and possibly mutate) the records received by the producer before they are published to the Kafka cluster. By default, there are no interceptors.

list null   low
max.in.flight.requests.per.connection

The maximum number of unacknowledged requests the client will send on a single connection before blocking. Note that if this setting is set to be greater than 1 and there are failed sends, there is a risk of message re-ordering due to retries (i.e., if retries are enabled).



int 5 [1,...] low
metadata.fetch.timeout.ms

The first time data is sent to a topic we must fetch metadata about that topic to know which servers host the topic's partitions. This config specifies the maximum time, in milliseconds, for this fetch to succeed before throwing an exception back to the client.



long 60000 [0,...] low
metadata.max.age.ms

The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions.



long 300000 [0,...] low
metric.reporters

A list of classes to use as metrics reporters. Implementing the MetricReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics.



list []   low
metrics.num.samples

The number of samples maintained to compute metrics.



int 2 [1,...] low
metrics.sample.window.ms

The window of time a metrics sample is computed over.



long 30000 [0,...] low
reconnect.backoff.ms

The amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all requests sent by the consumer to the broker.



long 50 [0,...] low
retry.backoff.ms

The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios.



long 100 [0,...] low
sasl.kerberos.kinit.cmd

Kerberos kinit command path.



string /usr/bin/kinit   low
sasl.kerberos.min.time.before.relogin

Login thread sleep time between refresh attempts.



long 60000   low
sasl.kerberos.ticket.renew.jitter

Percentage of random jitter added to the renewal time.



double 0.05   low
sasl.kerberos.ticket.renew.window.factor

Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket.



double 0.8   low
ssl.cipher.suites

A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites are supported.



list null   low
ssl.endpoint.identification.algorithm

The endpoint identification algorithm to validate server hostname using server certificate.



string null   low
ssl.keymanager.algorithm

The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine.



string SunX509   low
ssl.secure.random.implementation

The SecureRandom PRNG implementation to use for SSL cryptography operations.



string null   low
ssl.trustmanager.algorithm

The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine.



string PKIX   low

For those interested in the legacy Scala producer configs, information can be found here.

