Reading the Kafka Source (Part 1): Kafka Producer Architecture Overview and Source Code Analysis

Overall Architecture

Client Architecture

Threads

The Kafka producer client is coordinated by two threads: the main thread and the Sender thread.
In the main thread, KafkaProducer creates messages, which pass through the interceptors, serializer, and partitioner before being cached in the RecordAccumulator (message accumulator).
The Sender thread is responsible for fetching messages from the RecordAccumulator and sending them to Kafka.
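
For orientation, the Sender runs on a dedicated daemon I/O thread that is created and started inside the KafkaProducer constructor. A simplified excerpt (Kafka 2.3.0, lightly trimmed, so the surrounding lines may differ slightly from your checkout):

// The Sender Runnable is wrapped in a daemon KafkaThread named
// "kafka-producer-network-thread | <clientId>" and started, so all network I/O
// happens off the caller's (main) thread.
this.sender = newSender(logContext, kafkaClient, this.metadata);
String ioThreadName = NETWORK_THREAD_PREFIX + " | " + clientId;
this.ioThread = new KafkaThread(ioThreadName, this.sender, true);
this.ioThread.start();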

RecordAccumulator

The RecordAccumulator caches messages so that the Sender thread can send them in batches, which reduces network overhead and improves performance.
The size of this cache is controlled by the producer configuration buffer.memory, which defaults to 32MB. If the producer produces messages faster than they can be sent to the server, the producer runs out of buffer space; in that case a call to KafkaProducer.send() either blocks or throws an exception.
How long send() may block is controlled by the max.block.ms parameter, which defaults to 60 seconds.
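
As an illustration (the values are examples, not recommendations), both settings are ordinary producer properties added to the Properties object used to construct the KafkaProducer (see the Demo later in this post):

// Illustrative values only.
props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 64 * 1024 * 1024L); // "buffer.memory", default 32MB
props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 30000L);             // "max.block.ms", default 60000ms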

ProducerBatch

Messages sent from the main thread are appended to a Deque (double-ended queue) inside the RecordAccumulator. The RecordAccumulator maintains one Deque per partition, and the elements of each Deque are ProducerBatch objects, i.e. Deque<ProducerBatch>.
When a message is written into the cache it is appended to the tail of the Deque; when the Sender reads messages, it reads from the head.
A ProducerBatch can contain one or more ProducerRecord objects (the messages created by the producer), which packs bytes more tightly. Combining several smaller ProducerRecords into one larger ProducerBatch also reduces the number of network requests and thus improves overall throughput.
If the producer needs to send messages to many partitions, increasing buffer.memory appropriately can improve overall throughput.
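
A minimal, self-contained sketch of this structure (simplified types and names; the real RecordAccumulator keys by TopicPartition and holds ProducerBatch objects):

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Sketch only: one deque per partition; the main thread appends at the tail,
// the Sender drains from the head.
public class DequePerPartitionSketch {
    private final ConcurrentMap<String, Deque<String>> batches = new ConcurrentHashMap<>();

    public void append(String topicPartition, String batch) {
        Deque<String> dq = batches.computeIfAbsent(topicPartition, k -> new ArrayDeque<>());
        synchronized (dq) {
            dq.addLast(batch);      // main thread: write to the tail
        }
    }

    public String drainOne(String topicPartition) {
        Deque<String> dq = batches.get(topicPartition);
        if (dq == null) return null;
        synchronized (dq) {
            return dq.pollFirst();  // Sender thread: read from the head
        }
    }
}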

BufferPool

Messages travel over the network as bytes, so before sending, a region of memory must be allocated to hold them. The producer client creates and releases these regions through java.nio.ByteBuffer, but frequent allocation and deallocation is expensive, so the RecordAccumulator also contains a BufferPool, whose job is to reuse ByteBuffers and make efficient use of the cache.
The BufferPool only manages ByteBuffers of one specific size; ByteBuffers of any other size never enter the pool. That size is configured through the batch.size parameter, so it can be tuned to the size of the messages you expect to cache.
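
For example (illustrative value only), batches allocated at exactly batch.size are the ones that can be recycled, so the parameter is worth tuning to your typical message size:

props.put(ProducerConfig.BATCH_SIZE_CONFIG, 32 * 1024); // "batch.size", default 16384 (16KB)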

How ProducerBatch relates to batch.size

When a ProducerRecord arrives at the RecordAccumulator, the accumulator first looks up the Deque for the record's partition (creating one if it does not exist), then takes the ProducerBatch at the tail of that Deque (again creating one if none exists) and checks whether the record still fits into it. If it fits, the record is written there; otherwise a new ProducerBatch is created.
When creating a new ProducerBatch, the record's size is compared against batch.size. If it does not exceed batch.size, the batch is allocated with exactly batch.size bytes, so that its buffer can later be recycled through the BufferPool. If it does exceed batch.size, the batch is allocated with the record's actual size, and that memory region will not be reused.
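
The sizing rule itself is tiny; restated as a stand-alone helper (the real call site is in RecordAccumulator.append(), shown later in this post):

// Sketch: pick the allocation size for a new batch. Buffers allocated at exactly
// batchSize go back to the BufferPool for reuse; oversized buffers are one-off.
static int chooseAllocationSize(int batchSize, int estimatedRecordSize) {
    return Math.max(batchSize, estimatedRecordSize);
}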

Sender

After fetching the cached messages from the RecordAccumulator, the Sender converts them from their original <TopicPartition, Deque<ProducerBatch>> layout into a <Node, List<ProducerBatch>> layout, where Node represents a broker node in the Kafka cluster.
From the network's point of view, the producer client connects to concrete broker nodes and sends data to those nodes without caring which partition a message belongs to; from the application's point of view, we only care about which messages go to which partition. This step is therefore the translation from the application-logic layer to the network I/O layer.
After the conversion to <Node, List<ProducerBatch>>, the Sender further wraps the batches into <Node, Request> form (where Request here means ProduceRequest), so that a request can be sent to each Node.
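
A toy illustration of the regrouping (simplified types; the real logic is RecordAccumulator.drain(), which also respects max.request.size, as the Sender source later in this post shows):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch: turn "partition -> batches" into "leader node -> batches" using a
// partition-to-leader mapping taken from the cluster metadata.
static Map<Integer, List<String>> regroupByNode(Map<String, List<String>> batchesByPartition,
                                                Map<String, Integer> leaderNodeByPartition) {
    Map<Integer, List<String>> batchesByNode = new HashMap<>();
    for (Map.Entry<String, List<String>> e : batchesByPartition.entrySet()) {
        Integer nodeId = leaderNodeByPartition.get(e.getKey());
        batchesByNode.computeIfAbsent(nodeId, n -> new ArrayList<>()).addAll(e.getValue());
    }
    return batchesByNode;
}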

InFlightRequests

Before a request is sent from the Sender thread to Kafka, it is also stored in InFlightRequests, which holds its entries as Map<NodeId, Deque<Request>>; its main purpose is to cache requests that have been sent but have not yet received a response. InFlightRequests also offers a number of management methods, and a configuration parameter limits how many requests may be cached per connection (i.e. per client-to-Node connection). That parameter is max.in.flight.requests.per.connection and defaults to 5. Once the limit is reached, no further requests can be sent over that connection until some cached request receives its response.
By comparing the size of a node's Deque<Request> with this configured limit, we can tell whether that node has accumulated many unanswered requests. A large backlog suggests the node is heavily loaded or the network connection has problems, and continuing to send requests to it increases the risk of request timeouts.
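
A minimal sketch of that per-connection check (illustrative names; the real InFlightRequests/NetworkClient check additionally requires that the oldest request has finished being written to the socket):

// Sketch only: is there room to send another request on this connection?
static boolean canSendMore(java.util.Deque<?> inFlightForNode, int maxInFlightRequestsPerConnection) {
    return inFlightForNode.size() < maxInFlightRequestsPerConnection;
}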


Source Code Analysis

Demo

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("acks", "all");
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

// Create the producer, send 100 messages to "my-topic", then close it to flush and release resources.
Producer<String, String> producer = new KafkaProducer<>(props);
for (int i = 0; i < 100; i++) {
    producer.send(new ProducerRecord<String, String>("my-topic", Integer.toString(i), Integer.toString(i)));
}
producer.close();

This simple demo shows that to send a message you only need to build a ProducerRecord and call producer.send(); Kafka takes care of the rest. So how does Kafka do it? Starting from KafkaProducer, the rest of this post walks through the source code and uncovers the hidden details step by step.

KafkaProducer.java

send()

    @Override
    public Future<RecordMetadata> send(ProducerRecord<K, V> record) {
        return send(record, null);
    }
  @Override
  public Future<RecordMetadata> send(ProducerRecord<K, V> record, Callback callback) {
        // intercept the record, which can be potentially modified; this method does not throw exceptions
        ProducerRecord<K, V> interceptedRecord = this.interceptors.onSend(record);
        return doSend(interceptedRecord, callback);
    }
private Future<RecordMetadata> doSend(ProducerRecord<K, V> record, Callback callback) {
        TopicPartition tp = null;
        try {
            // 1. If the producer instance has already been closed, throw an IllegalStateException.
            throwIfProducerClosed();
            // first make sure the metadata for the topic is available
            ClusterAndWaitTime clusterAndWaitTime;
            try {
                // 2. Fetch the cluster metadata and the time spent waiting for it.
                // maxBlockTimeMs bounds how long send() may block: when the producer's send buffer is full
                // or no metadata is available, send() blocks up to this limit.
                // The next post analyzes this in detail: https://blog.csdn.net/XU906722/article/details/104381440
                clusterAndWaitTime = waitOnMetadata(record.topic(), record.partition(), maxBlockTimeMs);
            } catch (KafkaException e) {
                if (metadata.isClosed())
                    throw new KafkaException("Producer closed while send in progress", e);
                throw e;
            }
            long remainingWaitMs = Math.max(0, maxBlockTimeMs - clusterAndWaitTime.waitedOnMetadataMs);
            Cluster cluster = clusterAndWaitTime.cluster;
            byte[] serializedKey;
            try {
                // 3. Serialize the key
                serializedKey = keySerializer.serialize(record.topic(), record.headers(), record.key());
            } catch (ClassCastException cce) {
                throw new SerializationException("Can't convert key of class " + record.key().getClass().getName() +
                        " to class " + producerConfig.getClass(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG).getName() +
                        " specified in key.serializer", cce);
            }
            byte[] serializedValue;
            try {
                // 4. Serialize the value
                serializedValue = valueSerializer.serialize(record.topic(), record.headers(), record.value());
            } catch (ClassCastException cce) {
                throw new SerializationException("Can't convert value of class " + record.value().getClass().getName() +
                        " to class " + producerConfig.getClass(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG).getName() +
                        " specified in value.serializer", cce);
            }
            // 5. Compute the target partition from the record's key and value (covered in detail below)
            int partition = partition(record, serializedKey, serializedValue, cluster);
            tp = new TopicPartition(record.topic(), partition);

            setReadOnly(record.headers());
            Header[] headers = record.headers().toArray();

            int serializedSize = AbstractRecords.estimateSizeInBytesUpperBound(apiVersions.maxUsableProduceMagic(),
                    compressionType, serializedKey, serializedValue, headers);
            // 6. Validate the record size; if it exceeds maxRequestSize or totalMemorySize, a RecordTooLargeException is thrown
            ensureValidRecordSize(serializedSize);
            long timestamp = record.timestamp() == null ? time.milliseconds() : record.timestamp();
            log.trace("Sending record {} with callback {} to topic {} partition {}", record, callback, record.topic(), partition);
            // producer callback will make sure to call both 'callback' and interceptor callback
            Callback interceptCallback = new InterceptorCallback<>(callback, this.interceptors, tp);

            if (transactionManager != null && transactionManager.isTransactional())
                transactionManager.maybeAddPartitionToTransaction(tp);
            // 7. Append the record to the RecordAccumulator cache
            RecordAccumulator.RecordAppendResult result = accumulator.append(tp, timestamp, serializedKey,
                    serializedValue, headers, interceptCallback, remainingWaitMs);
            // 8. If the batch is full or a new ProducerBatch was created, wake up the Sender thread to send
            if (result.batchIsFull || result.newBatchCreated) {
                log.trace("Waking up the sender since topic {} partition {} is either full or getting a new batch", record.topic(), partition);
                // 9. Wake up the Sender thread to send the data (covered in detail below)
                this.sender.wakeup();
            }
            return result.future;
            // handling exceptions and record the errors;
            // for API exceptions return them in the future,
            // for other exceptions throw directly
        } catch (ApiException e) {
            log.debug("Exception occurred during message send:", e);
            if (callback != null)
                callback.onCompletion(null, e);
            this.errors.record();
            this.interceptors.onSendError(record, tp, e);
            return new FutureFailure(e);
        } catch (InterruptedException e) {
            this.errors.record();
            this.interceptors.onSendError(record, tp, e);
            throw new InterruptException(e);
        } catch (BufferExhaustedException e) {
            this.errors.record();
            this.metrics.sensor("buffer-exhausted-records").record();
            this.interceptors.onSendError(record, tp, e);
            throw e;
        } catch (KafkaException e) {
            this.errors.record();
            this.interceptors.onSendError(record, tp, e);
            throw e;
        } catch (Exception e) {
            // we notify interceptor about all exceptions, since onSend is called before anything else in this method
            this.interceptors.onSendError(record, tp, e);
            throw e;
        }
    }

From the source above we can see that:

  • send() has two overloads: when a message is sent without a callback, the callback defaults to null and the overloaded method is called (see the callback example after this list);
  • the overloaded send() first runs the interceptors (the user-defined ones if configured, otherwise the defaults), and only after the interceptors does it call doSend(), which actually sends the message.
  • doSend() mainly does the following:
  1. checks whether the producer has been closed;
  2. fetches the cluster metadata;
  3. serializes the key and value, and computes the target partition from the serialized key and value (detailed below);
  4. appends the record to the RecordAccumulator (detailed below);
  5. wakes up the Sender thread to send the data when a ProducerBatch is full or a new ProducerBatch has been created.
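
For reference, sending with an explicit callback (a hedged example against the public API, reusing the producer from the Demo above) looks like this:

// Callback has a single onCompletion(RecordMetadata, Exception) method, so a lambda works.
producer.send(new ProducerRecord<>("my-topic", "key", "value"), (metadata, exception) -> {
    if (exception != null) {
        exception.printStackTrace();      // the send ultimately failed
    } else {
        System.out.printf("sent to %s-%d@%d%n",
                metadata.topic(), metadata.partition(), metadata.offset());
    }
});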

DefaultPartitioner

Next, let's look at how Kafka determines which partition a record will be sent to.

partition()

// KafkaProducer's partition() method
private int partition(ProducerRecord<K, V> record, byte[] serializedKey, byte[] serializedValue, Cluster cluster) {
        Integer partition = record.partition();
        // If the record specifies a partition, return it directly; otherwise ask the partitioner to compute one
        return partition != null ?
                partition :
                partitioner.partition(
                        record.topic(), record.key(), serializedKey, record.value(), serializedValue, cluster);
    }
// DefaultPartitioner implements the Partitioner interface's partition() method; it is used when no custom Partitioner is configured
public int partition(String topic, Object key, byte[] keyBytes, Object value, byte[] valueBytes, Cluster cluster) {
        // 1. Get all partitions of the topic
        List<PartitionInfo> partitions = cluster.partitionsForTopic(topic);
        int numPartitions = partitions.size();
        if (keyBytes == null) {
            // 2. Neither a partition nor a key was given: on the first call a random integer is generated (and incremented on each subsequent call)
            int nextValue = nextValue(topic);
            List<PartitionInfo> availablePartitions = cluster.availablePartitionsForTopic(topic);
            // 3. The value from step 2 modulo the partition count gives the target partition
            if (availablePartitions.size() > 0) {
                int part = Utils.toPositive(nextValue) % availablePartitions.size();
                return availablePartitions.get(part).partition();
            } else {
                // no partitions are available, give a non-available partition
                return Utils.toPositive(nextValue) % numPartitions;
            }
        } else {
            // hash the keyBytes to choose a partition
            // 4. Hash of the key modulo the partition count gives the target partition
            return Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;
        }
    }

The two partition() methods above mainly do the following:

  • if the message specifies a partition when it is sent, that partition is used;
  • if no partition is specified, the hash of the key modulo the number of partitions is used as the target partition;
  • if neither a partition nor a key is given, a random integer is generated on the first call (and incremented on each subsequent call), and that integer modulo the number of partitions is used as the target partition (a custom Partitioner sketch follows this list).
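
A hedged sketch of such a custom Partitioner (hypothetical class name; it would be registered through the partitioner.class producer property):

import java.util.Arrays;
import java.util.Map;
import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;

// Example only: keyless records always go to partition 0, keyed records go by key hash.
public class KeyOrZeroPartitioner implements Partitioner {

    @Override
    public int partition(String topic, Object key, byte[] keyBytes,
                         Object value, byte[] valueBytes, Cluster cluster) {
        int numPartitions = cluster.partitionsForTopic(topic).size();
        if (keyBytes == null)
            return 0;                                                     // all keyless records to partition 0
        return (Arrays.hashCode(keyBytes) & 0x7fffffff) % numPartitions;  // non-negative modulo
    }

    @Override
    public void configure(Map<String, ?> configs) { }

    @Override
    public void close() { }
}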

RecordAccumulator

Now that we know how the target partition is computed, let's see how the RecordAccumulator appends a record to a Deque<ProducerBatch>.

append()

public RecordAppendResult append(TopicPartition tp,
                                     long timestamp,
                                     byte[] key,
                                     byte[] value,
                                     Header[] headers,
                                     Callback callback,
                                     long maxTimeToBlock) throws InterruptedException {
        // We keep track of the number of appending thread to make sure we do not miss batches in
        // abortIncompleteBatches().
        appendsInProgress.incrementAndGet();
        ByteBuffer buffer = null;
        if (headers == null) headers = Record.EMPTY_HEADERS;
        try {
            // check if we have an in-progress batch
            // 1. Get the Deque for this partition, creating one if it does not exist.
            Deque<ProducerBatch> dq = getOrCreateDeque(tp);
            synchronized (dq) {
                if (closed)
                    throw new KafkaException("Producer closed while send in progress");
                // 2. Try the last ProducerBatch of that Deque: if the record fits, append it and return the result; otherwise return null and fall through to allocating a new batch from memory.
                RecordAppendResult appendResult = tryAppend(timestamp, key, value, headers, callback, dq);
                if (appendResult != null)
                    return appendResult;
            }

            // we don't have an in-progress record batch try to allocate a new batch
            byte maxUsableMagic = apiVersions.maxUsableProduceMagic();
            int size = Math.max(this.batchSize, AbstractRecords.estimateSizeInBytesUpperBound(maxUsableMagic, compression, key, value, headers));
            log.trace("Allocating a new {} byte message buffer for topic {} partition {}", size, tp.topic(), tp.partition());
            // 3. Allocate a buffer sized max(batchSize, estimated record size) for the new ProducerBatch
            buffer = free.allocate(size, maxTimeToBlock);
            synchronized (dq) {
                // Need to check if producer is closed again after grabbing the dequeue lock.
                if (closed)
                    throw new KafkaException("Producer closed while send in progress");

                RecordAppendResult appendResult = tryAppend(timestamp, key, value, headers, callback, dq);
                if (appendResult != null) {
                    // Somebody else found us a batch, return the one we waited for! Hopefully this doesn't happen often...
                    return appendResult;
                }
				
                MemoryRecordsBuilder recordsBuilder = recordsBuilder(buffer, maxUsableMagic);
                ProducerBatch batch = new ProducerBatch(tp, recordsBuilder, time.milliseconds());
                // 4. Append the record to the newly created ProducerBatch
                FutureRecordMetadata future = Utils.notNull(batch.tryAppend(timestamp, key, value, headers, callback, time.milliseconds()));
                // 5. Add the new ProducerBatch to the tail of the corresponding Deque
                dq.addLast(batch);
                // 6. Track the new ProducerBatch in the incomplete set, meaning its records have not yet been acknowledged.
                incomplete.add(batch);

                // Don't deallocate this buffer in the finally block as it's being used in the record batch
                buffer = null;
                return new RecordAppendResult(future, dq.size() > 1 || batch.isFull(), true);
            }
        } finally {
            if (buffer != null)
                // 7. Return the unused buffer to the BufferPool
                free.deallocate(buffer);
            appendsInProgress.decrementAndGet();
        }
    }

The append() method above mainly does the following:

  • gets the Deque for the current partition, creating one if none exists;
  • gets a ProducerBatch from that Deque, creating one if none is usable;
  • appends the record to that ProducerBatch.

(Figure: how a record is written into the producer's RecordAccumulator)

Sender.java

The methods above show how KafkaProducer adds records to the cache and how the RecordAccumulator stores them. Next, from the Sender thread's side, we'll look at how Kafka takes the records back out, how it caches the requests, and how it finally sends them out.

sendProducerData()

  • when doSend() in KafkaProducer calls sender.wakeup(), the Sender thread, which loops in its run() method, is woken up to do its work;
  • run() calls runOnce(), and runOnce() eventually calls sendProducerData();
  • sendProducerData() contains the core of the sending logic, so I will skip run() and runOnce() (a simplified sketch of that loop follows) and go straight to sendProducerData().
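
For context, run() and runOnce() boil down roughly to the following (heavily simplified; transaction handling and shutdown logic are omitted):

// Sender.run(), simplified: loop until close() clears the running flag.
public void run() {
    while (running) {
        try {
            runOnce();
        } catch (Exception e) {
            log.error("Uncaught error in kafka producer I/O thread: ", e);
        }
    }
    // ... on shutdown, drain the remaining records and in-flight requests ...
}

// Sender.runOnce(), simplified: queue up requests, then do the actual socket I/O.
void runOnce() {
    long currentTimeMs = time.milliseconds();
    long pollTimeout = sendProducerData(currentTimeMs);  // analyzed below
    client.poll(pollTimeout, currentTimeMs);             // NetworkClient reads/writes the network
}
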
private long sendProducerData(long now) {
        Cluster cluster = metadata.fetch();
        // Find the nodes that host partitions ready to send, and any partitions whose leader is unknown
        RecordAccumulator.ReadyCheckResult result = this.accumulator.ready(cluster, now);

        // If any partition's leader is unknown, force a metadata update
        if (!result.unknownLeaderTopics.isEmpty()) {
            // The set of topics with unknown leader contains topics with leader election pending as well as
            // topics which may have expired. Add the topic again to metadata to ensure it is included
            // and request metadata update, since there are messages to send to the topic.
            for (String topic : result.unknownLeaderTopics)
                this.metadata.add(topic);

            log.debug("Requesting metadata update due to unknown leader topics from the batched records: {}",
                result.unknownLeaderTopics);
            this.metadata.requestUpdate();
        }

        // Iterate over the nodes that are ready to receive data
        Iterator<Node> iter = result.readyNodes.iterator();
        long notReadyTimeout = Long.MAX_VALUE;
        while (iter.hasNext()) {
            Node node = iter.next();
            // Check whether the client is connected to this node and ready to send; if not connected, initiate the connection and drop the node for this round
            if (!this.client.ready(node, now)) {
                iter.remove();
                notReadyTimeout = Math.min(notReadyTimeout, this.client.pollDelayMs(node, now));
            }
        }

        // Collect the batches to be sent to each node
        Map<Integer, List<ProducerBatch>> batches = this.accumulator.drain(cluster, result.readyNodes, this.maxRequestSize, now);
        // Record these batches, per partition and in order, in the inFlightBatches cache
        addToInflightBatches(batches);
        if (guaranteeMessageOrder) {
            for (List<ProducerBatch> batchList : batches.values()) {
                for (ProducerBatch batch : batchList)
                    // To guarantee message ordering, mute each batch's partition so no further batches for it are sent, even if they are ready
                    this.accumulator.mutePartition(batch.topicPartition);
            }
        }

        accumulator.resetNextBatchExpiryTime();
        List<ProducerBatch> expiredInflightBatches = getExpiredInflightBatches(now);
        List<ProducerBatch> expiredBatches = this.accumulator.expiredBatches(now);
        expiredBatches.addAll(expiredInflightBatches);

        // Reset the producer id if an expired batch has previously been sent to the broker. Also update the metrics
        // for expired batches. see the documentation of @TransactionState.resetProducerId to understand why
        // we need to reset the producer id here.
        if (!expiredBatches.isEmpty())
            log.trace("Expired {} batches in accumulator", expiredBatches.size());
        for (ProducerBatch expiredBatch : expiredBatches) {
            String errorMessage = "Expiring " + expiredBatch.recordCount + " record(s) for " + expiredBatch.topicPartition
                + ":" + (now - expiredBatch.createdMs) + " ms has passed since batch creation";
            failBatch(expiredBatch, -1, NO_TIMESTAMP, new TimeoutException(errorMessage), false);
            if (transactionManager != null && expiredBatch.inRetry()) {
                // This ensures that no new batches are drained until the current in flight batches are fully resolved.
                transactionManager.markSequenceUnresolved(expiredBatch.topicPartition);
            }
        }
        sensors.updateProduceRequestMetrics(batches);

        // If we have any nodes that are ready to send + have sendable data, poll with 0 timeout so this can immediately
        // loop and try sending more data. Otherwise, the timeout will be the smaller value between next batch expiry
        // time, and the delay time for checking data availability. Note that the nodes may have data that isn't yet
        // sendable due to lingering, backing off, etc. This specifically does not include nodes with sendable data
        // that aren't ready to send since they would cause busy looping.
        long pollTimeout = Math.min(result.nextReadyCheckDelayMs, notReadyTimeout);
        pollTimeout = Math.min(pollTimeout, this.accumulator.nextExpiryTimeMs() - now);
        pollTimeout = Math.max(pollTimeout, 0);
        if (!result.readyNodes.isEmpty()) {
            log.trace("Nodes with data ready to send: {}", result.readyNodes);
            // if some partitions are already ready to be sent, the select time would be 0;
            // otherwise if some partition already has some data accumulated but not ready yet,
            // the select time will be the time difference between now and its linger expiry time;
            // otherwise the select time will be the time difference between now and the metadata expiry time;
            pollTimeout = 0;
        }
        // Convert the ProducerBatches into the corresponding ProduceRequests and hand them to the NetworkClient to be written to the network
        sendProduceRequests(batches, now);
        return pollTimeout;
    }

sendProduceRequest()

 private void sendProduceRequest(long now, int destination, short acks, int timeout, List<ProducerBatch> batches) {
        if (batches.isEmpty())
            return;

        Map<TopicPartition, MemoryRecords> produceRecordsByPartition = new HashMap<>(batches.size());
        final Map<TopicPartition, ProducerBatch> recordsByPartition = new HashMap<>(batches.size());

        // find the minimum magic version used when creating the record sets
        byte minUsedMagic = apiVersions.maxUsableProduceMagic();
        for (ProducerBatch batch : batches) {
            if (batch.magic() < minUsedMagic)
                minUsedMagic = batch.magic();
        }

        for (ProducerBatch batch : batches) {
            TopicPartition tp = batch.topicPartition;
            MemoryRecords records = batch.records();

            // down convert if necessary to the minimum magic used. In general, there can be a delay between the time
            // that the producer starts building the batch and the time that we send the request, and we may have
            // chosen the message format based on out-dated metadata. In the worst case, we optimistically chose to use
            // the new message format, but found that the broker didn't support it, so we need to down-convert on the
            // client before sending. This is intended to handle edge cases around cluster upgrades where brokers may
            // not all support the same message format version. For example, if a partition migrates from a broker
            // which is supporting the new magic version to one which doesn't, then we will need to convert.
            if (!records.hasMatchingMagic(minUsedMagic))
                records = batch.records().downConvert(minUsedMagic, 0, time).records();
            produceRecordsByPartition.put(tp, records);
            recordsByPartition.put(tp, batch);
        }

        String transactionalId = null;
        if (transactionManager != null && transactionManager.isTransactional()) {
            transactionalId = transactionManager.transactionalId();
        }
        // Build the ProduceRequest from the ProducerBatches
        ProduceRequest.Builder requestBuilder = ProduceRequest.Builder.forMagic(minUsedMagic, acks, timeout,
                produceRecordsByPartition, transactionalId);
        RequestCompletionHandler callback = new RequestCompletionHandler() {
            public void onComplete(ClientResponse response) {
                handleProduceResponse(response, recordsByPartition, time.milliseconds());
            }
        };

        String nodeId = Integer.toString(destination);
        // Wrap the ProduceRequest in a ClientRequest
        ClientRequest clientRequest = client.newClientRequest(nodeId, requestBuilder, now, acks != 0,
                requestTimeoutMs, callback);
        // Hand the ClientRequest to the NetworkClient, which writes it to the network
        client.send(clientRequest, now);
        log.trace("Sent produce request to {}: {}", nodeId, requestBuilder);
    }

Taken together, the methods above mainly do the following:

  • read ProducerBatches from the RecordAccumulator, obtain the list of target nodes, and associate the ProducerBatches with their nodes;
  • convert the ProducerBatches into ProduceRequests, and further into ClientRequests;
  • call NetworkClient.send() to put the data on the wire;
  • NetworkClient itself does a lot more work, but that is not discussed further here; read its source if you are interested.

The source code in this post is based on Kafka 2.3.0. The architecture overview largely follows the book 《深入理解Kafka:核心設計與實踐原理》.
