Delete topics with care in a Confluent environment

  1. Look at a piece of code
    kafka-connect-hdfs-2.0.0\src\main\java\io\confluent\connect\hdfs\TopicPartitionWriter.java
  private void writeRecord(SinkRecord record) throws IOException {
    long expectedOffset = offset + recordCounter;
    if (offset == -1) {
      offset = record.kafkaOffset();
    } else if (record.kafkaOffset() != expectedOffset) {
      // Currently it's possible to see stale data with the wrong offset after a rebalance when you
      // rewind, which we do since we manage our own offsets. See KAFKA-2894.
      if (!sawInvalidOffset) {
        log.info(
            "Ignoring stale out-of-order record in {}-{}. Has offset {} instead of expected offset {}",
            record.topic(), record.kafkaPartition(), record.kafkaOffset(), expectedOffset);
      }
      sawInvalidOffset = true;
      return;
    }
    // ... the record is actually written and recordCounter advanced below (omitted here)
  }
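
Note that the method returns before offset or recordCounter is updated, so expectedOffset never changes and the mismatch can never heal on its own. The following is a minimal standalone sketch, not the connector's actual class, that reproduces this guard; the concrete numbers are illustrative, loosely based on the log excerpt below. It shows that once a topic has been deleted and recreated, so that broker offsets restart at 0 while the writer still holds the offset it recovered from HDFS, every subsequent record is silently dropped:

public class StaleOffsetDemo {
  private long offset = 96_789_607L;  // offset recovered from existing HDFS files
  private long recordCounter = 1;     // records buffered since that offset

  /** Mirrors the guard in writeRecord(): returns false when the record would be skipped. */
  boolean accept(long kafkaOffset) {
    long expectedOffset = offset + recordCounter;
    if (offset == -1) {
      offset = kafkaOffset;           // fresh writer: adopt the first offset seen
    } else if (kafkaOffset != expectedOffset) {
      return false;                   // counters are NOT advanced, so the mismatch
                                      // persists for every later record
    }
    recordCounter++;
    return true;
  }

  public static void main(String[] args) {
    StaleOffsetDemo writer = new StaleOffsetDemo();
    // After the topic is deleted and recreated, incoming offsets restart at 0.
    for (long o = 0; o < 3; o++) {
      System.out.println("offset " + o + " accepted? " + writer.accept(o));
    }
    // Prints "false" three times: no record ever reaches HDFS again.
  }
}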
  2. Look at a log excerpt
[2016-07-01 18:19:50,199] INFO Ignoring stale out-of-order record in beaver_http_response-1. Has offset 122980245 instead of expected offset 96789608 (io.confluent.connect.hdfs.TopicPartitionWriter:470)
[2016-07-01 18:19:50,200] INFO Starting commit and rotation for topic partition beaver_http_response-1 with start offsets {} and end offsets {} (io.confluent.connect.hdfs.TopicPartitionWriter:267)

This offhand-looking log line is the key clue to why the data simply could not make it into HDFS: once the offsets stop lining up, no data gets written at all.
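One way to confirm this situation is to compare the offset the connector expects against the offset range the broker actually holds for the partition. Below is a hedged diagnostic sketch using the consumer API (beginningOffsets/endOffsets, available since Kafka 0.10.1, newer than the client this 2016-era connector ships with); the broker address and the expected offset are placeholders:

import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;

public class OffsetGapCheck {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");   // placeholder broker address
    props.put("key.deserializer", ByteArrayDeserializer.class.getName());
    props.put("value.deserializer", ByteArrayDeserializer.class.getName());

    TopicPartition tp = new TopicPartition("beaver_http_response", 1);
    long expectedOffset = 96_789_608L;  // what the connector recovered from HDFS

    try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
      Map<TopicPartition, Long> begin = consumer.beginningOffsets(Collections.singletonList(tp));
      Map<TopicPartition, Long> end = consumer.endOffsets(Collections.singletonList(tp));
      long lo = begin.get(tp), hi = end.get(tp);
      if (expectedOffset < lo || expectedOffset > hi) {
        // The offset the connector expects no longer exists on the broker, so
        // every incoming record will be treated as stale and dropped.
        System.out.printf("Gap detected: expected %d, broker range [%d, %d)%n",
            expectedOffset, lo, hi);
      }
    }
  }
}

If the expected offset falls outside the broker's range, the connector's recovered state no longer matches the (recreated) topic. Broadly speaking, the way out is to reset the connector's stored offsets (the HDFS connector recovers them from the file names and WAL it keeps in HDFS), which is exactly why the title warns against deleting topics casually.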
