io.druid.java.util.common.ISE: Could not allocate segment for row with timestamp

Today, while using Alibaba Cloud's druid.io service, I noticed that every kafka-index-service task was ending in failure. Checking the error log, I found the following:


io.druid.java.util.common.ISE: Could not allocate segment for row with timestamp[2019-11-21T09:17:29.000Z]
	at io.druid.indexing.kafka.KafkaIndexTask.run(KafkaIndexTask.java:642) ~[?:?]
	at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:444) [druid-indexing-service-0.12.3.jar:0.12.3]
	at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:416) [druid-indexing-service-0.12.3.jar:0.12.3]
	at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_151]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_151]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_151]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]
2019-11-21T09:17:36,821 INFO [task-runner-0-priority-0] io.druid.indexing.overlord.TaskRunnerUtils - Task [index_kafka_TEST_CTI_PT30M_PT1H_92985e76664003b_cihombii] status changed to [FAILED].
2019-11-21T09:17:36,824 INFO [task-runner-0-priority-0] io.druid.indexing.worker.executor.ExecutorLifecycle - Task completed with status: {
  "id" : "index_kafka_TEST_CTI_PT30M_PT1H_92985e76664003b_cihombii",
  "status" : "FAILED",
  "duration" : 516
}

Silly me. Go check your
segmentGranularity and queryGranularity: make sure queryGranularity <= segmentGranularity, i.e., queryGranularity must never be coarser than segmentGranularity.

  "granularitySpec": {
    "type": "uniform",
    "segmentGranularity": "DAY",
    "queryGranularity": "DAY",
    "rollup": true
  }
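
To catch this misconfiguration before submitting a supervisor spec, you can run a quick sanity check on the granularitySpec yourself. The sketch below is a hypothetical helper (not part of Druid) that ranks the common Druid granularity names from finest to coarsest and verifies that queryGranularity is no coarser than segmentGranularity:

```python
# Hypothetical pre-submit check for a Druid granularitySpec.
# Ranks common granularity strings from finest to coarsest; this is an
# illustration, not Druid's own validation logic.
GRANULARITY_ORDER = ["SECOND", "MINUTE", "FIFTEEN_MINUTE", "THIRTY_MINUTE",
                     "HOUR", "DAY", "WEEK", "MONTH", "YEAR"]

def granularity_ok(granularity_spec):
    """Return True if queryGranularity is no coarser than segmentGranularity."""
    seg = GRANULARITY_ORDER.index(granularity_spec["segmentGranularity"])
    qry = GRANULARITY_ORDER.index(granularity_spec["queryGranularity"])
    return qry <= seg

# The corrected spec from above passes:
good = {"type": "uniform", "segmentGranularity": "DAY",
        "queryGranularity": "DAY", "rollup": True}
print(granularity_ok(good))   # True

# A spec with queryGranularity coarser than segmentGranularity fails:
bad = {"type": "uniform", "segmentGranularity": "THIRTY_MINUTE",
       "queryGranularity": "HOUR", "rollup": True}
print(granularity_ok(bad))    # False
```

A spec like the "bad" one above, where segments are cut every 30 minutes but rows are truncated to the hour, is exactly the kind of mismatch that makes segment allocation fail.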