ERROR Failed to clean up log for __consumer_offsets-30 in dir D:\kafka_2.13-2.5.0\kafka-logs

ERROR Failed to clean up log for __consumer_offsets-30 in dir D:\kafka_2.13-2.5.0\kafka-logs due to IOException (kafka.server.LogDirFailureChannel)
java.nio.file.FileSystemException: D:\kafka_2.13-2.5.0\kafka-logs\__consumer_offsets-30\00000000000000000000.timeindex.cleaned -> D:\kafka_2.13-2.5.0\kafka-logs\__consumer_offsets-30\00000000000000000000.timeindex.swap: The process cannot access the file because it is being used by another process.

        at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
        at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
        at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:387)
        at sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
        at java.nio.file.Files.move(Files.java:1395)
        at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:834)
        at kafka.log.AbstractIndex.renameTo(AbstractIndex.scala:207)
        at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:497)
        at kafka.log.Log.$anonfun$replaceSegments$4(Log.scala:2269)
        at kafka.log.Log.$anonfun$replaceSegments$4$adapted(Log.scala:2269)
        at scala.collection.immutable.List.foreach(List.scala:305)
        at kafka.log.Log.replaceSegments(Log.scala:2269)
        at kafka.log.Cleaner.cleanSegments(LogCleaner.scala:594)
        at kafka.log.Cleaner.$anonfun$doClean$6(LogCleaner.scala:519)
        at kafka.log.Cleaner.doClean(LogCleaner.scala:518)
        at kafka.log.Cleaner.clean(LogCleaner.scala:492)
        at kafka.log.LogCleaner$CleanerThread.cleanLog(LogCleaner.scala:361)
        at kafka.log.LogCleaner$CleanerThread.cleanFilthiestLog(LogCleaner.scala:334)
        at kafka.log.LogCleaner$CleanerThread.tryCleanFilthiestLog(LogCleaner.scala:314)
        at kafka.log.LogCleaner$CleanerThread.doWork(LogCleaner.scala:303)
        at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:96)
        Suppressed: java.nio.file.FileSystemException: D:\kafka_2.13-2.5.0\kafka-logs\__consumer_offsets-30\00000000000000000000.timeindex.cleaned -> D:\kafka_2.13-2.5.0\kafka-logs\__consumer_offsets-30\00000000000000000000.timeindex.swap: The process cannot access the file because it is being used by another process.

                at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
                at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
                at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:301)
                at sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
                at java.nio.file.Files.move(Files.java:1395)
                at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:831)
                ... 15 more
[2020-07-03 19:30:24,414] WARN [ReplicaManager broker=0] Stopping serving replicas in dir D:\kafka_2.13-2.5.0\kafka-logs (kafka.server.ReplicaManager)

Not long after Kafka starts up, the error above appears.

Reference: https://community.microstrategy.com/s/article/Kafka-could-not-be-started-due-to-Failed-to-clean-up-log-for-consumer-offsets-in-MicroStrategy-10-x?language=en_US

Why is this happening? 

This is caused by a defect in Apache Kafka where the service crashes upon trying to clean up data files that have exceeded the retention policy. This crash can occur after the service has been running for some time, or on startup. The data files contain all data that has been received by the Telemetry Server (i.e. Platform Analytics Statistics and DSSErrors log contents) and by default are automatically cleaned up after 7 days. See the Apache website for more details of the Kafka issue.

So their explanation is that Kafka crashes while trying to clean up data files that have exceeded the retention policy.

https://issues.apache.org/jira/browse/KAFKA-7278 collects many reports of the same problem. From a quick read, it is a defect in Kafka itself, but we still need to decide how to handle it, which means understanding what the root cause actually is. Judging from the stack trace, the log cleaner fails while renaming a memory-mapped .timeindex file: Windows, unlike Linux, refuses to rename or delete a file that is still mapped into memory, so the move in Utils.atomicMoveWithFallback throws, and the broker marks the whole log directory as failed.
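The locking behavior can be illustrated with a small Python sketch (this is not Kafka code; the file names below just imitate the ones in the stack trace). Kafka keeps index files memory-mapped, and the cleaner's swap step boils down to a rename of the mapped file, which Windows rejects with exactly the error logged above:

```python
import mmap
import os
import tempfile

# Stand-ins for Kafka's index files (hypothetical names, for illustration only).
d = tempfile.mkdtemp()
src = os.path.join(d, "00000000000000000000.timeindex.cleaned")
dst = os.path.join(d, "00000000000000000000.timeindex.swap")
with open(src, "wb") as f:
    f.write(b"\x00" * 4096)

f = open(src, "r+b")
m = mmap.mmap(f.fileno(), 0)  # Kafka keeps index files memory-mapped like this

try:
    # Kafka's Utils.atomicMoveWithFallback ultimately performs a rename like this.
    os.rename(src, dst)
    result = "renamed"        # POSIX allows renaming a file that is still mapped
except OSError as e:
    result = f"locked: {e}"   # Windows refuses: the mapping holds the file open

print(result)
m.close()
f.close()
```

On Linux the rename succeeds; on Windows it fails with a sharing violation, which is why the same broker configuration that runs fine on Linux keeps crashing the cleaner on Windows.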

The log above records when the problem occurred, but there is no telling when it will happen next.
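Until the underlying defect is fixed, a common stopgap mentioned in the KAFKA-7278 discussion is to disable the log cleaner on Windows brokers so it never attempts the failing rename. This is a trade-off sketch, not an official recommendation: it stops compaction of __consumer_offsets, so that data grows until time-based retention removes it.

```properties
# server.properties — stopgap for the Windows rename failure.
# Disabling the cleaner stops compaction (including __consumer_offsets),
# trading disk growth for broker stability.
log.cleaner.enable=false
```

Alternatively, running the broker on Linux (or in a Linux container/WSL) avoids the problem entirely, since the rename of a mapped file is legal there.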
