Troubleshooting a Kafka Broker Error in a Test Environment
Symptoms
The Kafka cluster consists of 3 brokers. Producers and consumers were all running normally and the data looked fine, yet the broker side kept logging the following error:
[2019-05-31 10:55:27,540] ERROR Processor got uncaught exception. (kafka.network.Processor)
java.nio.BufferUnderflowException
at java.nio.Buffer.nextGetIndex(Buffer.java:506)
at java.nio.HeapByteBuffer.getInt(HeapByteBuffer.java:361)
at kafka.api.FetchRequest$$anonfun$1$$anonfun$apply$1.apply(FetchRequest.scala:53)
at kafka.api.FetchRequest$$anonfun$1$$anonfun$apply$1.apply(FetchRequest.scala:52)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.immutable.Range.foreach(Range.scala:141)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
at scala.collection.AbstractTraversable.map(Traversable.scala:105)
at kafka.api.FetchRequest$$anonfun$1.apply(FetchRequest.scala:52)
at kafka.api.FetchRequest$$anonfun$1.apply(FetchRequest.scala:49)
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251)
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251)
at scala.collection.immutable.Range.foreach(Range.scala:141)
at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:251)
at scala.collection.AbstractTraversable.flatMap(Traversable.scala:105)
at kafka.api.FetchRequest$.readFrom(FetchRequest.scala:49)
at kafka.network.RequestChannel$Request$$anonfun$2.apply(RequestChannel.scala:65)
at kafka.network.RequestChannel$Request$$anonfun$2.apply(RequestChannel.scala:65)
at kafka.network.RequestChannel$Request$$anonfun$4.apply(RequestChannel.scala:71)
at kafka.network.RequestChannel$Request$$anonfun$4.apply(RequestChannel.scala:71)
at scala.Option.map(Option.scala:145)
at kafka.network.RequestChannel$Request.<init>(RequestChannel.scala:71)
at kafka.network.Processor$$anonfun$processCompletedReceives$1.apply(SocketServer.scala:488)
at kafka.network.Processor$$anonfun$processCompletedReceives$1.apply(SocketServer.scala:483)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at kafka.network.Processor.processCompletedReceives(SocketServer.scala:483)
at kafka.network.Processor.run(SocketServer.scala:413)
at java.lang.Thread.run(Thread.java:748)
Analysis
- We had recently swapped the kafka-client dependency, but only on the producer side; the consumers were left on the old client and could no longer consume. While tracking that down I checked the broker logs, found this error, and figured the cause was a version mismatch among the clients talking to Kafka: different client versions differ in the byte lengths they write and read, so the broker can end up trying to read more bytes than the request actually contains.
- Sure enough, the producer's kafka-client version was higher than the consumer's, so I aligned them right away. After the change the consumer side was fine, but the broker was still logging the error...
- Was the original line of reasoning wrong? No — it really was a protocol version mismatch in Kafka's communication. Pulling further on that thread, I remembered that the brokers also talk to each other: at the very least there is replica-fetch traffic between them. A quick check confirmed it.
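One way to confirm this kind of mismatch is to compare the broker jar on each host: the file name, e.g. `kafka_2.10-0.10.1.1.jar`, encodes both the Scala version and the Kafka version. A minimal sketch, assuming the jars live under `/opt/kafka/libs` and using hypothetical file names standing in for what `ls` returned on the three brokers:

```shell
#!/bin/sh
# Hypothetical jar names collected from the three brokers, e.g. via
#   ssh <host> 'ls /opt/kafka/libs/kafka_*.jar'
# (hostnames and the install path are assumptions -- adjust to your deployment)
for jar in kafka_2.10-0.10.0.0.jar kafka_2.10-0.10.0.0.jar kafka_2.10-0.10.1.1.jar; do
    # strip the "kafka_<scala>-" prefix and the ".jar" suffix
    # to isolate the Kafka version
    v=${jar#kafka_2.10-}
    v=${v%.jar}
    echo "$v"
done | sort -u
# More than one line of output means the cluster runs mixed broker versions.
```

Here the check prints two distinct versions (`0.10.0.0` and `0.10.1.1`), which is exactly the inconsistency described above.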
Root cause
- Two of the brokers were deployed with kafka_2.10-0.10.0.0 (Scala 2.10, Kafka 0.10.0.0), while the third ran kafka_2.10-0.10.1.1.
Fix
Keep every broker in the cluster on exactly the same version. I redeployed all three brokers with kafka_2.10-0.10.0.0.
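After redeploying, a quick sanity check is to confirm that every broker now reports the same jar. A sketch, again using hypothetical post-fix file names in place of the per-host `ls` output:

```shell
#!/bin/sh
# Hypothetical post-fix jar names from the three brokers; all three
# should now be identical.
versions="kafka_2.10-0.10.0.0.jar
kafka_2.10-0.10.0.0.jar
kafka_2.10-0.10.0.0.jar"

# count how many distinct versions remain in the cluster
distinct=$(printf '%s\n' "$versions" | sort -u | wc -l)
if [ "$distinct" -eq 1 ]; then
    echo "OK: cluster is on a single Kafka version"
else
    echo "WARNING: $distinct different versions in the cluster"
fi
```

With a single version across all brokers, the replica-fetch traffic between them uses one consistent protocol and the BufferUnderflowException in the broker log should stop.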