Java NIO Framework Netty Tutorial (7): When Send and Receive Counts Don't Match

Last time we traced, through code, how Netty passes messages as a stream under the hood, but only at an intuitive level: how to use it and what to watch out for while doing so. Some detailed questions were left untouched. For example:


private void sendMessageByFrame(ChannelStateEvent e) {
    String msgOne = "Hello, ";
    String msgTwo = "I'm ";
    String msgThree = "client.";
    e.getChannel().write(tranStr2Buffer(msgOne));
    e.getChannel().write(tranStr2Buffer(msgTwo));
    e.getChannel().write(tranStr2Buffer(msgThree));
}
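
The tranStr2Buffer helper is not shown in this series' snippet; a minimal sketch of what such a String-to-buffer conversion might look like, assuming Netty 3.x's ChannelBuffers and CharsetUtil utilities:

import org.jboss.netty.buffer.ChannelBuffer;
import org.jboss.netty.buffer.ChannelBuffers;
import org.jboss.netty.util.CharsetUtil;

// Hypothetical helper: copy the String's UTF-8 bytes into a ChannelBuffer
// so it can be handed to Channel.write().
private ChannelBuffer tranStr2Buffer(String msg) {
    return ChannelBuffers.copiedBuffer(msg, CharsetUtil.UTF_8);
}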

Written this way, the client sends three messages in a row. But if you count the receive events on the server side, you will find that most of the time only two event callbacks arrive, even though the messages themselves are complete. Others online have reported the same thing: after 10,000 consecutive writes, the number of received messages is often 999X. In short, the counts don't match. Why is that? The only way I could find the reason was to read the Netty source, so let's work through it together.
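
For reference, the receive-side counting mentioned above can be done with a handler roughly like the sketch below, assuming Netty 3.x's SimpleChannelUpstreamHandler (the class and field names here are made up for illustration):

import java.util.concurrent.atomic.AtomicInteger;

import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.MessageEvent;
import org.jboss.netty.channel.SimpleChannelUpstreamHandler;

public class CountingServerHandler extends SimpleChannelUpstreamHandler {

    // Counts how many messageReceived events actually fire on the server.
    private final AtomicInteger receiveCount = new AtomicInteger();

    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
        System.out.println("messageReceived event #" + receiveCount.incrementAndGet());
    }
}

With the three-write client above, this counter usually ends at 2 rather than 3.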

The natural starting point is the e.getChannel().write() call. Following the call chain, we first arrive at the AbstractNioWorker.java class:


protected void write0(AbstractNioChannel<?> channel) {
    boolean open = true;
    boolean addOpWrite = false;
    boolean removeOpWrite = false;
    boolean iothread = isIoThread(channel);

    long writtenBytes = 0;

    final SocketSendBufferPool sendBufferPool = this.sendBufferPool;
    final WritableByteChannel ch = channel.channel;
    final Queue<MessageEvent> writeBuffer = channel.writeBufferQueue;
    final int writeSpinCount = channel.getConfig().getWriteSpinCount();
    synchronized (channel.writeLock) {
        channel.inWriteNowLoop = true;
        for (;;) {
            MessageEvent evt = channel.currentWriteEvent;
            SendBuffer buf;
            if (evt == null) {
                if ((channel.currentWriteEvent = evt = writeBuffer.poll()) == null) {
                    removeOpWrite = true;
                    channel.writeSuspended = false;
                    break;
                }

                channel.currentWriteBuffer = buf = sendBufferPool.acquire(evt.getMessage());
            } else {
                buf = channel.currentWriteBuffer;
            }

            ChannelFuture future = evt.getFuture();
            try {
                long localWrittenBytes = 0;
                for (int i = writeSpinCount; i > 0; i --) {
                    localWrittenBytes = buf.transferTo(ch);
                    if (localWrittenBytes != 0) {
                        writtenBytes += localWrittenBytes;
                        break;
                    }
                    if (buf.finished()) {
                        break;
                    }
                }

                if (buf.finished()) {
                    // Successful write - proceed to the next message.
                    buf.release();
                    channel.currentWriteEvent = null;
                    channel.currentWriteBuffer = null;
                    evt = null;
                    buf = null;
                    future.setSuccess();
                } else {
                    // Not written fully - perhaps the kernel buffer is full.
                    addOpWrite = true;
                    channel.writeSuspended = true;

                    if (localWrittenBytes > 0) {
                        // Notify progress listeners if necessary.
                        future.setProgress(
                                localWrittenBytes,
                                buf.writtenBytes(), buf.totalBytes());
                    }
                    break;
                }
            } catch (AsynchronousCloseException e) {
                // Doesn't need a user attention - ignore.
            } catch (Throwable t) {
                if (buf != null) {
                    buf.release();
                }
                channel.currentWriteEvent = null;
                channel.currentWriteBuffer = null;
                buf = null;
                evt = null;
                future.setFailure(t);
                if (iothread) {
                    fireExceptionCaught(channel, t);
                } else {
                    fireExceptionCaughtLater(channel, t);
                }
                if (t instanceof IOException) {
                    open = false;
                    close(channel, succeededFuture(channel));
                }
            }
        }
        channel.inWriteNowLoop = false;

        // Initially, the following block was executed after releasing
        // the writeLock, but there was a race condition, and it has to be
        // executed before releasing the writeLock:
        //
        //
        if (open) {
            if (addOpWrite) {
                setOpWrite(channel);
            } else if (removeOpWrite) {
                clearOpWrite(channel);
            }
        }
    }
    if (iothread) {
        fireWriteComplete(channel, writtenBytes);
    } else {
        fireWriteCompleteLater(channel, writtenBytes);
    }
}

Here, buf.transferTo(ch) calls the underlying WritableByteChannel's write method, writing the buffer into the channel and sending it across. Debugging shows that every time this method is called, the breakpoint in the server's messageReceived method is hit once. Of course, this is only the surface, or rather exactly what was expected: I had suspected from the start that writing too quickly in succession was the cause, so I had already tried pausing one second after each write before issuing the next. With that pacing, everything behaved normally.
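
For completeness, the paced-write test mentioned above was simply a variation of the client code, something along these lines (a sketch, not the exact code used):

// Write the three messages with a one-second pause between them;
// with this pacing the server sees three separate messageReceived events.
private void sendMessageSlowly(ChannelStateEvent e) throws InterruptedException {
    e.getChannel().write(tranStr2Buffer("Hello, "));
    Thread.sleep(1000);
    e.getChannel().write(tranStr2Buffer("I'm "));
    Thread.sleep(1000);
    e.getChannel().write(tranStr2Buffer("client."));
}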

 
So what did we gain by tracing this far? My approach was to first prove that the problem is not writes overwriting each other on the write side, so that the search could move to the read side. I also added a counter here to check how many times transferTo is actually called. The answer is indeed three.


for (int i = writeSpinCount; i > 0; i --) {
    localWrittenBytes = buf.transferTo(ch);
    System.out.println(++count);

Next, let's look for the cause on the receiving side, in NioWorker's read method, implemented as follows:

@Override
protected boolean read(SelectionKey k) {
    final SocketChannel ch = (SocketChannel) k.channel();
    final NioSocketChannel channel = (NioSocketChannel) k.attachment();

    final ReceiveBufferSizePredictor predictor =
        channel.getConfig().getReceiveBufferSizePredictor();
    final int predictedRecvBufSize = predictor.nextReceiveBufferSize();

    int ret = 0;
    int readBytes = 0;
    boolean failure = true;

    ByteBuffer bb = recvBufferPool.acquire(predictedRecvBufSize);
    try {
        while ((ret = ch.read(bb)) > 0) {
            readBytes += ret;
            if (!bb.hasRemaining()) {
                break;
            }
        }
        failure = false;
    } catch (ClosedChannelException e) {
        // Can happen, and does not need a user attention.
    } catch (Throwable t) {
        fireExceptionCaught(channel, t);
    }

    if (readBytes > 0) {
        bb.flip();

        final ChannelBufferFactory bufferFactory =
            channel.getConfig().getBufferFactory();
        final ChannelBuffer buffer = bufferFactory.getBuffer(readBytes);
        buffer.setBytes(0, bb);
        buffer.writerIndex(readBytes);

        recvBufferPool.release(bb);

        // Update the predictor.
        predictor.previousReceiveBufferSize(readBytes);

        // Fire the event.
        fireMessageReceived(channel, buffer);
    } else {
        recvBufferPool.release(bb);
    }

    if (ret < 0 || failure) {
        k.cancel(); // Some JDK implementations run into an infinite loop without this.
        close(channel, succeededFuture(channel));
        return false;
    }

    return true;
}

This method is driven by an outer loop that keeps iterating; whenever a ready SelectionKey k exists, the method is entered to read the data from the buffer. The SelectionKey only distinguishes the event type. This touches on Java NIO's Selector mechanism, an important part of Netty's foundations, which I plan to weave into the next installment.
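
To give a rough idea of the Selector mechanism referred to here, the outer loop is essentially shaped like the plain Java NIO sketch below. This is not Netty's actual worker loop, which does considerably more, but the structure is the same:

import java.io.IOException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.util.Iterator;

public class SelectLoopSketch {
    public static void loop(Selector selector) throws IOException {
        for (;;) {
            selector.select();   // block until at least one registered channel is ready
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey k = it.next();
                it.remove();
                if (k.isReadable()) {
                    // In Netty's worker this is where read(k) is invoked; it drains
                    // whatever is currently readable and then fires messageReceived once.
                }
            }
        }
    }
}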

The messageReceived event is fired only after everything currently available in the receive buffer has been read. That explains why the messages arrive complete even though we receive fewer events than writes.
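
You can observe this directly by dumping what each event carries. With a handler method along these lines (assuming the client writes raw ChannelBuffers as above, and using Netty 3.x's CharsetUtil), a single messageReceived call may print the concatenated "Hello, I'm client." in one piece:

@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
    ChannelBuffer buffer = (ChannelBuffer) e.getMessage();
    // Everything that was readable when read() ran arrives in this one buffer,
    // so several client writes may show up here as a single string.
    System.out.println("one event carried: " + buffer.toString(CharsetUtil.UTF_8));
}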
 
From what we have seen so far, Netty transfers data through Java's NIO mechanism, and reads and writes are not strictly bound to events one-to-one. The data exists independently as a stream, with a buffer pool on both the read and write sides.
