The bootstrap code for a Netty client looks like this:
EventLoopGroup group = new NioEventLoopGroup();
try {
Bootstrap b = new Bootstrap();
b.group(group)
.channel(NioSocketChannel.class)
.option(ChannelOption.TCP_NODELAY, true)
.handler(new ChannelInitializer<SocketChannel>() {
@Override
public void initChannel(SocketChannel ch) throws Exception {
ch.pipeline().addLast(
//new LoggingHandler(LogLevel.INFO),
new EchoClientHandler(firstMessageSize));
}
});
// Start the client.
ChannelFuture f = b.connect(host, port).sync();
// Wait until the connection is closed.
f.channel().closeFuture().sync();
} finally {
// Shut down the event loop to terminate all threads.
group.shutdownGracefully();
}
As usual, EventLoopGroup, handlers and the like were covered in earlier posts, so this time we focus on tracing the line ChannelFuture f = b.connect(host, port).sync();. Below is the connect() method of the Bootstrap class:
public ChannelFuture connect() {
validate();
SocketAddress remoteAddress = this.remoteAddress;
if (remoteAddress == null) {
throw new IllegalStateException("remoteAddress not set");
}
return doConnect(remoteAddress, localAddress());
}
It looks a lot like the ServerBootstrap.bind() we traced before. validate() simply calls the parent AbstractBootstrap's validate(), which checks that the group and channelFactory fields have been set on this Bootstrap.
It then calls doConnect() to establish the connection:
private ChannelFuture doConnect(final SocketAddress remoteAddress, final SocketAddress localAddress) {
final ChannelFuture regFuture = initAndRegister();
final Channel channel = regFuture.channel();
if (regFuture.cause() != null) {
return regFuture;
}
final ChannelPromise promise = channel.newPromise();
if (regFuture.isDone()) {
doConnect0(regFuture, channel, remoteAddress, localAddress, promise);
} else {
regFuture.addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture future) throws Exception {
doConnect0(regFuture, channel, remoteAddress, localAddress, promise);
}
});
}
return promise;
}
If you have read my earlier post on the server side, this will feel very familiar. First, initAndRegister() returns a ChannelFuture instance, regFuture, through which we can observe the state of the channel registration. regFuture.cause() tells us whether initAndRegister() threw an exception; if it did, we return immediately. Otherwise we use regFuture to wait for the channel registration to complete and then call doConnect0().
Let's look at initAndRegister() in more detail.
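The isDone()/addListener() split in doConnect() is a general pattern: run the next step inline if the future has already completed, otherwise defer it with a callback. A stand-alone analogy using the JDK's CompletableFuture (my own illustrative sketch, not Netty code; the names whenRegistered and connectStep are invented):

```java
import java.util.concurrent.CompletableFuture;

public class DoneOrListen {
    // Mirrors doConnect(): if the registration future is already done,
    // run the next step inline; otherwise attach a listener that runs it
    // once registration completes.
    static void whenRegistered(CompletableFuture<Void> regFuture, Runnable connectStep) {
        if (regFuture.isDone()) {
            connectStep.run();
        } else {
            regFuture.thenRun(connectStep);
        }
    }

    public static void main(String[] args) {
        CompletableFuture<Void> reg = new CompletableFuture<>();
        whenRegistered(reg, () -> System.out.println("connect now"));
        reg.complete(null); // completing the future triggers the deferred step
    }
}
```

Netty does the same thing with its own ChannelFuture/ChannelFutureListener types instead of CompletableFuture; the point is that doConnect0() must not run before registration has finished.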
final ChannelFuture initAndRegister() {
final Channel channel = channelFactory().newChannel();
try {
init(channel);
} catch (Throwable t) {
channel.unsafe().closeForcibly();
return channel.newFailedFuture(t);
}
ChannelFuture regFuture = group().register(channel);
if (regFuture.cause() != null) {
if (channel.isRegistered()) {
channel.close();
} else {
channel.unsafe().closeForcibly();
}
}
return regFuture;
}
final Channel channel = channelFactory().newChannel();
This channel is an instance of NioSocketChannel; let's look at NioSocketChannel's constructors:
public NioSocketChannel() {
this(newSocket());
}
public NioSocketChannel(SocketChannel socket) {
this(null, socket);
}
public NioSocketChannel(Channel parent, SocketChannel socket) {
super(parent, socket);
config = new DefaultSocketChannelConfig(this, socket.socket());
}
It calls the parent constructor, passing null as the parent channel and newSocket() as the socket (which uses the provider to open a Java NIO SocketChannelImpl). As the constructor chain runs upward, AbstractNioByteChannel sets the NioSocketChannel's readInterestOp to OP_READ, and finally AbstractChannel binds the unsafe and the pipeline.
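The newSocket() call mentioned above is a thin wrapper over the JDK. A minimal sketch of what it amounts to, using the plain JDK API (the class name NewSocketSketch is mine, not Netty's):

```java
import java.io.IOException;
import java.nio.channels.SocketChannel;
import java.nio.channels.spi.SelectorProvider;

public class NewSocketSketch {
    // Roughly what NioSocketChannel's newSocket() does: ask the default
    // SelectorProvider to open a Java NIO SocketChannel (a SocketChannelImpl).
    static SocketChannel newSocket() throws IOException {
        return SelectorProvider.provider().openSocketChannel();
    }

    public static void main(String[] args) throws IOException {
        SocketChannel ch = newSocket();
        System.out.println(ch.isOpen());      // true: open but not yet connected
        System.out.println(ch.isConnected()); // false: connect happens later, in doConnect
        ch.close();
    }
}
```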
protected AbstractChannel(Channel parent) {
this.parent = parent;
unsafe = newUnsafe();
pipeline = new DefaultChannelPipeline(this);
}
init(channel);
The concrete logic is in the Bootstrap class and is clearly much simpler than the server side:
void init(Channel channel) throws Exception {
ChannelPipeline p = channel.pipeline();
p.addLast(handler());
final Map<ChannelOption<?>, Object> options = options();
synchronized (options) {
for (Entry<ChannelOption<?>, Object> e: options.entrySet()) {
try {
if (!channel.config().setOption((ChannelOption<Object>) e.getKey(), e.getValue())) {
logger.warn("Unknown channel option: " + e);
}
} catch (Throwable t) {
logger.warn("Failed to set a channel option: " + channel, t);
}
}
}
}
First, the handler that was set on the Bootstrap is added to the channel's pipeline. Under the hood the handler is wrapped in an AbstractChannelHandlerContext and linked into the pipeline's doubly linked list of AbstractChannelHandlerContext nodes. The rest of the code simply applies the options and attrs to the channel.
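To make the "doubly linked list of handler contexts" concrete, here is a toy sketch (my own simplification, not Netty's classes): each context is a linked node wrapping one handler, addLast() splices before the tail, and an inbound event walks head to tail.

```java
import java.util.ArrayList;
import java.util.List;

public class ToyPipeline {
    // A toy context node: doubly linked, each wrapping one handler name.
    static final class Ctx {
        final String name;
        Ctx prev, next;
        Ctx(String name) { this.name = name; }
    }

    final Ctx head = new Ctx("head");
    final Ctx tail = new Ctx("tail");

    ToyPipeline() { head.next = tail; tail.prev = head; }

    // addLast(): splice a new context in just before tail, much as
    // DefaultChannelPipeline wraps each handler in a context node.
    void addLast(String name) {
        Ctx ctx = new Ctx(name);
        Ctx prev = tail.prev;
        prev.next = ctx; ctx.prev = prev;
        ctx.next = tail; tail.prev = ctx;
    }

    // An inbound event walks head -> tail, visiting each handler once.
    List<String> fireInbound() {
        List<String> visited = new ArrayList<>();
        for (Ctx c = head.next; c != tail; c = c.next) {
            visited.add(c.name);
        }
        return visited;
    }

    public static void main(String[] args) {
        ToyPipeline p = new ToyPipeline();
        p.addLast("decoder");
        p.addLast("business");
        System.out.println(p.fireInbound()); // [decoder, business]
    }
}
```

Outbound events (such as connect, which we trace below) walk the same list in the opposite direction, from tail toward head.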
ChannelFuture regFuture = group().register(channel);
This registers the channel with an event loop of the group, through the unsafe (see the EventLoopGroup post for details).
Alright, we have finally reached the doConnect0() method:
private static void doConnect0(
final ChannelFuture regFuture, final Channel channel,
final SocketAddress remoteAddress, final SocketAddress localAddress, final ChannelPromise promise) {
channel.eventLoop().execute(new Runnable() {
@Override
public void run() {
if (regFuture.isSuccess()) {
if (localAddress == null) {
channel.connect(remoteAddress, promise);
} else {
channel.connect(remoteAddress, localAddress, promise);
}
promise.addListener(ChannelFutureListener.CLOSE_ON_FAILURE);
} else {
promise.setFailure(regFuture.cause());
}
}
});
}
Once again regFuture tells us whether the earlier registration and initialization succeeded. If so, channel.connect() is called; note that by this point the code is running on the channel's EventLoop thread. The channel is a NioSocketChannel instance, so this invokes AbstractChannel's connect():
@Override
public ChannelFuture connect(SocketAddress remoteAddress) {
return pipeline.connect(remoteAddress);
}
The pipeline then calls connect() on its tail, and the call propagates along the chain toward the head:
@Override
public ChannelFuture connect(SocketAddress remoteAddress) {
return tail.connect(remoteAddress);
}
@Override
public ChannelFuture connect(
final SocketAddress remoteAddress, final SocketAddress localAddress, final ChannelPromise promise) {
if (remoteAddress == null) {
throw new NullPointerException("remoteAddress");
}
validatePromise(promise, false);
final DefaultChannelHandlerContext next = findContextOutbound();
EventExecutor executor = next.executor();
if (executor.inEventLoop()) {
next.invokeConnect(remoteAddress, localAddress, promise);
} else {
safeExecute(executor, new Runnable() {
@Override
public void run() {
next.invokeConnect(remoteAddress, localAddress, promise);
}
}, promise, null);
}
return promise;
}
This logic was analysed in earlier posts as well: the connect call propagates along the outbound chain until it reaches the head node, where the head handler calls unsafe.connect():
@Override
public void connect(
ChannelHandlerContext ctx,
SocketAddress remoteAddress, SocketAddress localAddress,
ChannelPromise promise) throws Exception {
unsafe.connect(remoteAddress, localAddress, promise);
}
unsafe is constructed by the newUnsafe() method in AbstractNioByteChannel:
@Override
protected AbstractNioUnsafe newUnsafe() {
return new NioByteUnsafe();
}
NioByteUnsafe mainly contains the more concrete operations such as read; the connect method is implemented in its parent class AbstractNioChannel, and its core is a call to the abstract method doConnect. Let's find the concrete doConnect logic in the subclass.
It lives in NioSocketChannel:
@Override
protected boolean doConnect(SocketAddress remoteAddress, SocketAddress localAddress) throws Exception {
if (localAddress != null) {
javaChannel().socket().bind(localAddress);
}
boolean success = false;
try {
boolean connected = javaChannel().connect(remoteAddress);
if (!connected) {
selectionKey().interestOps(SelectionKey.OP_CONNECT);
}
success = true;
return connected;
} finally {
if (!success) {
doClose();
}
}
}
In the method above, javaChannel() returns the Java NIO SocketChannel (more concretely, a SocketChannelImpl) that was created when the NioSocketChannel was initialized. Calling this instance's connect() method performs the actual socket connection at the Java NIO level; at this point we have reached the JDK NIO layer.
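Stripped of Netty, doConnect() above is the standard JDK non-blocking connect idiom: connect() may return false, in which case you register for OP_CONNECT and call finishConnect() once the selector fires. A self-contained loopback sketch of that idiom in plain JDK NIO (the class and method names here are illustrative, not Netty's):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class NonBlockingConnect {
    // Connect to a loopback listener the way doConnect() does: connect()
    // may return false; if so, register OP_CONNECT and finishConnect()
    // when the selector reports the channel as connectable.
    static boolean connectLoopback() throws IOException {
        // A throwaway server on an ephemeral loopback port so the connect can succeed.
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0));

        SocketChannel client = SocketChannel.open();
        client.configureBlocking(false);
        boolean connected = client.connect(server.getLocalAddress());
        if (!connected) {
            Selector selector = Selector.open();
            SelectionKey key = client.register(selector, SelectionKey.OP_CONNECT);
            selector.select();            // blocks until OP_CONNECT fires
            if (key.isConnectable()) {
                connected = client.finishConnect();
            }
            selector.close();
        }
        client.close();
        server.close();
        return connected;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(connectLoopback()); // true
    }
}
```

Netty's version differs only in that the selector loop belongs to the EventLoop, and finishConnect() is invoked later when the reactor thread processes the OP_CONNECT key.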
Now that the client has issued a connect request, shouldn't the server accept it at the bottom? Let's continue with the analysis.
With the server running, the reactor thread of its bossEventGroup keeps polling for new accept events. Here is the concrete logic:
select() picks up I/O tasks, processSelectedKeysOptimized() handles the I/O events, and processSelectedKey(k, ch) gets called:
if ((readyOps & (SelectionKey.OP_READ | SelectionKey.OP_ACCEPT)) != 0 || readyOps == 0) {
unsafe.read();
if (!ch.isOpen()) {
// Connection already closed - no need to handle write.
return;
}
}
At this point the reactor thread has polled an OP_ACCEPT event, meaning a new connection request has arrived, so unsafe.read() is called to do the actual work.
Here unsafe is an instance of NioMessageUnsafe. The following is a small summary:
/**
 * Unsafe is the interface through which a channel actually performs bind, connect,
 * read, write, etc. AbstractNioUnsafe is a default abstract implementation;
 * for NIO it has two subclasses:
 * NioByteUnsafe: mainly responsible for NioSocketChannel operations
 * NioMessageUnsafe: mainly responsible for NioServerSocketChannel operations
 */
Let's look at NioMessageUnsafe's read() method:
@Override
public void read() {
assert eventLoop().inEventLoop();
if (!config().isAutoRead()) {
removeReadOp();
}
final ChannelConfig config = config();
final int maxMessagesPerRead = config.getMaxMessagesPerRead();
final boolean autoRead = config.isAutoRead();
final ChannelPipeline pipeline = pipeline();
boolean closed = false;
Throwable exception = null;
try {
for (; ; ) {
int localRead = doReadMessages(readBuf);
if (localRead == 0) {
break;
}
if (localRead < 0) {
closed = true;
break;
}
if (readBuf.size() >= maxMessagesPerRead | !autoRead) {
break;
}
}
} catch (Throwable t) {
exception = t;
}
int size = readBuf.size();
for (int i = 0; i < size; i++) {
pipeline.fireChannelRead(readBuf.get(i));
}
readBuf.clear();
pipeline.fireChannelReadComplete();
if (exception != null) {
if (exception instanceof IOException) {
closed = !(AbstractNioMessageChannel.this instanceof ServerChannel);
}
pipeline.fireExceptionCaught(exception);
}
if (closed) {
if (isOpen()) {
close(voidPromise());
}
}
}
At the start it asserts that the current thread is the EventLoop thread and gets the pipeline, then calls the abstract method doReadMessages(readBuf) in a loop until a read fails and it breaks out, or autoRead is false; afterwards it fires fireChannelRead for every element in readBuf.
Let's look at the subclass's doReadMessages implementation first.
@Override
protected int doReadMessages(List<Object> buf) throws Exception {
// 1. accept the client connection, producing the corresponding client SocketChannel
SocketChannel ch = javaChannel().accept();
try {
if (ch != null) {
/**
 * 2. Create a NioSocketChannel, passing the ServerSocketChannel and the newly
 * produced SocketChannel into the constructor, and add the resulting
 * new NioSocketChannel(this, ch) to List<Object> buf
 */
buf.add(new NioSocketChannel(this, ch));
return 1;
}
} catch (Throwable t) {
logger.warn("Failed to create a new channel from an accepted socket.", t);
try {
ch.close();
} catch (Throwable t2) {
logger.warn("Failed to close a socket.", t2);
}
}
return 0;
}
As you can see, javaChannel() is the ServerSocketChannel instance; at this point we are again at the JDK level.
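At the JDK level, the accept side is just as plain. A minimal sketch (JDK NIO only; class and method names are mine, and the loopback setup is for illustration) of the OP_ACCEPT path that doReadMessages() sits on top of:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class MiniAcceptLoop {
    // Accept one connection the way the boss reactor does: register the
    // listening channel for OP_ACCEPT, select, then accept().
    static SocketChannel acceptOne(ServerSocketChannel server) throws IOException {
        Selector selector = Selector.open();
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);
        selector.select();                  // blocks until a connection is pending
        SocketChannel ch = server.accept(); // the "javaChannel().accept()" step
        selector.close();
        return ch;
    }

    public static void main(String[] args) throws IOException {
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0));
        // A client connects from the same process so select() has something to report.
        SocketChannel client = SocketChannel.open(server.getLocalAddress());
        SocketChannel accepted = acceptOne(server);
        System.out.println(accepted != null); // true: one pending connection accepted
        client.close();
        accepted.close();
        server.close();
    }
}
```

Netty wraps the accepted SocketChannel in a NioSocketChannel instead of returning it directly, which is exactly what the extension below walks through.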
The post is essentially finished here, but let's extend it a little further.
On success, doReadMessages returns 1 and adds to readBuf a NioSocketChannel constructed from the ServerSocketChannel and the accepted SocketChannel. Back in the method above, each element of readBuf triggers fireChannelRead(), whose argument is that NioSocketChannel object, which wraps 1) the Java NIO ServerSocketChannel that performed the accept and 2) the Java NIO SocketChannel produced by the accept.
Since you have read this far, let's keep following the flow: a request arrives and is wrapped in a NioSocketChannel, and then what? Let's find the answer in NioSocketChannel.
The constructor: we saw this code earlier, when it was on the client side (parent = null); now we are on the server side (parent = the ServerSocketChannel).
public NioSocketChannel(Channel parent, SocketChannel socket) {
super(parent, socket);
config = new DefaultSocketChannelConfig(this, socket.socket());
}
Back to pipeline.fireChannelRead(readBuf.get(i)): this simply calls fireChannelRead on the pipeline of the NioServerSocketChannel:
@Override
public ChannelPipeline fireChannelRead(Object msg) {
head.fireChannelRead(msg);
return this;
}
which in turn calls fireChannelRead on the pipeline's head; this time the direction is inbound, propagating from the head of the doubly linked list toward the tail:
@Override
public ChannelHandlerContext fireChannelRead(final Object msg) {
if (msg == null) {
throw new NullPointerException("msg");
}
final DefaultChannelHandlerContext next = findContextInbound();
EventExecutor executor = next.executor();
if (executor.inEventLoop()) {
next.invokeChannelRead(msg);
} else {
executor.execute(new Runnable() {
@Override
public void run() {
next.invokeChannelRead(msg);
}
});
}
return this;
}
The event is passed along. As analysed in the previous post, ServerBootstrap's init() adds a listener to the pipeline, so when the initChannel() event fires, a ServerBootstrapAcceptor (an inbound handler) is added to the pipeline.
private void invokeChannelRead(Object msg) {
try {
((ChannelInboundHandler) handler).channelRead(this, msg);
} catch (Throwable t) {
notifyHandlerException(t);
}
}
So, as the fireChannelRead event propagates along the chain, it reaches the channelRead method overridden by ServerBootstrapAcceptor:
@Override
@SuppressWarnings("unchecked")
public void channelRead(ChannelHandlerContext ctx, Object msg) {
// cast msg back to its real type; child is the channel between the server and the client produced by the accept
final Channel child = (Channel) msg;
// get the client channel's pipeline and add the childHandler that the user set on ServerBootstrap to it;
// this shows that every client channel has its own independent pipeline and handlers
child.pipeline().addLast(childHandler);
// apply the configured options to the client channel
for (Entry<ChannelOption<?>, Object> e : childOptions) {
try {
if (!child.config().setOption((ChannelOption<Object>) e.getKey(), e.getValue())) {
logger.warn("Unknown channel option: " + e);
}
} catch (Throwable t) {
logger.warn("Failed to set a channel option: " + child, t);
}
}
// apply the configured attributes to the client channel
for (Entry<AttributeKey<?>, Object> e : childAttrs) {
child.attr((AttributeKey<Object>) e.getKey()).set(e.getValue());
}
try {
/**
 * Core: register the client channel (the child produced when the server
 * accepts a client) with the childGroup event loop group, which handles
 * its I/O from now on
 */
childGroup.register(child).addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture future) throws Exception {
if (!future.isSuccess()) {
forceClose(child, future.cause());
}
}
});
} catch (Throwable t) {
forceClose(child, t);
}
}
That is basically the end: the childHandler is added to each client channel's pipeline, every client channel gets its options and attributes, and every client channel is registered with the child/worker EventLoopGroup.
After that, normal communication can proceed. With these few posts, the core part of Netty is covered~