Zuul 2's Thread Model
Reposted from: https://www.jianshu.com/p/cb413fec1632
Zuul 2 moves from Zuul 1's synchronous model to an asynchronous one: there is no more synchronous blocking, everything is built on event-driven programming, and the thread model becomes much simpler.
As a gateway, Zuul accepts requests from clients, acting as a server, and also establishes connections to backend services to forward those requests, acting as a client. Below we analyze how Zuul does both with a single thread pool, i.e. the server side and the client side share the same event loops. With Netty this is easy to achieve, but rather than take it on faith, let's look at how Zuul actually implements it.
Thread pool initialization
The number of event loop threads is configured as follows:
public DefaultEventLoopConfig()
{
    eventLoopCount = WORKER_THREADS.get() > 0 ? WORKER_THREADS.get() : PROCESSOR_COUNT;
    acceptorCount = ACCEPTOR_THREADS.get();
}
The boss thread count is 1.
The worker thread count defaults to PROCESSOR_COUNT, the number of CPU cores (defined below).
Because Zuul 2's thread model is asynchronous and achieves high throughput, the thread count can stay close to the number of CPU cores, which keeps context-switching overhead low.
private static final int PROCESSOR_COUNT = Runtime.getRuntime().availableProcessors();
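The fallback logic above can be sketched as a pure-JDK helper (the class and method names here are illustrative, not Zuul's):

```java
// Illustrative sketch of the worker-count fallback in DefaultEventLoopConfig:
// an explicitly configured value wins when positive, otherwise fall back to
// the number of available CPU cores.
public class EventLoopSizing {
    static int resolveWorkerCount(int configured, int processorCount) {
        return configured > 0 ? configured : processorCount;
    }

    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();
        System.out.println(resolveWorkerCount(-1, cores) == cores); // no config: core count wins
        System.out.println(resolveWorkerCount(8, cores) == 8);      // explicit config wins
    }
}
```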
Let's start from the Zuul server's start method:
public void start(boolean sync)
{
    // ServerGroup is Zuul's thin wrapper around Netty event loops.
    serverGroup = new ServerGroup("Salamander", eventLoopConfig.acceptorCount(), eventLoopConfig.eventLoopCount(), eventLoopGroupMetrics);
    // Decide whether event-driven I/O uses epoll or the JDK's select-based multiplexing; epoll is Linux-only.
    serverGroup.initializeTransport();
    try {
        List<ChannelFuture> allBindFutures = new ArrayList<>();

        // Setup each of the channel initializers on requested ports.
        for (Map.Entry<Integer, ChannelInitializer> entry : portsToChannelInitializers.entrySet())
        {
            allBindFutures.add(setupServerBootstrap(entry.getKey(), entry.getValue()));
        }

        // Once all server bootstraps are successfully initialized, then bind to each port.
        for (ChannelFuture f : allBindFutures) {
            // Wait until the server socket is closed.
            ChannelFuture cf = f.channel().closeFuture();
            if (sync) {
                cf.sync();
            }
        }
    }
    catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
}
Epoll or Selector
The operating system provides several I/O multiplexing mechanisms: select, poll, and epoll. select came first, but it caps the number of file descriptors and requires iterating over all of them, whereas epoll has no such limit and needs no iteration, making it the most efficient. The JDK also exposes an epoll-backed API, but by default it operates in level-triggered mode; Netty's native epoll transport uses edge-triggered mode, which is more efficient. This is why Netty applications typically include an adapter that chooses between epoll and the JDK's selector.
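The adaptation decision can be sketched as a toy function (this is not Netty API; real code checks io.netty.channel.epoll.Epoll.isAvailable() and picks the channel class directly):

```java
// Toy sketch of the epoll-vs-NIO adaptation described above: prefer the native
// epoll transport only when it is requested and the platform is Linux,
// otherwise fall back to the JDK's selector-based NIO transport.
public class TransportChoice {
    static String chooseServerChannel(boolean preferEpoll, String osName) {
        boolean onLinux = osName.toLowerCase().contains("linux");
        return (preferEpoll && onLinux) ? "EpollServerSocketChannel" : "NioServerSocketChannel";
    }

    public static void main(String[] args) {
        System.out.println(chooseServerChannel(true, "Linux"));    // EpollServerSocketChannel
        System.out.println(chooseServerChannel(true, "Mac OS X")); // NioServerSocketChannel
        System.out.println(chooseServerChannel(false, "Linux"));   // NioServerSocketChannel
    }
}
```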
Server side
private ChannelFuture setupServerBootstrap(int port, ChannelInitializer channelInitializer)
        throws InterruptedException
{
    // Wire in the two thread pools created above.
    ServerBootstrap serverBootstrap = new ServerBootstrap().group(
            serverGroup.clientToProxyBossPool,
            serverGroup.clientToProxyWorkerPool);

    // Choose socket options.
    Map<ChannelOption, Object> channelOptions = new HashMap<>();
    channelOptions.put(ChannelOption.SO_BACKLOG, 128);
    //channelOptions.put(ChannelOption.SO_TIMEOUT, SERVER_SOCKET_TIMEOUT.get());
    channelOptions.put(ChannelOption.SO_LINGER, -1);
    channelOptions.put(ChannelOption.TCP_NODELAY, true);
    channelOptions.put(ChannelOption.SO_KEEPALIVE, true);

    // Choose EPoll or NIO.
    if (USE_EPOLL.get()) {
        LOG.warn("Proxy listening with TCP transport using EPOLL");
        serverBootstrap = serverBootstrap.channel(EpollServerSocketChannel.class);
        channelOptions.put(EpollChannelOption.TCP_DEFER_ACCEPT, Integer.valueOf(-1));
    }
    else {
        LOG.warn("Proxy listening with TCP transport using NIO");
        serverBootstrap = serverBootstrap.channel(NioServerSocketChannel.class);
    }

    // Apply socket options.
    for (Map.Entry<ChannelOption, Object> optionEntry : channelOptions.entrySet()) {
        serverBootstrap = serverBootstrap.option(optionEntry.getKey(), optionEntry.getValue());
    }

    // Set the channelInitializer; this is the entry point for the analysis that follows.
    serverBootstrap.childHandler(channelInitializer);
    serverBootstrap.validate();

    LOG.info("Binding to port: " + port);

    // Flag status as UP just before binding to the port.
    serverStatusManager.localStatus(InstanceInfo.InstanceStatus.UP);

    // Bind and start to accept incoming connections.
    return serverBootstrap.bind(port).sync();
}
Client side
After the server side receives a request and it has passed through a series of filters, the request is handed to Zuul's ProxyEndpoint, which forwards it to the backend service. ProxyEndpoint handles routing and establishes connections to the backend, i.e. it implements the Netty client. The entry point is as follows.
ProxyEndpoint's proxyRequestToOrigin method:
attemptNum += 1;
requestStat = createRequestStat();
origin.preRequestChecks(zuulRequest);
concurrentReqCount++;
// The key is here: channelCtx.channel().eventLoop()
promise = origin.connectToOrigin(zuulRequest, channelCtx.channel().eventLoop(), attemptNum, passport, chosenServer);
logOriginServerIpAddr();
currentRequestAttempt = origin.newRequestAttempt(chosenServer.get(), context, attemptNum);
requestAttempts.add(currentRequestAttempt);
passport.add(PassportState.ORIGIN_CONN_ACQUIRE_START);
if (promise.isDone()) {
    operationComplete(promise);
} else {
    promise.addListener(this);
}
The channelCtx.channel().eventLoop() in the line below is the worker event loop of the current inbound (server-side) Netty channel:
promise = origin.connectToOrigin(zuulRequest, channelCtx.channel().eventLoop(), attemptNum, passport, chosenServer);
connectToOrigin mainly does the following:
Load balancing: pick a server.
Create a connection pool for that server.
Acquire a connection from the pool.
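Steps two and three can be sketched with a hypothetical per-server pool (the Conn class and method names here are illustrative, not Zuul's actual connection-pool classes):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the pooling steps above: lazily create a pool for the
// chosen server, then reuse an idle connection or establish a new one.
public class PerServerPools {
    static final class Conn {
        final String server;
        Conn(String server) { this.server = server; }
    }

    private final Map<String, Deque<Conn>> pools = new HashMap<>();

    Conn acquire(String server) {
        Deque<Conn> pool = pools.computeIfAbsent(server, s -> new ArrayDeque<>());
        Conn idle = pool.poll();                        // reuse an idle connection if present
        return idle != null ? idle : new Conn(server);  // otherwise establish a new one
    }

    void release(Conn conn) {
        pools.computeIfAbsent(conn.server, s -> new ArrayDeque<>()).offer(conn);
    }
}
```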
The first time a connection is established, the following code is eventually executed:
public ChannelFuture connect(final EventLoop eventLoop, String host, final int port, CurrentPassport passport) {
    Class socketChannelClass;
    if (Server.USE_EPOLL.get()) {
        socketChannelClass = EpollSocketChannel.class;
    } else {
        socketChannelClass = NioSocketChannel.class;
    }

    final Bootstrap bootstrap = new Bootstrap()
            .channel(socketChannelClass)
            .handler(channelInitializer)
            .group(eventLoop)
            .attr(CurrentPassport.CHANNEL_ATTR, passport)
            .option(ChannelOption.CONNECT_TIMEOUT_MILLIS, connPoolConfig.getConnectTimeout())
            .option(ChannelOption.SO_KEEPALIVE, connPoolConfig.getTcpKeepAlive())
            .option(ChannelOption.TCP_NODELAY, connPoolConfig.getTcpNoDelay())
            .option(ChannelOption.SO_SNDBUF, connPoolConfig.getTcpSendBufferSize())
            .option(ChannelOption.SO_RCVBUF, connPoolConfig.getTcpReceiveBufferSize())
            .option(ChannelOption.WRITE_BUFFER_HIGH_WATER_MARK, connPoolConfig.getNettyWriteBufferHighWaterMark())
            .option(ChannelOption.WRITE_BUFFER_LOW_WATER_MARK, connPoolConfig.getNettyWriteBufferLowWaterMark())
            .option(ChannelOption.AUTO_READ, connPoolConfig.getNettyAutoRead())
            .remoteAddress(new InetSocketAddress(host, port));
    return bootstrap.connect();
}
The code above should look familiar: it is a standard Netty client. The event loop it binds to is the one passed in, i.e. the inbound side's event loop. This is how Zuul ensures that the I/O for the inbound connection and the reads and writes to the backend service are bound to the same event loop thread.
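The effect can be illustrated with a JDK-only analogy (this is not Netty code; a single-threaded executor stands in for a Netty EventLoop): when the outbound work is scheduled on the same single-threaded loop that handled the inbound request, both sides run on one thread and no handoff occurs.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// JDK-only analogy for the single-event-loop model: the "inbound" callback and
// the "outbound" (origin) callback both execute on the executor's one thread.
public class SameLoopDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService eventLoop = Executors.newSingleThreadExecutor();
        try {
            Future<String> inboundThread = eventLoop.submit(() -> Thread.currentThread().getName());
            Future<String> outboundThread = eventLoop.submit(() -> Thread.currentThread().getName());
            System.out.println(inboundThread.get().equals(outboundThread.get())); // true: no thread handoff
        } finally {
            eventLoop.shutdown();
        }
    }
}
```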
Summary
Zuul 2's thread model is very simple: a single Netty event loop pool handles all incoming requests and all remote calls.
As the analysis above shows, handling an incoming request, calling the remote service, and writing the response back are all done by the same thread, with no context switching at all. Zuul 2 handles all of this with one thread pool, thanks to the power of Netty's asynchronous programming model.