This article is based on an analysis of the RocketMQ 4.0 source code.
During broker startup, the BrokerStartup class calls BrokerController's initialize() method. As part of this method, a different processor is registered for each request code (see BrokerController#registerProcessor).
SendMessageProcessor is the processor that actually stores incoming messages; its entry point is processRequest.
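The registration-and-dispatch idea can be sketched in a few lines. This is an illustrative model, not RocketMQ's actual classes: processors are keyed by an integer request code, the same way BrokerController registers SendMessageProcessor for RequestCode.SEND_MESSAGE (which is 10 in the 4.x source); the class and method names below are invented for the sketch.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Hypothetical sketch of request-code-based dispatch, analogous to the
// processor table the broker's remoting server maintains.
public class ProcessorTable {
    // Mirrors RequestCode.SEND_MESSAGE in RocketMQ.
    public static final int SEND_MESSAGE = 10;

    private final Map<Integer, Function<String, String>> table = new HashMap<>();

    // Analogous to remotingServer.registerProcessor(code, processor, executor).
    public void registerProcessor(int requestCode, Function<String, String> processor) {
        table.put(requestCode, processor);
    }

    // Analogous to looking up the processor for an incoming RemotingCommand.
    public String dispatch(int requestCode, String request) {
        Function<String, String> p = table.get(requestCode);
        return p == null ? "REQUEST_CODE_NOT_SUPPORTED" : p.apply(request);
    }

    public static void main(String[] args) {
        ProcessorTable broker = new ProcessorTable();
        broker.registerProcessor(SEND_MESSAGE, body -> "stored:" + body);
        System.out.println(broker.dispatch(SEND_MESSAGE, "hello"));
    }
}
```

The point of the table is that the Netty handler never needs to know what a request means; it only matches the request code against whatever was registered at startup.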
After the start flow runs, the broker begins listening for incoming requests:
org.apache.rocketmq.broker.BrokerController#start
↓
org.apache.rocketmq.remoting.netty.NettyRemotingServer#start
↓
// set up the Netty NIO server and register the channel pipeline handlers
ServerBootstrap childHandler =
    this.serverBootstrap.group(this.eventLoopGroupBoss, this.eventLoopGroupSelector)
        .channel(useEpoll() ? EpollServerSocketChannel.class : NioServerSocketChannel.class)
        .option(ChannelOption.SO_BACKLOG, 1024)
        .option(ChannelOption.SO_REUSEADDR, true)
        .option(ChannelOption.SO_KEEPALIVE, false)
        .childOption(ChannelOption.TCP_NODELAY, true)
        .childOption(ChannelOption.SO_SNDBUF, nettyServerConfig.getServerSocketSndBufSize())
        .childOption(ChannelOption.SO_RCVBUF, nettyServerConfig.getServerSocketRcvBufSize())
        .localAddress(new InetSocketAddress(this.nettyServerConfig.getListenPort()))
        .childHandler(new ChannelInitializer<SocketChannel>() {
            @Override
            public void initChannel(SocketChannel ch) throws Exception {
                ch.pipeline()
                    .addLast(defaultEventExecutorGroup, HANDSHAKE_HANDLER_NAME,
                        new HandshakeHandler(TlsSystemConfig.tlsMode))
                    .addLast(defaultEventExecutorGroup,
                        new NettyEncoder(),
                        new NettyDecoder(),
                        new IdleStateHandler(0, 0, nettyServerConfig.getServerChannelMaxIdleTimeSeconds()),
                        new NettyConnectManageHandler(),
                        new NettyServerHandler()
                    );
            }
        });
class NettyServerHandler extends SimpleChannelInboundHandler<RemotingCommand> is the handler that receives inbound messages; it ultimately hands commands of type REQUEST_COMMAND to the processor registered for their request code — for message sends, SendMessageProcessor.
*********** An aside: the name SendMessageProcessor is confusing at first — from the broker's point of view this processor receives messages rather than sending them. The name apparently follows the request code it handles, RequestCode.SEND_MESSAGE; that is, it is named from the client's perspective (it processes the client's "send message" request).
The main flow then reaches:
private RemotingCommand sendMessage(final ChannelHandlerContext ctx,
final RemotingCommand request,
final SendMessageContext sendMessageContext,
final SendMessageRequestHeader requestHeader) throws RemotingCommandException {
This method checks whether the message is transactional and handles the two cases differently. Both ordinary and transactional messages are wrapped into a MessageExtBrokerInner object, and both ultimately go through DefaultMessageStore's putMessage method:
public PutMessageResult putMessage(MessageExtBrokerInner msg)
DefaultMessageStore in turn calls CommitLog's
public PutMessageResult putMessage(final MessageExtBrokerInner msg)
method to persist the message. CommitLog maintains a queue of references to the commitlog files (the MappedFileQueue); on each write, it fetches the last commitlog file and appends the data to it.
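The "queue of segment files, always write the last one" idea can be sketched as follows. This is a simplified model, not RocketMQ's code: the class and field names are invented, and the segment size is tiny for illustration (real commitlog segments default to 1 GB), but the rollover logic mirrors what getLastMappedFile does.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a commitlog-style segment queue: fixed-size segments,
// appends always go to the last segment, a new segment is created on overflow.
public class SegmentQueue {
    static class Segment {
        final long fileFromOffset;   // global offset where this segment starts
        int wrotePosition = 0;       // write position within the segment
        final int size;
        Segment(long fileFromOffset, int size) {
            this.fileFromOffset = fileFromOffset;
            this.size = size;
        }
        boolean isFull() { return wrotePosition == size; }
    }

    private final int segmentSize;
    private final List<Segment> segments = new ArrayList<>();

    public SegmentQueue(int segmentSize) { this.segmentSize = segmentSize; }

    // Analogous to MappedFileQueue.getLastMappedFile(): return the last
    // segment, creating a fresh one if none exists or the last is full.
    public Segment getLastSegment() {
        if (segments.isEmpty() || segments.get(segments.size() - 1).isFull()) {
            long from = segments.isEmpty() ? 0
                : segments.get(segments.size() - 1).fileFromOffset + segmentSize;
            segments.add(new Segment(from, segmentSize));
        }
        return segments.get(segments.size() - 1);
    }

    // Append n bytes; returns the global offset where the write landed.
    public long append(int n) {
        Segment s = getLastSegment();
        if (s.wrotePosition + n > s.size) { // would overflow: roll to a new segment
            s.wrotePosition = s.size;       // mark the current segment full
            s = getLastSegment();
        }
        long globalOffset = s.fileFromOffset + s.wrotePosition;
        s.wrotePosition += n;
        return globalOffset;
    }
}
```

Note that a message never straddles two segments: if it does not fit, the current segment is closed out and the write moves to a fresh one, which is also how RocketMQ handles end-of-file.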
Inside CommitLog.putMessage, the call
result = mappedFile.appendMessage(msg, this.appendMessageCallback);
appends the data into the memory mapping of the actual log file (MappedFile calls back into CommitLog's inner class DefaultAppendMessageCallback to serialize the message) and wraps the outcome in an AppendMessageResult object.
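The underlying mechanism — appending bytes into a memory-mapped file and flushing them — can be shown with a minimal, self-contained demo. The file name, region size, and method name here are illustrative only; RocketMQ maps each segment in full and tracks a wrotePosition the way the return value below does.

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Append into a memory mapping, then force the dirty pages to disk.
public class MappedAppendDemo {
    public static int appendAndFlush(Path file, byte[] payload) throws IOException {
        try (FileChannel ch = FileChannel.open(file,
                StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            MappedByteBuffer mapped = ch.map(FileChannel.MapMode.READ_WRITE, 0, 1024);
            mapped.put(payload);       // append at the mapping's write position
            mapped.force();            // flush the dirty pages to disk
            return mapped.position();  // bytes written so far (the "wrotePosition")
        }
    }

    public static void main(String[] args) throws IOException {
        Path file = Files.createTempFile("commitlog-demo", ".bin");
        try {
            int pos = appendAndFlush(file, "hello-broker".getBytes(StandardCharsets.UTF_8));
            System.out.println("wrotePosition = " + pos);
        } finally {
            Files.deleteIfExists(file);
        }
    }
}
```

Because the write goes into the page cache via the mapping, the put itself is fast; durability only comes from the later force call, which is exactly why RocketMQ separates appending from flushing.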
If the append succeeds, the flush ("disk brush") service we often hear about takes over to actually persist the data to disk.
Taking GroupCommitService as an example, the core call traces to:
CommitLog.this.mappedFileQueue.flush(0);
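GroupCommitService's core trick is a double buffer of flush requests: producers add to a "write" list while the service thread swaps it with a "read" list and serves a whole batch with one flush. The real service runs on its own thread and blocks waiters on a countdown; the sketch below (invented names, no threading) keeps only the swap-and-batch logic to show why one flush can satisfy many requests.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the group-commit double-buffer pattern.
public class GroupCommitSketch {
    private List<Runnable> requestsWrite = new ArrayList<>();
    private List<Runnable> requestsRead = new ArrayList<>();
    private int flushCount = 0;   // how many times we actually hit the disk

    // Producers enqueue a request plus a callback to run once data is durable.
    public synchronized void putRequest(Runnable onFlushed) {
        requestsWrite.add(onFlushed);
    }

    // Swap the two lists so producers can keep enqueueing without locking
    // against the flush in progress.
    private synchronized void swapRequests() {
        List<Runnable> tmp = requestsWrite;
        requestsWrite = requestsRead;
        requestsRead = tmp;
    }

    // One iteration of the service loop: swap, flush once, notify every waiter.
    public void doCommit() {
        swapRequests();
        if (!requestsRead.isEmpty()) {
            flushCount++;                         // stands in for mappedFileQueue.flush(0)
            for (Runnable r : requestsRead) r.run();
            requestsRead.clear();
        }
    }

    public int getFlushCount() { return flushCount; }
}
```

The payoff is throughput under synchronous flush: if five sends arrive while one flush is running, the next iteration commits all five with a single force to disk.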
which lands in MappedFileQueue:
public boolean flush(final int flushLeastPages) {
    boolean result = true;
    MappedFile mappedFile = this.findMappedFileByOffset(this.flushedWhere, false);
    if (mappedFile != null) {
        long tmpTimeStamp = mappedFile.getStoreTimestamp();
        int offset = mappedFile.flush(flushLeastPages);
        long where = mappedFile.getFileFromOffset() + offset;
        result = where == this.flushedWhere;
        this.flushedWhere = where;
        if (0 == flushLeastPages) {
            this.storeTimestamp = tmpTimeStamp;
        }
    }
    return result;
}
which then calls MappedFile:
public int flush(final int flushLeastPages) {
    if (this.isAbleToFlush(flushLeastPages)) {
        if (this.hold()) {
            int value = getReadPosition();
            try {
                // We only append data to fileChannel or mappedByteBuffer, never both.
                if (writeBuffer != null || this.fileChannel.position() != 0) {
                    this.fileChannel.force(false);
                } else {
                    this.mappedByteBuffer.force();
                }
            } catch (Throwable e) {
                log.error("Error occurred when force data to disk.", e);
            }
            this.flushedPosition.set(value);
            this.release();
        } else {
            log.warn("in flush, hold failed, flush offset = " + this.flushedPosition.get());
            this.flushedPosition.set(getReadPosition());
        }
    }
    return this.getFlushedPosition();
}
The final call is FileChannel.force() (or MappedByteBuffer.force() on the pure-mmap path), which forces any data not yet written to disk out of the channel and onto the device. With that, the broker's flow for writing a received message to file is complete.
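For completeness, here is a self-contained demo of FileChannel.force() itself (file name and method name are illustrative). Note the boolean argument: force(false) — the variant MappedFile uses — flushes only file content, while force(true) also flushes file metadata such as the updated length.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Write through a FileChannel, then force the data to the device.
public class ForceDemo {
    public static long writeAndForce(Path file, String text) throws IOException {
        try (FileChannel ch = FileChannel.open(file,
                StandardOpenOption.WRITE, StandardOpenOption.CREATE)) {
            ch.write(ByteBuffer.wrap(text.getBytes(StandardCharsets.UTF_8)));
            ch.force(false);   // content only; force(true) would also sync metadata
            return ch.size();  // bytes now durably on disk
        }
    }

    public static void main(String[] args) throws IOException {
        Path file = Files.createTempFile("force-demo", ".log");
        try {
            System.out.println("persisted " + writeAndForce(file, "durable") + " bytes");
        } finally {
            Files.deleteIfExists(file);
        }
    }
}
```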