TX-LCN Distributed Transaction Framework Source Code Analysis (Server Side, Part 2: TxLcnInitializer Initialization, TMRpcServer)

In the previous article, we used the @EnableTransactionManagerServer annotation as our entry point into the server-side startup and initialization flow. The most important part, TxLcnInitializer#init, is covered in detail in this article.

First, let's see which server-side classes implement the TxLcnInitializer interface.

There are five implementations:

EnsureIdGenEngine: handles the generation of globally unique ids

MysqlLoggerHelper: writes logs to MySQL (not covered in detail)

RpcNettyInitializer: the Netty initialization class

TMAutoCluster: server-side auto clustering

TMRpcServer: the TM server

We'll walk through them in their execution order, starting with TMRpcServer.

1. TMRpcServer

This class is the RPC server. Under the hood, TX-LCN uses Netty for communication between server and client; TMRpcServer's job is to start the Netty server.

Let's look at the init method:

public void init() {
        // 1. configuration
        if (rpcConfig.getWaitTime() <= 5) {
            //maximum wait time
            rpcConfig.setWaitTime(1000);
        }
        if (rpcConfig.getAttrDelayTime() < 0) {
            //delay time (default 8s, the dtx timeout)
            rpcConfig.setAttrDelayTime(txManagerConfig.getDtxTime());
        }

        // 2. initialize the RPC server
        ManagerProperties managerProperties = new ManagerProperties();
        //heartbeat check interval (default 5m)
        managerProperties.setCheckTime(txManagerConfig.getHeartTime());
        //port
        managerProperties.setRpcPort(txManagerConfig.getPort());
        //host
        managerProperties.setRpcHost(txManagerConfig.getHost());
        //delegate to NettyRpcServerInitializer's init method
        rpcServerInitializer.init(managerProperties);
    }

This mainly configures rpcConfig and sets three properties on ManagerProperties. The host and port are worth a note: both are stored in TxManagerConfig, and the port is initialized in its constructor, as follows:

    public static final int PORT_CHANGE_VALUE = 100;

    @Autowired
    public TxManagerConfig(ServerProperties serverProperties) {
        this.port = Objects.requireNonNull(serverProperties.getPort(), "TM http port not configured?") +
                PORT_CHANGE_VALUE;
    }

If the project sets server.port=7900, the port here becomes 7900 + 100, i.e. 8000; in other words, Netty starts on server.port + 100.
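As a quick check, the derivation can be sketched in isolation. The class and method names below are illustrative; only the +100 rule comes from the constructor above:

```java
public class PortDerivation {

    public static final int PORT_CHANGE_VALUE = 100;

    // mirrors TxManagerConfig's constructor: RPC port = HTTP port + 100
    public static int rpcPort(int httpPort) {
        return httpPort + PORT_CHANGE_VALUE;
    }

    public static void main(String[] args) {
        System.out.println(rpcPort(7900)); // prints 8000
    }
}
```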

1.1 NettyRpcServerInitializer's init method is as follows

public void init(ManagerProperties managerProperties) {
        //set the type and parameters on NettyContext
        NettyContext.type = NettyType.server;
        NettyContext.params = managerProperties;

        nettyRpcServerChannelInitializer.setManagerProperties(managerProperties);

        int port = managerProperties.getRpcPort();
        //standard Netty server bootstrap
        bossGroup = new NioEventLoopGroup();
        workerGroup = new NioEventLoopGroup();
        try {
            ServerBootstrap b = new ServerBootstrap();
            b.group(bossGroup, workerGroup)
                    .channel(NioServerSocketChannel.class)
                    .option(ChannelOption.SO_BACKLOG, 100)
                    .handler(new LoggingHandler(LogLevel.INFO))
                    //configure the channel handlers
                    .childHandler(nettyRpcServerChannelInitializer);

            // start the server
            if (StringUtils.hasText(managerProperties.getRpcHost())) {
                b.bind(managerProperties.getRpcHost(), managerProperties.getRpcPort());
            } else {
                b.bind(port);
            }
            log.info("Socket started on {}:{} ",
                    StringUtils.hasText(managerProperties.getRpcHost()) ? managerProperties.getRpcHost() : "0.0.0.0", port);

        } catch (Exception e) {
            // shut down all event loops to terminate all threads
            e.printStackTrace();
        }
    }

The childHandler is set to NettyRpcServerChannelInitializer, which extends ChannelInitializer, so its initChannel method runs once a client connects.

1.2 NettyRpcServerChannelInitializer's initChannel method is as follows

protected void initChannel(Channel ch) throws Exception {
        //these two handlers work together to solve half-packet and sticky-packet problems
        ch.pipeline().addLast(new LengthFieldPrepender(4, false));
        ch.pipeline().addLast(new LengthFieldBasedFrameDecoder(Integer.MAX_VALUE, 0, 4, 0, 4));
        //heartbeat / idle detection
        ch.pipeline().addLast(new IdleStateHandler(managerProperties.getCheckTime(),
                managerProperties.getCheckTime(), managerProperties.getCheckTime(), TimeUnit.MILLISECONDS));

        //serialization encoder and decoder
        ch.pipeline().addLast(new ObjectSerializerEncoder());
        ch.pipeline().addLast(new ObjectSerializerDecoder());

        //framework-specific handlers
        ch.pipeline().addLast(rpcCmdDecoder);
        ch.pipeline().addLast(new RpcCmdEncoder());
        ch.pipeline().addLast(socketManagerInitHandler);
        ch.pipeline().addLast(rpcAnswerHandler);
    }

On half-packets: https://blog.csdn.net/cgj296645438/article/details/90667419

On heartbeats: https://blog.csdn.net/cgj296645438/article/details/90902821
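The LengthFieldPrepender/LengthFieldBasedFrameDecoder pair above works by prefixing every message with a 4-byte length and then splitting the incoming byte stream on those prefixes. A minimal, Netty-free sketch of the same idea (the class and method names here are illustrative, not part of TX-LCN):

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class LengthFraming {

    // like LengthFieldPrepender(4, false): prepend the payload length as 4 bytes
    static byte[] frame(byte[] payload) {
        return ByteBuffer.allocate(4 + payload.length)
                .putInt(payload.length)
                .put(payload)
                .array();
    }

    // like LengthFieldBasedFrameDecoder(..., 0, 4, 0, 4):
    // read a 4-byte length, then exactly that many payload bytes, stripping the header
    static List<byte[]> deframe(byte[] stream) {
        List<byte[]> messages = new ArrayList<>();
        ByteBuffer buf = ByteBuffer.wrap(stream);
        while (buf.remaining() >= 4) {
            int len = buf.getInt();
            byte[] payload = new byte[len];
            buf.get(payload);
            messages.add(payload);
        }
        return messages;
    }

    public static void main(String[] args) throws Exception {
        // two messages arriving "stuck together" in a single read
        ByteArrayOutputStream wire = new ByteArrayOutputStream();
        wire.write(frame("hello".getBytes()));
        wire.write(frame("world!".getBytes()));
        for (byte[] msg : deframe(wire.toByteArray())) {
            System.out.println(new String(msg)); // prints hello, then world!
        }
    }
}
```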

The encoder ObjectSerializerEncoder extends Netty's MessageToByteEncoder and serializes the message object into bytes.

The decoder ObjectSerializerDecoder extends Netty's MessageToMessageDecoder and performs the second-stage decode, turning the framed bytes back into a message object.

The encoder and decoder use Google's protobuf tooling for serialization.

Framework handlers

RpcCmdDecoder

RpcCmd is the parent class of the payload exchanged between client and server; it is, in effect, the message itself.

Let's look at the source:

public class RpcCmdDecoder extends SimpleChannelInboundHandler<NettyRpcCmd> {

    @Autowired
    private HeartbeatListener heartbeatListener;

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, NettyRpcCmd cmd) {
        String key = cmd.getKey();
        log.debug("cmd->{}", cmd);

        //heartbeat packets are answered directly
        if (cmd.getMsg() != null && MessageConstants.ACTION_HEART_CHECK.equals(cmd.getMsg().getAction())) {
            if (NettyContext.currentType().equals(NettyType.client)) {
                //set the machine id carried by the heartbeat
                heartbeatListener.onTcReceivedHeart(cmd);
                ctx.writeAndFlush(cmd);
                return;
            } else {
                heartbeatListener.onTmReceivedHeart(cmd);
                return;
            }
        }

        //packets that expect a response
        if (!StringUtils.isEmpty(key)) {
            RpcContent rpcContent = cmd.loadRpcContent();
            if (rpcContent != null) {
                log.debug("got response message[Netty Handler]");
                rpcContent.setRes(cmd.getMsg());
                rpcContent.signal();
            } else {
                ctx.fireChannelRead(cmd);
            }
        } else {
            ctx.fireChannelRead(cmd);
        }
    }
}

The code mainly does two things:

1. If the message is a heartbeat, it checks whether the current side is the client or the server. On the server, it calls EnsureIdGenEngine's onTmReceivedHeart method, which refreshes the machine id's expiry in Redis. On the client, it calls TCSideRpcInitCallBack's onTcReceivedHeart method, which sets the MachineId on the current message.

2. If it is not a heartbeat, the handler branches on whether the message carries a key; the main job is to populate the corresponding RpcContent and signal the waiting caller.

RpcCmdEncoder

This class is trivial: it does nothing except add the current object to the output list. The code is as follows:

public class RpcCmdEncoder extends MessageToMessageEncoder<NettyRpcCmd> {


    @Override
    protected void encode(ChannelHandlerContext ctx, NettyRpcCmd cmd, List<Object> out) throws Exception {
        log.debug("send->{}", cmd);
        out.add(cmd);
    }
}

SocketManagerInitHandler

The code is as follows:

public class SocketManagerInitHandler extends ChannelInboundHandlerAdapter {

    private RpcCmd heartCmd;

    @Autowired
    private RpcConnectionListener rpcConnectionListener;

    public SocketManagerInitHandler() {
        MessageDto messageDto = new MessageDto();
        messageDto.setAction(MessageConstants.ACTION_HEART_CHECK);
        heartCmd = new NettyRpcCmd();
        heartCmd.setMsg(messageDto);
        heartCmd.setKey(RandomUtils.simpleKey());
    }

    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        super.channelActive(ctx);
        rpcConnectionListener.connect(ctx.channel().remoteAddress().toString());
        SocketManager.getInstance().addChannel(ctx.channel());
    }

    @Override
    public void channelInactive(ChannelHandlerContext ctx) throws Exception {
        super.channelInactive(ctx);
        String removeKey = ctx.channel().remoteAddress().toString();
        String appName = SocketManager.getInstance().getModuleName(removeKey);
        rpcConnectionListener.disconnect(removeKey,appName);
        SocketManager.getInstance().removeChannel(ctx.channel());
    }


    @Override
    public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
        //idle event: send a heartbeat when the reader has been idle
        if (IdleStateEvent.class.isAssignableFrom(evt.getClass())) {
            IdleStateEvent event = (IdleStateEvent) evt;
            if (event.state() == IdleState.READER_IDLE) {
                ctx.writeAndFlush(heartCmd);
            }
        }
    }

}

It mainly does three things:

1. Builds the heartbeat message in its constructor.

2. When a client connects to the server, it registers the connection with SocketManager, saving the channel there; when the client disconnects, it removes the channel from SocketManager.

3. It cooperates with IdleStateHandler: when IdleStateHandler detects no read or write activity for a period, userEventTriggered runs and sends a heartbeat packet to the client; RpcCmdDecoder on the other end then echoes the heartbeat back.

RpcAnswerHandler

The answer (response) handler:

public class RpcAnswerHandler extends SimpleChannelInboundHandler<RpcCmd> {
    //on the server side this is ServerRpcAnswer
    @Autowired
    private RpcAnswer rpcClientAnswer;

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, RpcCmd cmd) {
        String remoteKey = ctx.channel().remoteAddress().toString();
        cmd.setRemoteKey(remoteKey);
        rpcClientAnswer.callback(cmd);
    }
}

The ServerRpcAnswer code is as follows:

public class ServerRpcAnswer implements RpcAnswer, DisposableBean {

    private final RpcClient rpcClient;

    private final ExecutorService executorService;

    private final TxLcnManagerRpcBeanHelper rpcBeanHelper;

    @Autowired
    public ServerRpcAnswer(RpcClient rpcClient, TxLcnManagerRpcBeanHelper rpcBeanHelper, TxManagerConfig managerConfig) {
        managerConfig.setConcurrentLevel(
                Math.max(Runtime.getRuntime().availableProcessors() * 5, managerConfig.getConcurrentLevel()));
        this.rpcClient = rpcClient;
        this.executorService = Executors.newFixedThreadPool(managerConfig.getConcurrentLevel(),
                new ThreadFactoryBuilder().setDaemon(false).setNameFormat("tm-rpc-service-%d").build());
        this.rpcBeanHelper = rpcBeanHelper;
    }


    @Override
    public void callback(RpcCmd rpcCmd) {
        executorService.submit(() -> {
            try {
                TransactionCmd transactionCmd = parser(rpcCmd);
                String action = transactionCmd.getMsg().getAction();
                RpcExecuteService rpcExecuteService = rpcBeanHelper.loadManagerService(transactionCmd.getType());
                MessageDto messageDto = null;
                try {
                    Serializable message = rpcExecuteService.execute(transactionCmd);
                    messageDto = MessageCreator.okResponse(message, action);
                } catch (Throwable e) {
                    log.error("rpc execute service error. action: " + action, e);
                    messageDto = MessageCreator.failResponse(e, action);
                } finally {
                    // respond to requests that expect a response
                    if (rpcCmd.getKey() != null) {
                        assert Objects.nonNull(messageDto);
                        try {
                            messageDto.setGroupId(rpcCmd.getMsg().getGroupId());
                            rpcCmd.setMsg(messageDto);
                            rpcClient.send(rpcCmd);
                        } catch (RpcException ignored) {
                        }
                    }
                }
            } catch (Throwable e) {
                if (rpcCmd.getKey() != null) {
                    log.info("send response.");
                    String action = rpcCmd.getMsg().getAction();
                    // an exception not handled by the transaction coordinator: respond with a server failure
                    rpcCmd.setMsg(MessageCreator.serverException(action));
                    try {
                        rpcClient.send(rpcCmd);
                        log.info("send response ok.");
                    } catch (RpcException ignored) {
                        log.error("requester:{} dead.", rpcCmd.getRemoteKey());
                    }
                }
            }
        });
    }

    @Override
    public void destroy() throws Exception {
        this.executorService.shutdown();
        this.executorService.awaitTermination(6, TimeUnit.SECONDS);
    }

    private TransactionCmd parser(RpcCmd rpcCmd) {
        TransactionCmd cmd = new TransactionCmd();
        cmd.setRequestKey(rpcCmd.getKey());
        cmd.setRemoteKey(rpcCmd.getRemoteKey());
        cmd.setType(LCNCmdType.parserCmd(rpcCmd.getMsg().getAction()));
        cmd.setGroupId(rpcCmd.getMsg().getGroupId());
        cmd.setMsg(rpcCmd.getMsg());
        return cmd;
    }

}

This class is the crux: all the transaction operations flow through it, so let's analyze it carefully.

The constructor:

    private final RpcClient rpcClient;

    private final ExecutorService executorService;

    private final TxLcnManagerRpcBeanHelper rpcBeanHelper;

    @Autowired
    public ServerRpcAnswer(RpcClient rpcClient, TxLcnManagerRpcBeanHelper rpcBeanHelper, TxManagerConfig managerConfig) {
        managerConfig.setConcurrentLevel(
                Math.max(Runtime.getRuntime().availableProcessors() * 5, managerConfig.getConcurrentLevel()));
        this.rpcClient = rpcClient;
        this.executorService = Executors.newFixedThreadPool(managerConfig.getConcurrentLevel(),
                new ThreadFactoryBuilder().setDaemon(false).setNameFormat("tm-rpc-service-%d").build());
        this.rpcBeanHelper = rpcBeanHelper;
    }

The constructor takes three parameters:

1. RpcClient, the communication class. On the server side the implementation is NettyRpcClient; the excerpt below shows that it is all about communication:

public class NettyRpcClient extends RpcClient {


    @Override
    public RpcResponseState send(RpcCmd rpcCmd) throws RpcException {
        return SocketManager.getInstance().send(rpcCmd.getRemoteKey(), rpcCmd);
    }


    @Override
    public RpcResponseState send(String remoteKey, MessageDto msg) throws RpcException {
        RpcCmd rpcCmd = new NettyRpcCmd();
        rpcCmd.setMsg(msg);
        rpcCmd.setRemoteKey(remoteKey);
        return send(rpcCmd);
    }

    @Override
    public MessageDto request(RpcCmd rpcCmd) throws RpcException {
        return request0(rpcCmd, -1);
    }

    private MessageDto request0(RpcCmd rpcCmd, long timeout) throws RpcException {
        if (rpcCmd.getKey() == null) {
            throw new RpcException("key must be not null.");
        }
        return SocketManager.getInstance().request(rpcCmd.getRemoteKey(), rpcCmd, timeout);
    }

  .............

However, the actual communication work is delegated to SocketManager.

2. ExecutorService: a thread pool, sized according to the number of available processor cores; nothing unusual here.
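The sizing rule from the constructor (at least 5 threads per available core, never fewer than configured, with named non-daemon threads) can be sketched with plain JDK classes. ThreadFactoryBuilder is from Guava, so this illustrative version hand-rolls the ThreadFactory instead:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

public class TmRpcPool {

    // same rule as ServerRpcAnswer: max(availableProcessors * 5, configured level)
    static int concurrentLevel(int configured) {
        return Math.max(Runtime.getRuntime().availableProcessors() * 5, configured);
    }

    // plain-JDK equivalent of the Guava ThreadFactoryBuilder usage above
    static ExecutorService newTmRpcPool(int configuredLevel) {
        AtomicInteger counter = new AtomicInteger();
        ThreadFactory factory = r -> {
            Thread t = new Thread(r, "tm-rpc-service-" + counter.getAndIncrement());
            t.setDaemon(false);
            return t;
        };
        return Executors.newFixedThreadPool(concurrentLevel(configuredLevel), factory);
    }
}
```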

3. TxLcnManagerRpcBeanHelper: a helper class for fetching Spring beans of a particular type and name. The code is as follows:

public class TxLcnManagerRpcBeanHelper {


    /**
     * manager bean name format:
     * rpc_%s
     * rpc: prefix; %s: the command code (e.g. create, add, close)
     */
    private static final String RPC_BEAN_NAME_FORMAT = "rpc_%s";


    @Autowired
    private ApplicationContext spring;


    public String getServiceBeanName(LCNCmdType cmdType) {
        return String.format(RPC_BEAN_NAME_FORMAT, cmdType.getCode());
    }


    public RpcExecuteService loadManagerService(LCNCmdType cmdType) {
        return spring.getBean(getServiceBeanName(cmdType), RpcExecuteService.class);
    }

    public <T> T getByType(Class<T> type) {
        return spring.getBean(type);
    }
}
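In other words, the bean lookup name is just the command code with an rpc_ prefix. A tiny illustration of that formatting rule (the command code string below is hypothetical; real codes come from LCNCmdType):

```java
public class RpcBeanNames {

    private static final String RPC_BEAN_NAME_FORMAT = "rpc_%s";

    // same formatting rule as TxLcnManagerRpcBeanHelper#getServiceBeanName
    static String beanName(String cmdTypeCode) {
        return String.format(RPC_BEAN_NAME_FORMAT, cmdTypeCode);
    }

    public static void main(String[] args) {
        // a hypothetical command code for illustration
        System.out.println(beanName("createGroup")); // prints rpc_createGroup
    }
}
```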

The callback method:

public void callback(RpcCmd rpcCmd) {
        executorService.submit(() -> {
            try {
                TransactionCmd transactionCmd = parser(rpcCmd);
                //the action type of this message, e.g. create a transaction group, join a transaction group
                String action = transactionCmd.getMsg().getAction();
                //look up the concrete service for this action type
                RpcExecuteService rpcExecuteService = rpcBeanHelper.loadManagerService(transactionCmd.getType());
                MessageDto messageDto = null;
                try {
                    //execute the service
                    Serializable message = rpcExecuteService.execute(transactionCmd);
                    //on success, wrap the result in an OK response
                    messageDto = MessageCreator.okResponse(message, action);
                } catch (Throwable e) {
                    log.error("rpc execute service error. action: " + action, e);
                    messageDto = MessageCreator.failResponse(e, action);
                } finally {
                    // respond to requests that expect a response
                    if (rpcCmd.getKey() != null) {
                        assert Objects.nonNull(messageDto);
                        try {
                            messageDto.setGroupId(rpcCmd.getMsg().getGroupId());
                            rpcCmd.setMsg(messageDto);
                            //send the response
                            rpcClient.send(rpcCmd);
                        } catch (RpcException ignored) {
                        }
                    }
                }
            } catch (Throwable e) {
                if (rpcCmd.getKey() != null) {
                    log.info("send response.");
                    String action = rpcCmd.getMsg().getAction();
                    // an exception not handled by the transaction coordinator: respond with a server failure
                    rpcCmd.setMsg(MessageCreator.serverException(action));
                    try {
                        rpcClient.send(rpcCmd);
                        log.info("send response ok.");
                    } catch (RpcException ignored) {
                        log.error("requester:{} dead.", rpcCmd.getRemoteKey());
                    }
                }
            }
        });
    }

The method mainly submits the work to the thread pool; the details are annotated in the code above.

There is quite a lot to TMRpcServer, and the key part is the handlers. To recap, these are the ones involved:

LengthFieldPrepender: extends MessageToMessageEncoder, indirectly extends ChannelOutboundHandlerAdapter

LengthFieldBasedFrameDecoder: extends ByteToMessageDecoder, indirectly extends ChannelInboundHandlerAdapter

IdleStateHandler: extends ChannelDuplexHandler, indirectly extends ChannelInboundHandlerAdapter and indirectly implements ChannelOutboundHandler

ObjectSerializerEncoder: extends MessageToByteEncoder, indirectly extends ChannelOutboundHandlerAdapter

ObjectSerializerDecoder: extends MessageToMessageDecoder, indirectly extends ChannelInboundHandlerAdapter

RpcCmdDecoder: extends SimpleChannelInboundHandler, indirectly extends ChannelInboundHandlerAdapter

RpcCmdEncoder: extends MessageToMessageEncoder, indirectly extends ChannelOutboundHandlerAdapter

SocketManagerInitHandler: extends ChannelInboundHandlerAdapter

RpcAnswerHandler: extends SimpleChannelInboundHandler, indirectly extends ChannelInboundHandlerAdapter

These handlers are declared in the pipeline in the order above; the numbered steps below give the execution order.

When a message travels from the client to the server:

1. LengthFieldBasedFrameDecoder solves the half-packet problem first, using the length prefix to read a complete frame.

2. The message then reaches IdleStateHandler; its channelRead is invoked and the message passes on to the next handler.

3. Next, ObjectSerializerDecoder deserializes the ByteBuf into our NettyRpcCmd object, and the message continues down the pipeline.

4. RpcCmdDecoder checks whether the message is a heartbeat and handles the two cases differently: heartbeats are consumed here and not passed on; anything else is passed through.

5. Non-heartbeat messages reach SocketManagerInitHandler. Because IdleStateHandler's channelRead was just invoked, userEventTriggered does not run; it only runs when channelRead has not been called for a long time.

6. RpcAnswerHandler is the last handler; it dispatches on the message type and, if the message requires a reply, writes the response back to the client.

When a message travels from the server to the client:

1. RpcCmdEncoder encodes the NettyRpcCmd; it does nothing more than put the object into a list and pass it along.

2. ObjectSerializerEncoder converts the NettyRpcCmd into a ByteBuf and passes it along.

3. The message then passes through IdleStateHandler and on to the next handler.

4. LengthFieldPrepender solves the half-packet problem on the outbound side by prepending a length field to the message.

There turned out to be far more to a single TMRpcServer than expected, so I've decided to split the series here; otherwise this article would run too long.

 

 
