Seata Server Startup Source Code

Version: 1.2.0

Seata consists of three main components: TC, TM, and RM. The TC (Transaction Coordinator) is responsible for committing and rolling back global transactions and is the key component of Seata, with high requirements on both availability and performance.

The packages of the Server module, which implements the TC:

  • coordinator: core coordinator module
  • event: event management module
  • lock: resource lock module
  • metrics: metrics module
  • session: session management module
  • storage: transaction-state storage module (file and DB)
  • store: storage configuration module
  • transaction: transaction module

Server Startup Flow

The server entry point is the Server.main method:

    /**
     * The entry point of application.
     *
     * @param args the input arguments
     * @throws IOException the io exception
     */
    public static void main(String[] args) throws IOException {
        //initialize the startup parameter parser
        ParameterParser parameterParser = new ParameterParser(args);

        //initialize metrics
        MetricsManager.get().init();
        //set the store mode (file/db) for transaction session storage
        System.setProperty(ConfigurationKeys.STORE_MODE, parameterParser.getStoreMode());
        //create the Netty server
        RpcServer rpcServer = new RpcServer(WORKING_THREADS);
        //server port
        rpcServer.setListenPort(parameterParser.getPort());
        UUIDGenerator.init(parameterParser.getServerNode());
        //initialize the SessionHolder
        SessionHolder.init(parameterParser.getStoreMode());
        //create the default coordinator
        DefaultCoordinator coordinator = new DefaultCoordinator(rpcServer);
        coordinator.init();
        rpcServer.setHandler(coordinator);
        // register shutdown hooks
        ShutdownHook.getInstance().addDisposable(coordinator);
        ShutdownHook.getInstance().addDisposable(rpcServer);

        //127.0.0.1 and 0.0.0.0 are not valid here.
        if (NetUtil.isValidIp(parameterParser.getHost(), false)) {
            XID.setIpAddress(parameterParser.getHost());
        } else {
            XID.setIpAddress(NetUtil.getLocalIp());
        }
        XID.setPort(rpcServer.getListenPort());

        try {
            //start the server
            rpcServer.init();
        } catch (Throwable e) {
            LOGGER.error("rpcServer init error:{}", e.getMessage(), e);
            System.exit(-1);
        }

        System.exit(0);
    }

Main flow:

  • 1. Parse the startup parameters
  • 2. Initialize metrics
  • 3. Initialize the Netty server
  • 4. Initialize the SessionHolder
  • 5. Initialize the default coordinator
  • 6. Start the Netty server

Parameter Parsing

ParameterParser parameterParser = new ParameterParser(args);

    public ParameterParser(String[] args) {
        this.init(args);
    }

    private void init(String[] args) {
        try {
            boolean inContainer = this.isRunningInContainer();
            //check whether the server is running in a container (Kubernetes/Docker)
            if (inContainer) {
                if (LOGGER.isInfoEnabled()) {
                    LOGGER.info("The server is running in container.");
                }
                //environment name, for multi-environment configuration
                this.seataEnv = StringUtils.trimToNull(System.getenv(ENV_SYSTEM_KEY));
                //host address
                this.host = StringUtils.trimToNull(System.getenv(ENV_SEATA_IP_KEY));
                //server node id
                this.serverNode = NumberUtils.toInt(System.getenv(ENV_SERVER_NODE_KEY), SERVER_DEFAULT_NODE);
                //port
                this.port = NumberUtils.toInt(System.getenv(ENV_SEATA_PORT_KEY), SERVER_DEFAULT_PORT);
                //store mode
                this.storeMode = StringUtils.trimToNull(System.getenv(ENV_STORE_MODE_KEY));
            } else {
                //parse command-line arguments with JCommander
                JCommander jCommander = JCommander.newBuilder().addObject(this).build();
                jCommander.parse(args);
                if (help) {
                    jCommander.setProgramName(PROGRAM_NAME);
                    jCommander.usage();
                    System.exit(0);
                }
            }
            if (StringUtils.isNotBlank(seataEnv)) {
                System.setProperty(ENV_PROPERTY_KEY, seataEnv);
            }
            if (StringUtils.isBlank(storeMode)) {
                storeMode = ConfigurationFactory.getInstance().getConfig(ConfigurationKeys.STORE_MODE,
                    SERVER_DEFAULT_STORE_MODE);
            }
        } catch (ParameterException e) {
            printError(e);
        }

    }

The parser mainly resolves the store mode, environment profile, host IP, and port: from environment variables when running in a container, and from command-line arguments otherwise.
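The resolution order described above can be sketched in plain Java. This is a simplified, self-contained illustration, not Seata code: the environment-variable names and the fallback to a hard-coded default (instead of Seata's configuration file) are assumptions made for the example; 8091 is Seata's default listen port and "file" its default store mode.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of ParameterParser's resolution order:
// in a container, environment variables win; otherwise the
// command-line value is used; a default fills any remaining gap.
public class StartupParams {
    static final int DEFAULT_PORT = 8091;            // Seata's default listen port
    static final String DEFAULT_STORE_MODE = "file"; // Seata's default store mode

    public static int resolvePort(Map<String, String> env, Integer cliPort, boolean inContainer) {
        if (inContainer && env.get("SEATA_PORT") != null) {
            return Integer.parseInt(env.get("SEATA_PORT"));
        }
        return cliPort != null ? cliPort : DEFAULT_PORT;
    }

    public static String resolveStoreMode(Map<String, String> env, String cliMode, boolean inContainer) {
        if (inContainer && env.get("STORE_MODE") != null) {
            return env.get("STORE_MODE");
        }
        // a blank CLI value falls back to the default (Seata reads store.mode
        // from its configuration here instead)
        return (cliMode == null || cliMode.isEmpty()) ? DEFAULT_STORE_MODE : cliMode;
    }
}
```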

Initializing the Server

RpcServer rpcServer = new RpcServer(WORKING_THREADS);

The constructor delegates to its parent constructor:

    public AbstractRpcRemotingServer(final ThreadPoolExecutor messageExecutor, NettyServerConfig nettyServerConfig) {
        super(messageExecutor);
        //create the server bootstrap
        serverBootstrap = new RpcServerBootstrap(nettyServerConfig);
    }

The EventLoopGroup implementation is chosen according to configuration:

    public RpcServerBootstrap(NettyServerConfig nettyServerConfig) {

        this.nettyServerConfig = nettyServerConfig;
        //choose epoll or NIO event loop groups based on configuration
        if (NettyServerConfig.enableEpoll()) {
            this.eventLoopGroupBoss = new EpollEventLoopGroup(nettyServerConfig.getBossThreadSize(),
                new NamedThreadFactory(nettyServerConfig.getBossThreadPrefix(), nettyServerConfig.getBossThreadSize()));
            this.eventLoopGroupWorker = new EpollEventLoopGroup(nettyServerConfig.getServerWorkerThreads(),
                new NamedThreadFactory(nettyServerConfig.getWorkerThreadPrefix(),
                    nettyServerConfig.getServerWorkerThreads()));
        } else {
            this.eventLoopGroupBoss = new NioEventLoopGroup(nettyServerConfig.getBossThreadSize(),
                new NamedThreadFactory(nettyServerConfig.getBossThreadPrefix(), nettyServerConfig.getBossThreadSize()));
            this.eventLoopGroupWorker = new NioEventLoopGroup(nettyServerConfig.getServerWorkerThreads(),
                new NamedThreadFactory(nettyServerConfig.getWorkerThreadPrefix(),
                    nettyServerConfig.getServerWorkerThreads()));
        }

        // set the default listen port here so the port is never empty
        setListenPort(nettyServerConfig.getDefaultListenPort());
    }
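Both branches above pass a NamedThreadFactory (a Seata utility) so that boss and worker threads carry readable names in thread dumps. A minimal stdlib-only sketch of the same idea follows; the exact name format is illustrative, not Seata's:

```java
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

// Stdlib sketch of the idea behind Seata's NamedThreadFactory:
// threads get a prefix, a running index, and the pool size in their name.
public class PrefixedThreadFactory implements ThreadFactory {
    private final String prefix;
    private final int totalSize;
    private final AtomicInteger counter = new AtomicInteger(0);

    public PrefixedThreadFactory(String prefix, int totalSize) {
        this.prefix = prefix;
        this.totalSize = totalSize;
    }

    @Override
    public Thread newThread(Runnable r) {
        // e.g. "NettyServerNIOWorker_1_8" for the first of 8 worker threads
        Thread t = new Thread(r, prefix + "_" + counter.incrementAndGet() + "_" + totalSize);
        t.setDaemon(true); // worker threads should not keep the JVM alive
        return t;
    }
}
```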

SessionHolder Initialization

SessionHolder.init(parameterParser.getStoreMode());

Depending on the persistence configuration (file/db), SessionHolder initializes the session managers that handle session management and persistence, namely the following four:

  • ROOT_SESSION_MANAGER: root session manager
  • ASYNC_COMMITTING_SESSION_MANAGER: async-commit session manager
  • RETRY_COMMITTING_SESSION_MANAGER: retry-commit session manager
  • RETRY_ROLLBACKING_SESSION_MANAGER: retry-rollback session manager

    public static void init(String mode) throws IOException {
        if (StringUtils.isBlank(mode)) {
            mode = CONFIG.getConfig(ConfigurationKeys.STORE_MODE);
        }
        StoreMode storeMode = StoreMode.get(mode);
        //DB mode
        if (StoreMode.DB.equals(storeMode)) {
            //initialize the four session managers
            ...
        // File mode
        } else if (StoreMode.FILE.equals(storeMode)) {
            //initialize the four session managers
            ...
        } else {
            throw new IllegalArgumentException("unknown store mode:" + mode);
        }
        //reload sessions and decide whether each needs rollback or commit,
        //i.e. handle transactions left unfinished at the last shutdown
        reload();
    }
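The StoreMode.get(mode) dispatch above can be sketched with a simplified stand-in enum (not the real io.seata.core.store.StoreMode, which carries more metadata): a case-insensitive lookup that rejects unknown modes with the same IllegalArgumentException seen in SessionHolder.init.

```java
// Simplified stand-in for Seata's StoreMode enum, illustrating the
// case-insensitive lookup and the unknown-mode failure path.
public enum StoreMode {
    FILE, DB;

    // mirrors the behavior of StoreMode.get(mode): match by name,
    // ignoring case, or fail fast on an unsupported value
    public static StoreMode get(String name) {
        for (StoreMode sm : values()) {
            if (sm.name().equalsIgnoreCase(name)) {
                return sm;
            }
        }
        throw new IllegalArgumentException("unknown store mode:" + name);
    }
}
```

Failing fast here means a typo in store.mode stops the server at startup instead of silently falling back to a default store.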

TC Coordinator Initialization

DefaultCoordinator coordinator = new DefaultCoordinator(rpcServer);
coordinator.init();

This creates the default TC coordinator and wires the server to it; the interesting part is the init method:

    public void init() {
        retryRollbacking.scheduleAtFixedRate(() -> {
            try {
                handleRetryRollbacking();
            } catch (Exception e) {
                LOGGER.info("Exception retry rollbacking ... ", e);
            }
        }, 0, ROLLBACKING_RETRY_PERIOD, TimeUnit.MILLISECONDS);

        retryCommitting.scheduleAtFixedRate(() -> {
            try {
                handleRetryCommitting();
            } catch (Exception e) {
                LOGGER.info("Exception retry committing ... ", e);
            }
        }, 0, COMMITTING_RETRY_PERIOD, TimeUnit.MILLISECONDS);

        asyncCommitting.scheduleAtFixedRate(() -> {
            try {
                handleAsyncCommitting();
            } catch (Exception e) {
                LOGGER.info("Exception async committing ... ", e);
            }
        }, 0, ASYNC_COMMITTING_RETRY_PERIOD, TimeUnit.MILLISECONDS);

        timeoutCheck.scheduleAtFixedRate(() -> {
            try {
                timeoutCheck();
            } catch (Exception e) {
                LOGGER.info("Exception timeout checking ... ", e);
            }
        }, 0, TIMEOUT_RETRY_PERIOD, TimeUnit.MILLISECONDS);

        undoLogDelete.scheduleAtFixedRate(() -> {
            try {
                undoLogDelete();
            } catch (Exception e) {
                LOGGER.info("Exception undoLog deleting ... ", e);
            }
        }, UNDO_LOG_DELAY_DELETE_PERIOD, UNDO_LOG_DELETE_PERIOD, TimeUnit.MILLISECONDS);
    }

The init method mainly registers scheduled tasks: retry rollback, retry commit, async commit, timeout checking, and undo-log deletion.
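Note that every task body above wraps its work in try/catch. This is not cosmetic: per the ScheduledExecutorService contract, a periodic task whose run() throws has all subsequent executions suppressed, so an unguarded exception would permanently stop that retry loop. A self-contained sketch of the pattern (names are illustrative, not Seata's):

```java
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the guard pattern used in DefaultCoordinator.init():
// catching inside the task keeps the periodic schedule alive even
// when an individual run fails.
public class GuardedScheduler {
    public static void scheduleGuarded(ScheduledExecutorService pool, Runnable work,
                                       long periodMillis, AtomicInteger executions) {
        pool.scheduleAtFixedRate(() -> {
            executions.incrementAndGet();
            try {
                work.run();
            } catch (Exception e) {
                // log and swallow, as DefaultCoordinator does,
                // so the next scheduled run still happens
            }
        }, 0, periodMillis, TimeUnit.MILLISECONDS);
    }
}
```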

Starting the Server

rpcServer.init();

    public void init() {
        //create the default server message listener, which handles transaction
        //messages, RM registration, TM registration, and heartbeat check messages
        DefaultServerMessageListenerImpl defaultServerMessageListenerImpl =
            new DefaultServerMessageListenerImpl(getTransactionMessageHandler());
        //initialize the log-processing thread pool
        defaultServerMessageListenerImpl.init();
        defaultServerMessageListenerImpl.setServerMessageSender(this);
        super.setServerMessageListener(defaultServerMessageListenerImpl);
        //add the channel handler
        super.setChannelHandlers(new ServerHandler());
        //core method
        super.init();
    }

Stepping into the parent init method, AbstractRpcRemotingServer.init():

    public void init() {
        //start the timeout-message check task
        super.init();
        //start the server
        serverBootstrap.start();
    }

Following serverBootstrap.start(), this is where the server is actually brought up:

    public void start() {
        //configure channel options
        this.serverBootstrap.group(this.eventLoopGroupBoss, this.eventLoopGroupWorker)
            .channel(nettyServerConfig.SERVER_CHANNEL_CLAZZ)
            .option(ChannelOption.SO_BACKLOG, nettyServerConfig.getSoBackLogSize())
            .option(ChannelOption.SO_REUSEADDR, true)
            .childOption(ChannelOption.SO_KEEPALIVE, true)
            .childOption(ChannelOption.TCP_NODELAY, true)
            .childOption(ChannelOption.SO_SNDBUF, nettyServerConfig.getServerSocketSendBufSize())
            .childOption(ChannelOption.SO_RCVBUF, nettyServerConfig.getServerSocketResvBufSize())
            .childOption(ChannelOption.WRITE_BUFFER_WATER_MARK,
                new WriteBufferWaterMark(nettyServerConfig.getWriteBufferLowWaterMark(),
                    nettyServerConfig.getWriteBufferHighWaterMark()))
            .localAddress(new InetSocketAddress(listenPort))
            .childHandler(new ChannelInitializer<SocketChannel>() {
                @Override
                public void initChannel(SocketChannel ch) {
                    ch.pipeline().addLast(new IdleStateHandler(nettyServerConfig.getChannelMaxReadIdleSeconds(), 0, 0))
                        .addLast(new ProtocolV1Decoder())
                        .addLast(new ProtocolV1Encoder());
                    if (null != channelHandlers) {
                        addChannelPipelineLast(ch, channelHandlers);
                    }

                }
            });

        try {
            ChannelFuture future = this.serverBootstrap.bind(listenPort).sync();
            LOGGER.info("Server started ... ");
            //register with the registry center for high availability
            RegistryFactory.getInstance().register(new InetSocketAddress(XID.getIpAddress(), XID.getPort()));
            initialized.set(true);
            future.channel().closeFuture().sync();
        } catch (Exception exx) {
            throw new RuntimeException(exx);
        }

    }

The start method mainly configures Netty and binds the listen port, which completes the initialization and startup of the Seata server. As the seata-server source shows, it uses Netty as its server internally and relies heavily on thread pools and scheduled tasks to improve performance.
