TX-LCN lcn-client: client-side execution logic (6)

1. Source analysis entry point: @EnableDistributedTransaction

On the client side, adding @EnableDistributedTransaction to the startup class is all that is needed to enable distributed transactions, so we take this annotation as our entry point.
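For reference, a minimal usage sketch (the application class name is hypothetical; only the extra @EnableDistributedTransaction annotation is TX-LCN specific, and imports are omitted as in the other snippets):

// Hypothetical startup class; everything except the TX-LCN annotation is plain Spring Boot.
@SpringBootApplication
@EnableDistributedTransaction
public class ClientApplication {

    public static void main(String[] args) {
        SpringApplication.run(ClientApplication.class, args);
    }
}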

We start from @EnableDistributedTransaction, the only annotation required to enable the distributed-transaction client.

@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE})
@Documented
@Import({TCAutoConfiguration.class, DependenciesImportSelector.class})
public @interface EnableDistributedTransaction {
    boolean enableTxc() default true;
}

Let's follow the TCAutoConfiguration source:

@Configuration
@ComponentScan(
        excludeFilters = @ComponentScan.Filter(
                type = FilterType.ASPECTJ, pattern = "com.codingapi.txlcn.tc.core.transaction.txc..*"
        )
)
@Import({TxLoggerConfiguration.class, TracingAutoConfiguration.class})
public class TCAutoConfiguration {

    /**
     * All initialization about TX-LCN
     *
     * @param applicationContext Spring ApplicationContext
     * @return TX-LCN custom runner
     */
    @Bean
    public ApplicationRunner txLcnApplicationRunner(ApplicationContext applicationContext) {
        return new TxLcnApplicationRunner(applicationContext);
    }

    @Bean
    @ConditionalOnMissingBean
    public ModIdProvider modIdProvider(ConfigurableEnvironment environment,
                                       @Autowired(required = false) ServerProperties serverProperties) {
        return () -> ApplicationInformation.modId(environment, serverProperties);
    }
}

There is less code here than on the server side. As the comment says, all TX-LCN initialization is wired into the ApplicationRunner bean, whose run() method Spring Boot invokes automatically after the context starts.

    @Override
    public void run(ApplicationArguments args) throws Exception {
        // find all beans that implement the TxLcnInitializer interface
        Map<String, TxLcnInitializer> runnerMap = applicationContext.getBeansOfType(TxLcnInitializer.class);
        // sort the TxLcnInitializer beans by their order() value
        initializers = runnerMap.values().stream().sorted(Comparator.comparing(TxLcnInitializer::order))
                .collect(Collectors.toList());
        // iterate over the initializers and call init() on each of them
        for (TxLcnInitializer txLcnInitializer : initializers) {
            txLcnInitializer.init();
        }
    }

The code is the same as on the server side: find all TxLcnInitializer beans and invoke their init() methods.
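For reference, the TxLcnInitializer contract can be sketched from the way run() uses it; only init() and order() are confirmed by the calls above, and the default order value below is an assumption:

// Sketch inferred from the run() method above; the default value of order() is an assumption.
public interface TxLcnInitializer {

    // invoked once at startup, in ascending order()
    void init() throws Exception;

    // beans with a smaller order() are initialized first
    default int order() {
        return Integer.MAX_VALUE; // assumed default: run last unless overridden
    }
}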

2. Breaking down the init() of each bean that implements TxLcnInitializer

2.1 DTXCheckingInitialization: the distributed transaction checking initializer

@Component
public class DTXCheckingInitialization implements TxLcnInitializer {

    private final DTXChecking dtxChecking;

    private final TransactionCleanTemplate transactionCleanTemplate;

    @Autowired
    public DTXCheckingInitialization(DTXChecking dtxChecking, TransactionCleanTemplate transactionCleanTemplate) {
        this.dtxChecking = dtxChecking;
        this.transactionCleanTemplate = transactionCleanTemplate;
    }

    @Override
    public void init() throws Exception {
        if (dtxChecking instanceof SimpleDTXChecking) {
            ((SimpleDTXChecking) dtxChecking).setTransactionCleanTemplate(transactionCleanTemplate);
        }
    }
}

The code is straightforward: the class holds two collaborators, the distributed transaction checker and the transaction clean template, and init() injects the clean template into the checker when it is a SimpleDTXChecking.

2.2 TCRpcServer: the client-side RPC bootstrap

@Component
public class TCRpcServer implements TxLcnInitializer {

    private final RpcClientInitializer rpcClientInitializer;

    private final TxClientConfig txClientConfig;

    private final RpcConfig rpcConfig;

    @Autowired
    public TCRpcServer(RpcClientInitializer rpcClientInitializer,
                       TxClientConfig txClientConfig, RpcConfig rpcConfig) {
        this.rpcClientInitializer = rpcClientInitializer;
        this.txClientConfig = txClientConfig;
        this.rpcConfig = rpcConfig;
    }

    @Override
    public void init() throws Exception {
        // rpc timeout (ms)
        if (rpcConfig.getWaitTime() <= 5) {
            rpcConfig.setWaitTime(1000);
        }

        // rpc client init.
        rpcClientInitializer.init(TxManagerHost.parserList(txClientConfig.getManagerAddress()), false);
    }
}

2.3 NettyRpcClientInitializer

@Component
@Slf4j
public class NettyRpcClientInitializer implements RpcClientInitializer, DisposableBean {

    private static NettyRpcClientInitializer INSTANCE;

    private final NettyRpcClientChannelInitializer nettyRpcClientChannelInitializer;

    private final RpcConfig rpcConfig;

    private final ClientInitCallBack clientInitCallBack;

    private EventLoopGroup workerGroup;

    @Autowired
    public NettyRpcClientInitializer(NettyRpcClientChannelInitializer nettyRpcClientChannelInitializer, RpcConfig rpcConfig, ClientInitCallBack clientInitCallBack) {
        this.nettyRpcClientChannelInitializer = nettyRpcClientChannelInitializer;
        this.rpcConfig = rpcConfig;
        this.clientInitCallBack = clientInitCallBack;
        INSTANCE = this;
    }

    public static void reConnect(SocketAddress socketAddress) {
        Objects.requireNonNull(socketAddress, "non support!");
        INSTANCE.connect(socketAddress);
    }

    @Override
    public void init(List<TxManagerHost> hosts, boolean sync) {
        NettyContext.type = NettyType.client;
        NettyContext.params = hosts;
        workerGroup = new NioEventLoopGroup();
        for (TxManagerHost host : hosts) {
            Optional<Future> future = connect(new InetSocketAddress(host.getHost(), host.getPort()));
            if (sync && future.isPresent()) {
                try {
                    future.get().get(10, TimeUnit.SECONDS);
                } catch (InterruptedException | ExecutionException | TimeoutException e) {
                    log.error(e.getMessage(), e);
                }
            }
        }
    }


    @Override
    public synchronized Optional<Future> connect(SocketAddress socketAddress) {
        for (int i = 0; i < rpcConfig.getReconnectCount(); i++) {
            if (SocketManager.getInstance().noConnect(socketAddress)) {
                try {
                    log.info("Try connect socket({}) - count {}", socketAddress, i + 1);
                    Bootstrap b = new Bootstrap();
                    b.group(workerGroup);
                    b.channel(NioSocketChannel.class);
                    b.option(ChannelOption.SO_KEEPALIVE, true);
                    b.option(ChannelOption.CONNECT_TIMEOUT_MILLIS, 5000);
                    b.handler(nettyRpcClientChannelInitializer);
                    return Optional.of(b.connect(socketAddress).syncUninterruptibly());
                } catch (Exception e) {
                    log.warn("Connect socket({}) fail. {}ms latter try again.", socketAddress, rpcConfig.getReconnectDelay());
                    try {
                        Thread.sleep(rpcConfig.getReconnectDelay());
                    } catch (InterruptedException e1) {
                        e1.printStackTrace();
                    }
                    continue;
                }
            }
            // skip addresses that are already connected
            return Optional.empty();
        }

        log.warn("Finally, netty connection fail , socket is {}", socketAddress);
        clientInitCallBack.connectFail(socketAddress.toString());
        return Optional.empty();
    }

    @Override
    public void destroy() {
        workerGroup.shutdownGracefully();
        log.info("RPC client was down.");
    }
}

This starts a Netty client and connects to the server(s) listed in the manager-address configuration.

The connect method above also implements the reconnection mechanism: it retries up to reconnectCount times (default 8), pausing reconnectDelay between attempts (default 6 seconds).
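The knobs used here live in RpcConfig. Below is a simplified sketch limited to the fields this section touches, with the defaults stated above; the getter names come from the code shown earlier, while the exact types and the initial waitTime are assumptions:

// Simplified sketch of RpcConfig, showing only the fields used in this section (not the full class).
public class RpcConfig {

    private int reconnectCount = 8;     // maximum connect attempts per TM address
    private long reconnectDelay = 6000; // milliseconds to wait between attempts
    private long waitTime = -1;         // rpc timeout in ms; TCRpcServer raises values <= 5 to 1000 (initial value assumed)

    public int getReconnectCount() { return reconnectCount; }

    public long getReconnectDelay() { return reconnectDelay; }

    public long getWaitTime() { return waitTime; }

    public void setWaitTime(long waitTime) { this.waitTime = waitTime; }
}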

2.4 NettyRpcClientChannelInitializer

NettyRpcClientChannelInitializer extends ChannelInitializer; when the client channel is set up, Netty calls its initChannel method to build the handler pipeline.

@Component
public class NettyRpcClientChannelInitializer extends ChannelInitializer<Channel> {

    @Autowired
    private RpcAnswerHandler rpcAnswerHandler;

    @Autowired
    private NettyClientRetryHandler nettyClientRetryHandler;

    @Autowired
    private SocketManagerInitHandler socketManagerInitHandler;

    @Autowired
    private RpcCmdDecoder rpcCmdDecoder;

    @Override
    protected void initChannel(Channel ch) throws Exception {

        ch.pipeline().addLast(new LengthFieldPrepender(4, false));
        ch.pipeline().addLast(new LengthFieldBasedFrameDecoder(Integer.MAX_VALUE,
                0, 4, 0, 4));

        ch.pipeline().addLast(new ObjectSerializerEncoder());
        ch.pipeline().addLast(new ObjectSerializerDecoder());


        ch.pipeline().addLast(rpcCmdDecoder);
        ch.pipeline().addLast(new RpcCmdEncoder());
        ch.pipeline().addLast(nettyClientRetryHandler);
        ch.pipeline().addLast(socketManagerInitHandler);
        // same set of handlers as on the server side, except one feature is missing (explained below)
        ch.pipeline().addLast(rpcAnswerHandler);
    }
}

Compared with the server-side pipeline there is no IdleStateHandler for idle/heartbeat detection, so the userEventTriggered callback in socketManagerInitHandler is never invoked on the client.
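To make the length-field framing at the head of this pipeline concrete, here is a small self-contained Netty sketch (demo code, not from TX-LCN) that round-trips a payload through the same LengthFieldPrepender / LengthFieldBasedFrameDecoder configuration:

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.embedded.EmbeddedChannel;
import io.netty.handler.codec.LengthFieldBasedFrameDecoder;
import io.netty.handler.codec.LengthFieldPrepender;
import java.nio.charset.StandardCharsets;

// Standalone demo of the framing used at the head of the pipeline (demo code, not TX-LCN).
public class FramingDemo {

    public static void main(String[] args) {
        // Outbound: prepend a 4-byte length field; inbound: strip it and emit one frame per message.
        EmbeddedChannel channel = new EmbeddedChannel(
                new LengthFieldPrepender(4, false),
                new LengthFieldBasedFrameDecoder(Integer.MAX_VALUE, 0, 4, 0, 4));

        channel.writeOutbound(Unpooled.copiedBuffer("ping", StandardCharsets.UTF_8));

        // Feed the framed bytes back in, as if they had just arrived from the network.
        Object framed;
        while ((framed = channel.readOutbound()) != null) {
            channel.writeInbound(framed);
        }

        ByteBuf decoded = channel.readInbound();
        System.out.println(decoded.toString(StandardCharsets.UTF_8)); // prints "ping"
    }
}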

The client pipeline additionally contains nettyClientRetryHandler, which has two responsibilities.

1. Reconnection: by default 8 attempts, 6 seconds apart.

    @Override
    public void channelInactive(ChannelHandlerContext ctx) throws Exception {
        super.channelInactive(ctx);
        log.error("keepSize:{},nowSize:{}", keepSize, SocketManager.getInstance().currentSize());

        SocketAddress socketAddress = ctx.channel().remoteAddress();
        log.error("socketAddress:{} ", socketAddress);

        // the connection dropped: reconnect
        NettyRpcClientInitializer.reConnect(socketAddress);
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
        log.error("NettyClientRetryHandler - exception . ", cause);

        if (cause instanceof ConnectException) {
            int size = SocketManager.getInstance().currentSize();
            Thread.sleep(1000 * 15);
            log.error("current size:{}  ", size);
            log.error("try connect tx-manager:{} ", ctx.channel().remoteAddress());
            // the connection failed: reconnect
            NettyRpcClientInitializer.reConnect(ctx.channel().remoteAddress());
        }
        // send a packet to check whether the connection is broken
        ctx.writeAndFlush(heartCmd);
    }

    public static void reConnect(SocketAddress socketAddress) {
        Objects.requireNonNull(socketAddress, "non support!");
        INSTANCE.connect(socketAddress);
    }

When the connection goes inactive, or a ConnectException is caught, the handler triggers reconnection: it calls NettyRpcClientInitializer.reConnect, which simply delegates to the connect() method shown in section 2.3.

2. A callback after a successful connection.

The callback serves two purposes:

2.1. Fetch parameters such as the machine id, the distributed transaction timeout and the maximum wait time from the server (these cannot be configured on the client; the server's values are authoritative).

2.2. If more TM servers are running than the client has configured, the callback makes the client discover and connect to all of them.

public void channelActive(ChannelHandlerContext ctx) throws Exception {
        super.channelActive(ctx);
        keepSize = NettyContext.currentParam(List.class).size();
        // invoke the ClientInitCallBack.connected callback
        clientInitCallBack.connected(ctx.channel().remoteAddress().toString());
    }

public void connected(String remoteKey) {
        // listeners invoked on a successful connection; onConnected is an empty implementation here
        rpcEnvStatusListeners.forEach(rpcEnvStatusListener -> rpcEnvStatusListener.onConnected(remoteKey));
        new Thread(() -> {
            try {
                log.info("Send init message to TM[{}]", remoteKey);
                // send an init message to the server to fetch its configuration
                MessageDto msg = rpcClient.request(
                        remoteKey, MessageCreator.initClient(applicationName, modIdProvider.modId()), 5000);
                if (MessageUtils.statusOk(msg)) {
                    // the latest values are fetched every time a connection is established
                    InitClientParams resParams = msg.loadBean(InitClientParams.class);
                    // 1. apply DTX time, TM RPC timeout and machine id
                    txClientConfig.applyDtxTime(resParams.getDtxTime());
                    txClientConfig.applyTmRpcTimeout(resParams.getTmRpcTimeout());
                    txClientConfig.applyMachineId(resParams.getMachineId());

                    // 2. initialize the id generator
                    IdGenInit.applyDefaultIdGen(resParams.getSeqLen(), resParams.getMachineId());

                    // 3. logging
                    log.info("Finally, determined dtx time is {}ms, tm rpc timeout is {} ms, machineId is {}",
                            resParams.getDtxTime(), resParams.getTmRpcTimeout(), resParams.getMachineId());
                    // 4. invoke the remaining listeners
                    rpcEnvStatusListeners.forEach(rpcEnvStatusListener -> rpcEnvStatusListener.onInitialized(remoteKey));
                    return;
                }
                log.error("TM[{}] exception. connect fail!", remoteKey);
            } catch (RpcException e) {
                log.error("Send init message exception: {}. connect fail!", e.getMessage());
            }
        }).start();
    }

1. Send a message to the server to obtain its configuration

MessageDto msg = rpcClient.request(
                        remoteKey, MessageCreator.initClient(applicationName, modIdProvider.modId()), 5000);

// build the message body, setting action to init
public static MessageDto initClient(String appName, String labelName) {
        InitClientParams initClientParams = new InitClientParams();
        initClientParams.setAppName(appName);
        initClientParams.setLabelName(labelName);
        MessageDto messageDto = new MessageDto();
        messageDto.setData(initClientParams);
        messageDto.setAction(MessageConstants.ACTION_INIT_CLIENT);
        return messageDto;
    }

// wrap the message in a NettyRpcCmd and send it
public MessageDto request(String remoteKey, MessageDto msg, long timeout) throws RpcException {
        long startTime = System.currentTimeMillis();
        NettyRpcCmd rpcCmd = new NettyRpcCmd();
        rpcCmd.setMsg(msg);
        String key = rpcCmd.randomKey();
        rpcCmd.setKey(key);
        rpcCmd.setRemoteKey(remoteKey);
        MessageDto result = request0(rpcCmd, timeout);
        log.debug("cmd request used time: {} ms", System.currentTimeMillis() - startTime);
        return result;
    }

On the server side, the message is handled by the InitClientService class:

public Serializable execute(TransactionCmd transactionCmd) throws TxManagerException {
        // read the parameters sent by the client
        InitClientParams initClientParams = transactionCmd.getMsg().loadBean(InitClientParams.class);
        log.info("Registered TC: {}", initClientParams.getLabelName());
        try {
            // bind the remoteKey to the application
            rpcClient.bindAppName(transactionCmd.getRemoteKey(), initClientParams.getAppName(), initClientParams.getLabelName());
        } catch (RpcException e) {
            throw new TxManagerException(e);
        }
        // below, server-side settings are put into the params and returned to the client
        // Machine len and id
        initClientParams.setSeqLen(txManagerConfig.getSeqLen());
        // the machine id is generated on the server
        initClientParams.setMachineId(managerService.machineIdSync());
        // DTX Time and TM timeout.
        initClientParams.setDtxTime(txManagerConfig.getDtxTime());
        initClientParams.setTmRpcTimeout(rpcConfig.getWaitTime());
        // TM Name
        initClientParams.setAppName(modIdProvider.modId());
        return initClientParams;
    }

Binding: an AppInfo is built, associated with the remoteKey and stored in appNames.

public void bindAppName(String remoteKey, String appName,String labelName) throws RpcException {
        SocketManager.getInstance().bindModuleName(remoteKey, appName,labelName);
    }

public void bindModuleName(String remoteKey, String appName, String labelName) throws RpcException {
        AppInfo appInfo = new AppInfo();
        appInfo.setAppName(appName);
        appInfo.setLabelName(labelName);
        appInfo.setCreateTime(new Date());
        if(containsLabelName(labelName)){
            throw new RpcException("labelName:"+labelName+" has exist.");
        }
        appNames.put(remoteKey, appInfo);
    }

2. Apply the configuration returned by the server to the local config

                    // the latest values are fetched every time a connection is established
                    InitClientParams resParams = msg.loadBean(InitClientParams.class);
                    // 1. apply DTX time, TM RPC timeout and machine id
                    txClientConfig.applyDtxTime(resParams.getDtxTime());
                    txClientConfig.applyTmRpcTimeout(resParams.getTmRpcTimeout());
                    txClientConfig.applyMachineId(resParams.getMachineId());

                    // 2. initialize the id generator
                    IdGenInit.applyDefaultIdGen(resParams.getSeqLen(), resParams.getMachineId());

3. Invoke the listeners' onInitialized method

The only listener here is AutoTMClusterEngine; its onInitialized method searches for all TM servers so that the client ends up connected to every one of them.

What does that mean?

As we know, the client does not have to list every TM address in its configuration: listing one or a few is enough for it to automatically connect to the TMs that were not configured. How is that achieved? The answer is in this method.

An example: two TM servers A and B are running, and a single client C is configured to connect only to A. How does client C end up connected to server B as well?

@1. On startup, client C starts a Netty client and connects to server A according to its configuration.

@2. Once the connection is established, the channelActive event fires, which calls connected(); the client fetches the configuration from the server and applies it locally.

@3. Finally onInitialized is called; its main job is to check whether client C is connected to every TM. Two values are involved: size, the number of configured TM addresses (1 in this example), and tryConnectCount, which starts at 0 and is incremented by 1 for every connection. When tryConnectCount equals size, the client sends a message to the TM over Netty asking for the full TM cluster; the TM looks up tm.instances in Redis (as described earlier, it stores the addresses of every running TM), wraps the result and returns it to the client. The client receives all server addresses (A, B), filters out the ones it is already connected to (A), and starts another Netty client to connect to B.

That is the overall flow; the details are in the code below.
 

public void onInitialized(String remoteKey) {
        // prepare to search for more TMs
        if (prepareToResearchTMCluster()) {
            TMSearcher.echoTmClusterSize();
        }
    }

private AtomicInteger tryConnectCount = new AtomicInteger(0);

// a return value of true means the search is finished
private boolean prepareToResearchTMCluster() {
        // atomic increment; this runs once per successful connection
        int count = tryConnectCount.incrementAndGet();
        // number of TM addresses configured on the client
        int size = txClientConfig.getManagerAddress().size();
        // three different cases
        if (count == size) {
            TMSearcher.search();
            return false;
        } else if (count > size) {
            return !TMSearcher.searchedOne();
        }
        return true;
    }

public static void search() {
        Objects.requireNonNull(RPC_CLIENT_INITIALIZER);
        log.info("Searching for more TM...");
        try {
            // get the TM cluster list returned by the server
            HashSet<String> cluster = RELIABLE_MESSENGER.queryTMCluster();
            if (cluster.isEmpty()) {
                log.info("No more TM.");
                echoTMClusterSuccessful();
                return;
            }
            // latch sized by the number of TMs not yet connected
            clusterCountLatch = new CountDownLatch(cluster.size() - knownTMClusterSize);
            log.debug("wait connect size is {}", cluster.size() - knownTMClusterSize);
            // start Netty clients to connect to the newly discovered TMs
            RPC_CLIENT_INITIALIZER.init(TxManagerHost.parserList(new ArrayList<>(cluster)), true);
            // block until the new connections are established (or the timeout expires)
            clusterCountLatch.await(10, TimeUnit.SECONDS);
            echoTMClusterSuccessful();
        } catch (RpcException | InterruptedException e) {
            throw new IllegalStateException("There is no normal TM.");
        }
    }

public static boolean searchedOne() {
        if (Objects.nonNull(clusterCountLatch)) {
            if (clusterCountLatch.getCount() == 0) {
                return false;
            }
            // count down by one
            clusterCountLatch.countDown();
            return true;
        }
        return false;
    }

Let's walk through the possible scenarios.

1. If the number of running TMs equals the number of addresses the client configured, only the size > count and count == size branches are executed.

2. If the client configures fewer addresses than there are TMs, all three branches are executed.

Let's analyze the second scenario: assume four TM servers A, B, C and D, and a client E configured with only A and B.

Client E first starts two Netty clients to connect to A and B; each connection's callback runs on its own thread, giving threads 1 and 2.

Thread 1 runs prepareToResearchTMCluster first: size=2, count=1, so size > count; it returns true and the currently known TMs are printed.

Thread 2 runs prepareToResearchTMCluster: size=2, count=2, so count == size and search() is executed. It learns from the server that there are four TMs in total, creates a CountDownLatch with a count of 2, starts two more Netty clients to connect to C and D (spawning callback threads 3 and 4), and then blocks on the latch; once released it prints all TMs.

Thread 3 runs prepareToResearchTMCluster: size=2, count=3, so count > size; searchedOne() counts the latch down to 1.

Thread 4 runs prepareToResearchTMCluster: size=2, count=4, so count > size; searchedOne() counts the latch down to 0, thread 2 stops blocking and prints all discovered TMs.
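This branching can be reproduced with a small standalone sketch (hypothetical demo code, not TX-LCN). Each call below plays the role of one connection callback from the example; in the real code these run on separate threads and the searching thread blocks on the latch, whereas the demo just prints the branch that would be taken:

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

// Standalone simulation of the prepareToResearchTMCluster branching (hypothetical demo, not TX-LCN code).
public class TmClusterBranchingDemo {

    static final int CONFIGURED_SIZE = 2; // addresses the client configured (A, B)
    static final int TOTAL_TM = 4;        // TMs actually running (A, B, C, D)

    static final AtomicInteger tryConnectCount = new AtomicInteger(0);
    static CountDownLatch clusterCountLatch;

    public static void main(String[] args) {
        onInitialized("A"); // thread 1: count=1 < size=2 -> nothing more to do yet
        onInitialized("B"); // thread 2: count=2 == size  -> search(), latch = 4 - 2 = 2
        onInitialized("C"); // thread 3: count=3 > size   -> countDown(), latch = 1
        onInitialized("D"); // thread 4: count=4 > size   -> countDown(), latch = 0, search() unblocks
    }

    static void onInitialized(String tm) {
        int count = tryConnectCount.incrementAndGet();
        if (count == CONFIGURED_SIZE) {
            // search(): discover the full cluster, connect to the missing TMs, wait for their callbacks
            clusterCountLatch = new CountDownLatch(TOTAL_TM - CONFIGURED_SIZE);
            System.out.println(tm + ": searching, waiting for " + clusterCountLatch.getCount() + " more TM(s)");
            // in the real code the searching thread now blocks on clusterCountLatch.await(...)
        } else if (count > CONFIGURED_SIZE) {
            clusterCountLatch.countDown();
            System.out.println(tm + ": extra TM connected, remaining " + clusterCountLatch.getCount());
        } else {
            System.out.println(tm + ": configured TM connected (" + count + "/" + CONFIGURED_SIZE + ")");
        }
    }
}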
 
