MongoDB Source Code Implementation Series: Network Transport Layer Module (Part 2)

{"type":"doc","content":[{"type":"heading","attrs":{"align":null,"level":1},"content":[{"type":"text","text":"關於作者"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" 前滴滴出行技術專家,現任OPPO文檔數據庫mongodb負責人,負責oppo千萬級峯值TPS/十萬億級數據量文檔數據庫mongodb內核研發及運維工作,一直專注於分佈式緩存、高性能服務端、數據庫、中間件等相關研發。後續持續分享《MongoDB內核源碼設計、性能優化、最佳運維實踐》,Github賬號地址:"},{"type":"link","attrs":{"href":"https://github.com/y123456yz","title":null},"content":[{"type":"text","marks":[{"type":"underline"}],"text":"https://github.com/y123456yz"}]}]},{"type":"heading","attrs":{"align":null,"level":1},"content":[{"type":"text","text":"1. 說明"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"    mongodb源碼實現系列文章有前後邏輯關係,閱讀本文前,請提前閱讀<>"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" 在之前的<>一文中分析瞭如何閱讀百萬級大工程源碼、Asio網絡庫實現、transport傳輸層網絡模塊中線程模型實現,但是由於篇幅原因,傳輸層網絡模塊中的以下模塊實現原理沒有分析,本文降將繼續分析遺留的以下子模塊:"}]},{"type":"numberedlist","attrs":{"start":1,"normalizeStart":1},"content":[{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":1,"align":null,"origin":null},"content":[{"type":"text","text":"transport_layer套接字處理及傳輸層管理子模塊"}]}]},{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":2,"align":null,"origin":null},"content":[{"type":"text","text":"session會話子模塊"}]}]},{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":3,"align":null,"origin":null},"content":[{"type":"text","text":"Ticket數據收發子模塊"}]}]},{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":4,"align":null,"origin":null},"content":[{"type":"text","text":"service_entry_point服務入口點子模塊"}]}]},{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":5,"align":null,"origin":null},"content":[{"type":"text","text":"service_state_machine狀態機子模塊(該《模塊在網絡傳輸層模塊源碼實現三》中分析)"}]}]},{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":6,"align":null,"origin":null},"content":[{"type":"text","text":"service_executor線程模型子模塊(該《模塊在網絡傳輸層模塊源碼實現四》中分析)"}]}]}]},{"type":"heading","attrs":{"align":null,"level":1},"content":[{"type":"text","text":"2. 
# 2. The transport_layer socket handling and transport-layer management submodule

The transport_layer submodule handles socket initialization, asynchronous accept() processing built on the asio library, and the management and initialization of the different threading models. Its implementation lives mainly in the following files:

![](https://static001.geekbang.org/infoq/44/4423735ecfa73dd118536a03966e8005.png)

The figure above lists the source files of this submodule. The mock and test files are only used for simulation and testing, so the core implementation is contained in the few files listed below, whose roles are summarized in the following table:

![](https://static001.geekbang.org/infoq/79/79f0f5e3bd14376fe79ff6eae812fba5.png)

## 2.1 Core code implementation

The core of this submodule is implemented by the TransportLayerManager and TransportLayerASIO classes.

### 2.1.1 TransportLayerManager class

The main members and interfaces of TransportLayerManager are:

```cpp
// Manages network sessions and message handling; built by createWithConfig() and stored in _tls
class TransportLayerManager final : public TransportLayer {
    // The following four interfaces are actually implemented by TransportLayerASIO
    Ticket sourceMessage(...) override;
    Ticket sinkMessage(...) override;
    Status wait(Ticket&& ticket) override;
    void asyncWait(...) override;
    // Initialization from configuration
    std::unique_ptr<TransportLayer> createWithConfig(...);

    // Assigned in createWithConfig(); holds the TransportLayerASIO instance.
    // In practice the vector contains a single member: the TransportLayerASIO.
    std::vector<std::unique_ptr<TransportLayer>> _tls;
};
```

TransportLayerManager owns the _tls member. Its most important interface, createWithConfig(), is implemented as follows:

```cpp
// Builds the transport layer according to the configuration; called from _initAndListen()
std::unique_ptr<TransportLayer> TransportLayerManager::createWithConfig(...) {
    std::unique_ptr<TransportLayer> transportLayer;
    // Service entry point of this instance:
    // mongod uses ServiceEntryPointMongod, mongos uses ServiceEntryPointMongos
    auto sep = ctx->getServiceEntryPoint();
    // net.transportLayer configuration; default "asio", the "legacy" mode is deprecated
    if (config->transportLayer == "asio") {
        // Synchronous or asynchronous mode; default "synchronous"
        if (config->serviceExecutor == "adaptive") {
            // Dynamic thread pool model, i.e. asynchronous mode
            opts.transportMode = transport::Mode::kAsynchronous;
        } else if (config->serviceExecutor == "synchronous") {
            // Thread-per-connection model, i.e. synchronous mode
            opts.transportMode = transport::Mode::kSynchronous;
        }
        // For "asio", build a TransportLayerASIO
        auto transportLayerASIO = stdx::make_unique<transport::TransportLayerASIO>(opts, sep);
        if (config->serviceExecutor == "adaptive") {  // asynchronous mode
            // Build the executor for the dynamic thread model: ServiceExecutorAdaptive
            ctx->setServiceExecutor(stdx::make_unique<ServiceExecutorAdaptive>(
                ctx, transportLayerASIO->getIOContext()));
        } else if (config->serviceExecutor == "synchronous") {  // synchronous mode
            // Build the executor for the thread-per-connection model: ServiceExecutorSynchronous
            ctx->setServiceExecutor(stdx::make_unique<ServiceExecutorSynchronous>(ctx));
        }
        // Convert transportLayerASIO to the TransportLayer base type
        transportLayer = std::move(transportLayerASIO);
    }
    // Store transportLayer in retVector and return it
    std::vector<std::unique_ptr<TransportLayer>> retVector;
    retVector.emplace_back(std::move(transportLayer));
    return stdx::make_unique<TransportLayerManager>(std::move(retVector));
}
```
createWithConfig() picks the TransportLayer according to the configuration file. If net.transportLayer is "asio", TransportLayerASIO handles the underlying network I/O; if it is "legacy", TransportLayerLegacy is used. The "legacy" mode is deprecated, so this article only analyzes the "asio" mode.

The "asio" mode supports two threading models: adaptive (dynamic thread model) and synchronous (synchronous thread model). In adaptive mode the number of threads tracks the load on mongodb: under heavier load the thread count grows, and it shrinks again when the load drops. The synchronous mode is a thread-per-connection model, so the number of threads is proportional to the number of connections: more connections mean more threads.

The MongoDB kernel tags the asio threading model through opts.transportMode; the two models map to the following flags:

![](https://static001.geekbang.org/infoq/29/2961bc42399f91357e46bfcee43efcfb.png)

Note: the adaptive model is tagged kAsynchronous and the synchronous model kSynchronous for a reason: adaptive performs its network I/O asynchronously on top of epoll, while synchronous performs blocking reads and writes on a dedicated thread per connection. The details and trade-offs of the two threading models are discussed in [Mongodb network transport source code implementation and performance tuning](https://my.oschina.net/u/4087916/blog/4295038).
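To make the difference between the two executors concrete, the sketch below contrasts the two scheduling strategies in plain C++: a synchronous-style executor that dedicates one thread to each connection, and an adaptive-style executor that multiplexes work from many connections over a small shared worker pool. This is a simplified illustration under assumed names (SharedPool and the demo tasks), not MongoDB's ServiceExecutor code; the real adaptive executor builds its pool on top of asio's io_context and epoll rather than a hand-written queue.

```cpp
#include <chrono>
#include <condition_variable>
#include <functional>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// adaptive-style executor (sketch): ready work from many connections shares a small pool
class SharedPool {
public:
    explicit SharedPool(size_t n) {
        for (size_t i = 0; i < n; ++i)
            _workers.emplace_back([this] { workerLoop(); });
    }
    ~SharedPool() {
        { std::lock_guard<std::mutex> lk(_mu); _stop = true; }
        _cv.notify_all();
        for (auto& w : _workers) w.join();
    }
    void schedule(std::function<void()> task) {
        { std::lock_guard<std::mutex> lk(_mu); _tasks.push(std::move(task)); }
        _cv.notify_one();
    }
private:
    void workerLoop() {
        for (;;) {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lk(_mu);
                _cv.wait(lk, [this] { return _stop || !_tasks.empty(); });
                if (_stop && _tasks.empty()) return;
                task = std::move(_tasks.front());
                _tasks.pop();
            }
            task();  // run one ready piece of work, then fetch the next
        }
    }
    std::mutex _mu;
    std::condition_variable _cv;
    std::queue<std::function<void()>> _tasks;
    std::vector<std::thread> _workers;
    bool _stop = false;
};

int main() {
    // synchronous model: one dedicated thread per connection for its whole lifetime
    std::thread conn1([] { std::cout << "connection 1 served by its own thread\n"; });
    conn1.join();

    // adaptive model: requests from many connections are queued onto a fixed pool
    SharedPool pool(4);
    for (int i = 0; i < 8; ++i)
        pool.schedule([i] { std::cout << "request " << i << " served by the pool\n"; });
    std::this_thread::sleep_for(std::chrono::milliseconds(50));  // let the demo work drain
}
```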
### 2.1.2 TransportLayerASIO class

The core members and interfaces of TransportLayerASIO are:

```cpp
class TransportLayerASIO final : public TransportLayer {
    // The following four interfaces deal with reading and writing socket data
    Ticket sourceMessage(...);
    Ticket sinkMessage(...);
    Status wait(Ticket&& ticket);
    void asyncWait(Ticket&& ticket, TicketCallback callback);
    void end(const SessionHandle& session);
    // New connection handling
    void _acceptConnection(GenericAcceptor& acceptor);

    // I/O context used by the adaptive thread model for network I/O
    std::shared_ptr<asio::io_context> _workerIOContext;
    // I/O context used to accept new client connections
    std::unique_ptr<asio::io_context> _acceptorIOContext;
    // One acceptor per address in the bindIp configuration, used for bind()/listen()/accept()
    std::vector<std::pair<SockAddr, GenericAcceptor>> _acceptors;
    // The listener thread responsible for accepting new client connections
    stdx::thread _listenerThread;
    // Service entry point of this instance:
    // mongod uses ServiceEntryPointMongod, mongos uses ServiceEntryPointMongos
    ServiceEntryPoint* const _sep = nullptr;
    // Current running state
    AtomicWord<bool> _running{false};
    // Listener-related configuration
    Options _listenerOptions;
};
```

As the class layout shows, the listener thread performs the bind() and listen() work, while other interfaces of the class implement data reads and writes on established connections.

Socket initialization is implemented as follows:

```cpp
Status TransportLayerASIO::setup() {
    std::vector<std::string> listenAddrs;
    // If bindIp is not configured, listen on "127.0.0.1:27017" by default
    if (_listenerOptions.ipList.empty()) {
        listenAddrs = {"127.0.0.1"};
    } else {
        // bindIp is "1.1.1.1,2.2.2.2,..."; split on commas into the address list
        boost::split(listenAddrs, _listenerOptions.ipList, boost::is_any_of(","), boost::token_compress_on);
    }
    // Walk the list of addresses
    for (auto& ip : listenAddrs) {
        // Build the SockAddr structures for this ip and port
        const auto addrs = SockAddr::createAll(
            ip, _listenerOptions.port, _listenerOptions.enableIPv6 ? AF_UNSPEC : AF_INET);
        ......
        // Build the endpoint from the address
        asio::generic::stream_protocol::endpoint endpoint(addr.raw(), addr.addressSize);
        // Associate the acceptor with _acceptorIOContext
        GenericAcceptor acceptor(*_acceptorIOContext);
        // Create the acceptor socket and register it with the reactor
        // (basic_socket_acceptor::open)
        acceptor.open(endpoint.protocol());
        // SO_REUSEADDR (basic_socket_acceptor::set_option)
        acceptor.set_option(GenericAcceptor::reuse_address(true));
        // Non-blocking mode (basic_socket_acceptor::non_blocking)
        acceptor.non_blocking(true, ec);
        // bind()
        acceptor.bind(endpoint, ec);
        if (ec) {
            return errorCodeToStatus(ec);
        }
    }
}
```

As the code shows, setup() first parses the ip:port list from the bindIp configuration, then iterates over it and binds every ip:port the server must listen on. Each ip:port gets its own GenericAcceptor, all acceptors are associated with the global accept I/O context _acceptorIOContext, and bind() is called for every ip:port.
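As a standalone illustration of these steps (splitting a bindIp-style list on commas and preparing one non-blocking, SO_REUSEADDR acceptor per address), the following sketch uses a recent standalone Asio directly. It is not MongoDB code: the address list and port are assumed example values, and unlike setup() it also calls listen() so that the snippet is complete on its own.

```cpp
#include <asio.hpp>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

int main() {
    // Comma-separated address list in the spirit of net.bindIp (assumed example value)
    std::string bindIpList = "127.0.0.1,::1";
    unsigned short port = 27017;

    // Split the list on ','
    std::vector<std::string> addrs;
    std::stringstream ss(bindIpList);
    for (std::string item; std::getline(ss, item, ',');)
        addrs.push_back(item);

    asio::io_context acceptorIoCtx;  // plays the role of _acceptorIOContext
    std::vector<asio::ip::tcp::acceptor> acceptors;

    for (const auto& a : addrs) {
        std::error_code ec;
        auto address = asio::ip::make_address(a, ec);
        if (ec) {
            std::cerr << "bad address " << a << ": " << ec.message() << "\n";
            continue;
        }
        asio::ip::tcp::endpoint endpoint(address, port);
        asio::ip::tcp::acceptor acceptor(acceptorIoCtx);
        acceptor.open(endpoint.protocol());                                 // create the socket
        acceptor.set_option(asio::ip::tcp::acceptor::reuse_address(true));  // SO_REUSEADDR
        acceptor.non_blocking(true, ec);                                    // non-blocking accept fd
        acceptor.bind(endpoint, ec);                                        // bind ip:port
        if (ec) {
            std::cerr << "bind " << a << ":" << port << " failed: " << ec.message() << "\n";
            continue;
        }
        acceptor.listen();                                                  // start listening
        acceptors.push_back(std::move(acceptor));
    }
    std::cout << "bound " << acceptors.size() << " listening socket(s)\n";
}
```

In MongoDB itself the acceptors are only bound here; the accept scheduling is driven later by the listener thread shown next.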
After bind() has been called for every ip:port in the configuration file, TransportLayerASIO::start() takes over. Its implementation is:

```cpp
// Called from _initAndListen()
Status TransportLayerASIO::start() {  // listener thread handling
    ......
    // A dedicated thread handles the accept events on the listening sockets
    _listenerThread = stdx::thread([this] {
        // Rename the thread
        setThreadName("listener");
        // Loop handling accept events
        while (_running.load()) {
            asio::io_context::work work(*_acceptorIOContext);
            try {
                // Dispatch accept events
                _acceptorIOContext->run();
            } catch (...) {  // exception handling
                severe() << "Uncaught exception in the listener: " << exceptionToStatus();
                ......
            }
        }
    });
    ......
}
```

The listener thread drives accept handling through _acceptorIOContext->run(); when a new connection arrives, the corresponding accept callback runs. Registering that callback with the I/O context is done by _acceptConnection(), whose core implementation is:

```cpp
// Registers the callback that runs when a new connection is accepted
void TransportLayerASIO::_acceptConnection(GenericAcceptor& acceptor) {
    // Callback executed for every newly accepted connection.
    // Note that it re-arms itself recursively, so all pending accept events get drained.
    auto acceptCb = [this, &acceptor](const std::error_code& ec, GenericSocket peerSocket) mutable {
        if (!_running.load())
            return;

        ......
        // Every new connection gets its own ASIOSession
        std::shared_ptr<ASIOSession> session(new ASIOSession(this, std::move(peerSocket)));
        // Hand the new connection to ServiceEntryPointImpl::startSession();
        // this is where the transport layer hooks into the service entry point module
        _sep->startSession(std::move(session));
        // Recurse so that every pending accept event is handled
        _acceptConnection(acceptor);
    };
    // Register the accept callback for new connections
    acceptor.async_accept(*_workerIOContext, std::move(acceptCb));
}
```

_acceptConnection() relies on the asio library: acceptor.async_accept() registers the asynchronous accept callback for every listening acceptor.

When the server is notified of a new client connection, acceptCb() runs. Under the hood the asio library collects all pending accept events through epoll_wait; each accept event represents one new client connection, and for each of them ServiceEntryPointImpl::startSession() is invoked. The callback re-arms itself recursively, so a single wakeup can drain every pending accept request.

Each connection gets its own session object; a session represents exactly one connection, so connections and sessions map one to one. For every accepted connection, ServiceEntryPointImpl::startSession() then takes over the new link.

Note: TransportLayerASIO::_acceptConnection() is where the TransportLayerASIO class and the ServiceEntryPointImpl class are tied together.
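The listener-thread pattern (a work guard keeping io_context::run() alive) together with the self-re-arming async_accept callback can be seen in isolation in the following standalone Asio sketch. It is not MongoDB code; the port and the demo timing are assumptions, and the comment marks where MongoDB would create a Session and call startSession().

```cpp
#include <asio.hpp>
#include <chrono>
#include <functional>
#include <iostream>
#include <thread>

int main() {
    asio::io_context acceptorIoCtx;  // role of _acceptorIOContext
    asio::ip::tcp::acceptor acceptor(
        acceptorIoCtx,
        asio::ip::tcp::endpoint(asio::ip::make_address("127.0.0.1"), 28017));  // assumed port

    // Self-re-arming accept callback: handle one connection, then register again,
    // so every pending accept event is drained before run() goes back to sleep.
    std::function<void()> armAccept = [&] {
        acceptor.async_accept([&](std::error_code ec, asio::ip::tcp::socket peer) {
            if (!ec)
                std::cout << "accepted connection from " << peer.remote_endpoint() << "\n";
            // In MongoDB this is where a Session would be created and handed to startSession().
            armAccept();  // re-arm, mirroring the recursive _acceptConnection() call
        });
    };
    armAccept();

    // Dedicated "listener" thread; the work guard keeps run() from returning
    // at moments when no handler happens to be queued.
    auto guard = asio::make_work_guard(acceptorIoCtx);
    std::thread listener([&] { acceptorIoCtx.run(); });

    std::this_thread::sleep_for(std::chrono::seconds(1));  // accept for a short while (demo only)
    guard.reset();          // allow run() to finish
    acceptorIoCtx.stop();   // stop dispatching
    listener.join();
}
```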
Besides the connection-accepting path, the TransportLayerASIO class layout shown earlier exposes four more interfaces: sourceMessage(...), sinkMessage(...), wait(Ticket&& ticket) and asyncWait(Ticket&& ticket, TicketCallback callback). All four take or return a Ticket and therefore connect this class with the Ticket send/receive submodule. Their core implementation is:

```cpp
// Build the receive-side ticket (ASIOSourceTicket) from asioSession, expiration and message
Ticket TransportLayerASIO::sourceMessage(...) {
    ......
    auto asioSession = checked_pointer_cast<ASIOSession>(session);
    // Build an ASIOSourceTicket from asioSession, expiration and message
    auto ticket = stdx::make_unique<ASIOSourceTicket>(asioSession, expiration, message);
    return {this, std::move(ticket)};
}

// Build the send-side ticket (ASIOSinkTicket) from asioSession, expiration and message
Ticket TransportLayerASIO::sinkMessage(...) {
    auto asioSession = checked_pointer_cast<ASIOSession>(session);
    auto ticket = stdx::make_unique<ASIOSinkTicket>(asioSession, expiration, message);
    return {this, std::move(ticket)};
}

// Synchronous receive or send; ends up in ASIOSourceTicket::fill or ASIOSinkTicket::fill
Status TransportLayerASIO::wait(Ticket&& ticket) {
    // Extract the ticket: ASIOSourceTicket for receive, ASIOSinkTicket for send
    auto ownedASIOTicket = getOwnedTicketImpl(std::move(ticket));
    auto asioTicket = checked_cast<ASIOTicket*>(ownedASIOTicket.get());
    ......
    // Call fill(): synchronous receive (ASIOSourceTicket::fill) or synchronous send (ASIOSinkTicket::fill)
    asioTicket->fill(true, [&waitStatus](Status result) { waitStatus = result; });
    return waitStatus;
}

// Asynchronous receive or send; ends up in ASIOSourceTicket::fill or ASIOSinkTicket::fill
void TransportLayerASIO::asyncWait(Ticket&& ticket, TicketCallback callback) {
    // Extract the ticket: ASIOSourceTicket for receive, ASIOSinkTicket for send
    auto ownedASIOTicket = std::shared_ptr<TicketImpl>(getOwnedTicketImpl(std::move(ticket)));
    auto asioTicket = checked_cast<ASIOTicket*>(ownedASIOTicket.get());

    // Call the corresponding ASIOTicket::fill
    asioTicket->fill(
        false, [ callback = std::move(callback),
                 ownedASIOTicket = std::move(ownedASIOTicket) ](Status status) { callback(status); });
}
```

The first two interfaces build the corresponding Ticket from the session, expiration and message parameters. The MongoDB implementation splits the receive-side and send-side tickets into two subclasses, ASIOSourceTicket and ASIOSinkTicket. The role of the three parameters is summarized in the following table:

![](https://static001.geekbang.org/infoq/6e/6eafa3d9d2dbc8d209001fc367010c4f.png)

Sending and receiving come in synchronous and asynchronous flavours: the synchronous path goes through TransportLayerASIO::wait(), the asynchronous path through TransportLayerASIO::asyncWait().

Note: these four interfaces are what tie the TransportLayerASIO class to the Ticket send/receive classes.
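The way a single completion-callback based fill() serves both a blocking wait() and a callback-style asyncWait() can be reduced to a small sketch. The Status type and the function names below are illustrative stand-ins, not MongoDB's; the synchronous branch simply captures the completion status into a local variable, exactly as wait() does above.

```cpp
#include <functional>
#include <iostream>
#include <string>

// Toy status type and a fill()-like operation taking a completion callback.
struct Status {
    bool ok;
    std::string reason;
};

using CompletionCb = std::function<void(Status)>;

// When sync == true the operation completes before returning and invokes the callback
// inline; when sync == false it would normally be completed later by an event loop
// (simulated here by invoking the callback immediately as well).
void fillLike(bool sync, CompletionCb cb) {
    // ... perform (or start) the I/O ...
    cb(Status{true, sync ? "completed inline" : "completed by event loop"});
}

// Synchronous consumption, mirroring TransportLayerASIO::wait():
// capture the completion status into a local and return it.
Status syncWait() {
    Status waitStatus{false, "not set"};
    fillLike(true, [&waitStatus](Status result) { waitStatus = result; });
    return waitStatus;
}

// Asynchronous consumption, mirroring asyncWait(): just forward the callback.
void asyncWaitLike(CompletionCb cb) {
    fillLike(false, std::move(cb));
}

int main() {
    std::cout << "sync: " << syncWait().reason << "\n";
    asyncWaitLike([](Status s) { std::cout << "async: " << s.reason << "\n"; });
}
```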
## 2.2 Summary

The transport_layer socket handling and transport-layer management submodule consists of two core classes, transport_layer_manager and transport_layer_asio. Their main interfaces are summarized in the table below:

![](https://static001.geekbang.org/infoq/04/04c5f0bc85ba598f4dcda2e11c2e9ae7.png)

Transport_layer_manager initializes the TransportLayer and the serviceExecutor. The net.transportLayer option accepts "legacy" and "asio"; legacy is deprecated, and the current kernel effectively only supports asio. With asio, the TransportLayer is implemented by TransportLayerASIO, and the serviceExecutor can be either the adaptive dynamic thread model or the synchronous thread-per-connection model.

Socket creation, bind(), listen() and the registration of accept events are all handled by this module. It also hooks into the Ticket data submodule, which performs the subsequent synchronous and asynchronous reads and writes, and into the ServiceEntryPoint service entry submodule, so that after socket initialization and accept registration the entry point module can take over new connections in an orderly way.

The next sections analyze the related ServiceEntryPoint service entry submodule and the Ticket data submodule.

# 3. The service_entry_point service entry point submodule

The service_entry_point submodule is responsible for handling new connections, managing Session objects, and running the callback that fires once a complete message has been received (message parsing, authentication, dispatch into the engine layer, and so on).

Its implementation consists mainly of the following files:

![](https://static001.geekbang.org/infoq/cf/cffc5b74689499f249cb454bb8c0d845.png)

All files whose names start with service_entry_point belong to this module: service_entry_point_utils* creates the worker threads, while service_entry_point_impl* handles the new-connection callback and session management.

## 3.1 Core code implementation

The code of this submodule is fairly compact: it consists of the ServiceEntryPointImpl class and the thread-creation helper in service_entry_point_utils.

### 3.1.1 ServiceEntryPointImpl class

The main members and interfaces of ServiceEntryPointImpl are:

```cpp
class ServiceEntryPointImpl : public ServiceEntryPoint {
    MONGO_DISALLOW_COPYING(ServiceEntryPointImpl);
public:
    // Constructor
    explicit ServiceEntryPointImpl(ServiceContext* svcCtx);
    // The next three interfaces control session handling
    void startSession(transport::SessionHandle session) final;
    void endAllSessions(transport::Session::TagMask tags) final;
    bool shutdown(Milliseconds timeout) final;
    // Session statistics
    Stats sessionStats() const final;
    ......
private:
    // List of all ServiceStateMachine objects
    using SSMList = stdx::list<std::shared_ptr<ServiceStateMachine>>;
    // Iterator type for SSMList
    using SSMListIterator = SSMList::iterator;
    // Assigned in ServiceEntryPointImpl::ServiceEntryPointImpl;
    // points to ServiceContextMongoD (mongod) or ServiceContextNoop (mongos)
    ServiceContext* const _svcCtx;
    // Member not actually used anywhere in the code
    AtomicWord<std::size_t> _nWorkers;
    // Lock protecting _sessions
    mutable stdx::mutex _sessionsMutex;
    // One SSM per new connection, stored in ServiceEntryPointImpl._sessions
    SSMList _sessions;
    // Maximum number of connections
    size_t _maxNumConnections{DEFAULT_MAX_CONN};
    // Current number of connections, excluding closed ones
    AtomicWord<size_t> _currentConnections{0};
    // Total number of connections ever created, including closed ones
    AtomicWord<size_t> _createdConnections{0};
};
```

The interfaces of this class mostly deal with session control; its member variables are described in the table below:

![](https://static001.geekbang.org/infoq/4d/4d55c0b95a1708b085050565238e27bc.png)
The most important interface of ServiceEntryPointImpl is startSession(), which runs the internal callback for every new connection. Its implementation is:

```cpp
// Callback handling after a new connection arrives
void ServiceEntryPointImpl::startSession(transport::SessionHandle session) {
    // Get the server-side and client-side addresses of the new connection
    const auto& remoteAddr = session->remote().sockAddr();
    const auto& localAddr = session->local().sockAddr();
    // Record both addresses in the session
    auto restrictionEnvironment = stdx::make_unique<RestrictionEnvironment>(*remoteAddr, *localAddr);
    RestrictionEnvironment::set(session, std::move(restrictionEnvironment));
    ......

    // Get the transportMode: kAsynchronous or kSynchronous
    auto transportMode = _svcCtx->getServiceExecutor()->transportMode();
    // Build the SSM
    auto ssm = ServiceStateMachine::create(_svcCtx, session, transportMode);
    {  // This block counts the connection and adds the SSM to the _sessions list
        stdx::lock_guard<decltype(_sessionsMutex)> lk(_sessionsMutex);
        connectionCount = _sessions.size() + 1;  // increment the connection count
        if (connectionCount <= _maxNumConnections) {
            // Store the session of the new connection in the _sessions list;
            // one SSM per new connection, kept in ServiceEntryPointImpl._sessions
            ssmIt = _sessions.emplace(_sessions.begin(), ssm);
            _currentConnections.store(connectionCount);
            _createdConnections.addAndFetch(1);
        }
    }
    // Connection limit exceeded: bail out
    if (connectionCount > _maxNumConnections) {
        ......
        return;
    }
    // Cleanup when the connection is closed
    ssm->setCleanupHook([ this, ssmIt, session = std::move(session) ] {
         ......
    });
    // Determine whether the transport mode is synchronous or asynchronous,
    // i.e. the adaptive thread model or the synchronous thread model
    auto ownership = ServiceStateMachine::Ownership::kOwned;
    if (transportMode == transport::Mode::kSynchronous) {
        ownership = ServiceStateMachine::Ownership::kStatic;
    }
    // ServiceStateMachine::start: this is where the service state machine module takes over
    ssm->start(ownership);
}
```

After obtaining the server-side and client-side addresses of the connection and recording them in the session, the interface builds a service state machine (ServiceStateMachine, "ssm") from the session, transportMode and _svcCtx. A new connection maps to exactly one session, and a session maps to exactly one SSM; the three stay in a strict one-to-one relationship.

In the end, startSession() is what ties the service entry point submodule, the session submodule and the SSM state machine submodule together.
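The bookkeeping part of startSession() — counting the connection under a mutex, rejecting it when the limit is exceeded, and registering a cleanup hook that removes it again when the connection ends — can be sketched as follows. The class and member names are illustrative only, not MongoDB's.

```cpp
#include <atomic>
#include <functional>
#include <iostream>
#include <list>
#include <memory>
#include <mutex>

struct Conn {
    std::function<void()> cleanup;  // runs when the connection ends
};

class EntryPoint {
public:
    explicit EntryPoint(size_t maxConns) : _maxConns(maxConns) {}

    bool startSession() {
        auto conn = std::make_shared<Conn>();
        std::list<std::shared_ptr<Conn>>::iterator it;
        size_t count;
        {
            std::lock_guard<std::mutex> lk(_mutex);
            count = _sessions.size() + 1;
            if (count > _maxConns)
                return false;  // over the connection limit: reject
            it = _sessions.emplace(_sessions.begin(), conn);
            _current.store(count);
            _created.fetch_add(1);
        }
        // Cleanup hook: drop the connection from the list when it closes
        conn->cleanup = [this, it] {
            std::lock_guard<std::mutex> lk(_mutex);
            _sessions.erase(it);
            _current.store(_sessions.size());
        };
        return true;
    }

    size_t current() const { return _current.load(); }

private:
    std::mutex _mutex;
    std::list<std::shared_ptr<Conn>> _sessions;
    const size_t _maxConns;
    std::atomic<size_t> _current{0};
    std::atomic<size_t> _created{0};
};

int main() {
    EntryPoint ep(2);
    std::cout << ep.startSession() << "\n";  // 1: accepted
    std::cout << ep.startSession() << "\n";  // 1: accepted
    std::cout << ep.startSession() << "\n";  // 0: rejected, limit reached
    std::cout << "current connections: " << ep.current() << "\n";
}
```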
### 3.1.2 service_entry_point_utils core code

service_entry_point_utils contains a single interface, launchServiceWorkerThread(), which creates the worker threads and sets each worker's stack size: if the system default stack limit is larger than 1MB, the worker stack is capped at 1MB; if it is smaller than 1MB, the system value is kept and a warning is logged. The function is implemented as follows:

```cpp
Status launchServiceWorkerThread(stdx::function<void()> task) {
        static const size_t kStackSize = 1024 * 1024;  // 1MB
        struct rlimit limits;
        // Read the system stack-size limit
        invariant(getrlimit(RLIMIT_STACK, &limits) == 0);
        // If the system stack limit is larger than 1MB, cap the worker thread stack at 1MB
        if (limits.rlim_cur > kStackSize) {
            size_t stackSizeToSet = kStackSize;
            int failed = pthread_attr_setstacksize(&attrs, stackSizeToSet);
            if (failed) {
                const auto ewd = errnoWithDescription(failed);
                warning() << "Failed to set thread stack size: " << ewd;
            }
        } else if (limits.rlim_cur < kStackSize) {
            // System stack limit below 1MB: keep it and print a warning
            ......
        }
        ......
        // Wrap the task and hand it to the new thread
        auto ctx = stdx::make_unique<stdx::function<void()>>(std::move(task));
        int failed = pthread_create(&thread, &attrs, runFunc, ctx.get());
        ......
}
```
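A minimal standalone version of the stack-size handling described above (read RLIMIT_STACK, cap the worker stack at 1MB only if the system default is larger, then create the thread through pthread attributes) might look like the sketch below. It is an illustration, not the MongoDB helper itself.

```cpp
#include <cstdio>
#include <pthread.h>
#include <sys/resource.h>

// Worker entry point
static void* workerMain(void*) {
    std::puts("worker running on a capped stack");
    return nullptr;
}

int main() {
    constexpr size_t kStackSize = 1024 * 1024;  // 1MB

    // Read the system stack limit, as launchServiceWorkerThread() does
    rlimit limits{};
    if (getrlimit(RLIMIT_STACK, &limits) != 0)
        return 1;

    pthread_attr_t attrs;
    pthread_attr_init(&attrs);
    // Only shrink the stack if the system default is larger than 1MB
    if (limits.rlim_cur > kStackSize)
        pthread_attr_setstacksize(&attrs, kStackSize);

    pthread_t tid;
    if (pthread_create(&tid, &attrs, workerMain, nullptr) != 0)
        return 1;
    pthread_attr_destroy(&attrs);
    pthread_join(tid, nullptr);
}
```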
## 3.2 Summary

The service_entry_point submodule handles the callback after a new connection arrives and creates the worker threads. It cooperates with the session submodule and the SSM service state machine submodule to drive the normal send/receive flow. Only the core interfaces were shown above; the table below summarizes all interfaces of the module:

![](https://static001.geekbang.org/infoq/36/361c2f04b4b66c616753bd12a4ffa8ae.png)

# 4. The Ticket data send/receive submodule

The Ticket submodule is responsible for: calling into the session submodule to drive the underlying asio library, splitting receiving and sending into two separate classes, reading complete MongoDB messages, and running the callback after a complete message has been received or sent.

## 4.1 ASIOTicket class core code

The Ticket module is implemented mainly by the ASIOTicket class, whose structure is:

```cpp
// ASIOSinkTicket and ASIOSourceTicket below derive from this class; it controls sending and receiving
class TransportLayerASIO::ASIOTicket : public TicketImpl {
public:
    // Constructor
    explicit ASIOTicket(const ASIOSessionHandle& session, Date_t expiration);
    // Get the session id
    SessionId sessionId() const final {
        return _sessionId;
    }
    // Not used in asio mode; only relevant to the legacy model
    Date_t expiration() const final {
        return _expiration;
    }

    // The following interfaces handle sending and receiving
    void fill(bool sync, TicketCallback&& cb);
protected:
    void finishFill(Status status);
    bool isSync() const;
    virtual void fillImpl() = 0;
private:
    // Session information; one session per connection
    std::weak_ptr<ASIOSession> _session;
    // Every session has a unique id
    const SessionId _sessionId;
    // Not used in asio mode; only relevant to the legacy model
    const Date_t _expiration;
    // Callback that runs once the data has been sent or received successfully
    TicketCallback _fillCallback;
    // Whether the data is processed synchronously or asynchronously; asynchronous by default
    bool _fillSync;
};
```

The class contains several member variables, described in the table below:

![](https://static001.geekbang.org/infoq/9f/9f0008d66dbab9b0c1712377fde9b03f.png)

In the implementation, receiving and sending are handled by two separate classes: ASIOSourceTicket for receiving and ASIOSinkTicket for sending, both derived from ASIOTicket. Their main structure is:

```cpp
// Receive-side ticket
class TransportLayerASIO::ASIOSourceTicket : public TransportLayerASIO::ASIOTicket {
public:
    // Constructor
    ASIOSourceTicket(const ASIOSessionHandle& session, Date_t expiration, Message* msg);
protected:
    // Receive implementation
    void fillImpl() final;
private:
    // Callback after the MongoDB header has been received
    void _headerCallback(const std::error_code& ec, size_t size);
    // Callback after the MongoDB body has been received
    void _bodyCallback(const std::error_code& ec, size_t size);

    // Buffer holding the raw bytes read from the network
    SharedBuffer _buffer;
    // Message that the data from _buffer is handed to
    Message* _target;
};

// Send-side ticket
class TransportLayerASIO::ASIOSinkTicket : public TransportLayerASIO::ASIOTicket {
public:
    // Constructor
    ASIOSinkTicket(const ASIOSessionHandle& session, Date_t expiration, const Message& msg);
protected:
    // Send implementation
    void fillImpl() final;
private:
    // Callback after the data has been sent
    void _sinkCallback(const std::error_code& ec, size_t size);
    // The message to send
    Message _msgToSend;
};
```

As the code shows, ASIOSinkTicket and ASIOSourceTicket expose almost identical interfaces and members; only their concrete implementations differ. The core code of both classes is analyzed below.
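Before diving into the two subclasses, the shared fill()/fillImpl() split — the base class drives the completion callback while each subclass only supplies its own I/O in fillImpl() — can be sketched as follows. The names and the fake I/O are illustrative, not MongoDB's actual classes.

```cpp
#include <functional>
#include <iostream>
#include <string>
#include <utility>

using TicketCallback = std::function<void(const std::string& status)>;

class TicketBase {
public:
    virtual ~TicketBase() = default;
    // fill() stores the callback and kicks off the concrete operation
    void fill(bool sync, TicketCallback cb) {
        _sync = sync;
        _cb = std::move(cb);
        fillImpl();
    }
protected:
    bool isSync() const { return _sync; }
    // Subclasses call this when their operation completes
    void finishFill(const std::string& status) { _cb(status); }
    virtual void fillImpl() = 0;
private:
    bool _sync = false;
    TicketCallback _cb;
};

class SourceTicket : public TicketBase {   // receive side
    void fillImpl() override { finishFill("received one message"); }
};

class SinkTicket : public TicketBase {     // send side
    void fillImpl() override { finishFill("sent one message"); }
};

int main() {
    SourceTicket src;
    SinkTicket snk;
    src.fill(true, [](const std::string& s) { std::cout << s << "\n"; });
    snk.fill(true, [](const std::string& s) { std::cout << s << "\n"; });
}
```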
### 4.1.1 ASIOSourceTicket: receiving data

The core receive path is:

```cpp
// Receive-side fillImpl implementation
void TransportLayerASIO::ASIOSourceTicket::fillImpl() {
    // Get the session
    auto session = getSession();
    if (!session)
        return;
    // First read the MongoDB header, whose size is the fixed kHeaderSize bytes
    const auto initBufSize = kHeaderSize;
    _buffer = SharedBuffer::allocate(initBufSize);

    // Call TransportLayerASIO::ASIOSession::read to read the raw bytes into _buffer;
    // once the header has been read, run _headerCallback
    session->read(isSync(),
                  asio::buffer(_buffer.get(), initBufSize),  // read the header fields first
                  [this](const std::error_code& ec, size_t size) { _headerCallback(ec, size); });
}

// Callback after the MongoDB header has been read
void TransportLayerASIO::ASIOSourceTicket::_headerCallback(const std::error_code& ec, size_t size) {
    ......
    // Get the session
    auto session = getSession();
    if (!session)
        return;
    // Parse the header out of _buffer
    MSGHEADER::View headerView(_buffer.get());
    // Get the message length
    auto msgLen = static_cast<size_t>(headerView.getMessageLength());
    // Too small or too large: bail out with an error
    if (msgLen < kHeaderSize || msgLen > MaxMessageSizeBytes) {
        .......
        return;
    }
    ....
    // Not yet a complete MongoDB message: keep reading until msgLen bytes are available.
    // Note the realloc, which keeps header and body in the same buffer
    _buffer.realloc(msgLen);
    MsgData::View msgView(_buffer.get());

    // Call TransportLayerASIO::ASIOSession::read to read the body
    session->read(isSync(),
      // the body is read into this buffer
      asio::buffer(msgView.data(), msgView.dataLen()),
      // callback once the read succeeds
      [this](const std::error_code& ec, size_t size) { _bodyCallback(ec, size); });
}

// _headerCallback parsed the header and obtained the message length; this handles the body
void TransportLayerASIO::ASIOSourceTicket::_bodyCallback(const std::error_code& ec, size_t size) {
    ......
    // Move the buffer into _target
    _target->setData(std::move(_buffer));
    // Traffic statistics
    networkCounter.hitPhysicalIn(_target->size());
    // TransportLayerASIO::ASIOTicket::finishFill
    finishFill(Status::OK());  // body complete: start the next stage
    // The next stage after reading the message is processing it, via ServiceStateMachine::_processMessage
}
```

A MongoDB message consists of a msg header followed by a msg body; a complete MongoDB packet is laid out as follows:

![](https://static001.geekbang.org/infoq/aa/aa6ca4e0c1ac1cb244059055d43d6b25.png)

The fields and the body shown above are described in the following table:

![](https://static001.geekbang.org/infoq/64/647ce0dc2e22635633a6d1f9aa5d92c5.png)
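To make the layout concrete, the sketch below parses the standard 16-byte little-endian wire header (messageLength, requestID, responseTo, opCode) out of a hand-built demo buffer and applies the same messageLength sanity check that _headerCallback performs. The 48MB cap mirrors MongoDB's default MaxMessageSizeBytes; the demo packet is fabricated for the example and is not a real server response.

```cpp
#include <cstdint>
#include <cstring>
#include <iostream>
#include <string>
#include <vector>

struct MsgHeader {
    int32_t messageLength;  // total size including this 16-byte header
    int32_t requestID;
    int32_t responseTo;
    int32_t opCode;
};

// Read a little-endian 32-bit integer from a byte pointer
static int32_t readLE32(const uint8_t* p) {
    return int32_t(p[0]) | int32_t(p[1]) << 8 | int32_t(p[2]) << 16 | int32_t(p[3]) << 24;
}

int main() {
    const size_t kHeaderSize = 16;
    const int32_t kMaxMessageSizeBytes = 48 * 1000 * 1000;  // MongoDB's default cap (48MB)

    // 20-byte demo packet: header (messageLength = 20, opCode = 2013 / OP_MSG) plus a 4-byte body
    std::vector<uint8_t> packet = {20, 0, 0, 0,  1, 0, 0, 0,  0, 0, 0, 0,
                                   0xDD, 0x07, 0, 0,  'p', 'i', 'n', 'g'};

    MsgHeader h;
    h.messageLength = readLE32(&packet[0]);
    h.requestID     = readLE32(&packet[4]);
    h.responseTo    = readLE32(&packet[8]);
    h.opCode        = readLE32(&packet[12]);

    // Same sanity check _headerCallback performs on messageLength
    if (h.messageLength < int32_t(kHeaderSize) || h.messageLength > kMaxMessageSizeBytes) {
        std::cerr << "invalid messageLength\n";
        return 1;
    }
    size_t bodyLen = size_t(h.messageLength) - kHeaderSize;
    std::cout << "opCode=" << h.opCode << " bodyLen=" << bodyLen << " body="
              << std::string(packet.begin() + kHeaderSize, packet.end()) << "\n";
}
```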
The core interfaces of ASIOSourceTicket all revolve around this layout; the complete MongoDB receive flow is:

1. Read the MongoDB header and parse the messageLength field from it.
2. Check that messageLength lies in the valid range: it must not be smaller than the header size and must not exceed MaxMessageSizeBytes.
3. If the length check passes, the header has been read completely, and the _headerCallback callback runs.
4. realloc the buffer to messageLength bytes so there is room for the body (header and body stay in the same buffer).
5. Read the remaining body bytes; once the body is complete, the _bodyCallback callback runs.

### 4.1.2 ASIOSinkTicket: sending data

ASIOSinkTicket has no message-parsing step, so it is simpler than the receive side:

```cpp
// Callback after the data has been sent successfully
void TransportLayerASIO::ASIOSinkTicket::_sinkCallback(const std::error_code& ec, size_t size) {
    // Count the bytes sent on the network
    networkCounter.hitPhysicalOut(_msgToSend.size());
    // Continue with the corresponding SSM state
    finishFill(ec ? errorCodeToStatus(ec) : Status::OK());
}

// Send-side fillImpl
void TransportLayerASIO::ASIOSinkTicket::fillImpl() {
    // Get the session
    auto session = getSession();
    if (!session)
        return;

    // Call TransportLayerASIO::ASIOSession::write to send the data;
    // once the send succeeds, run _sinkCallback
    session->write(isSync(),
       asio::buffer(_msgToSend.buf(), _msgToSend.size()),
       // callback after the data has been sent
       [this](const std::error_code& ec, size_t size) { _sinkCallback(ec, size); });
}
```

## 4.2 Summary

As the analysis shows, the Ticket module calls into the session submodule for the actual reads, writes and parsing; once a complete MongoDB message has been read or written, control is handed to the SSM service state machine module for scheduling.

The main interfaces of the ticket module are summarized in the table below:

![](https://static001.geekbang.org/infoq/95/95d5fff28a72434f0034a6770e1e6aec.png)

In short, the Ticket module relies on the session module for the real data I/O, and the follow-up processing after a complete MongoDB message has been received or sent is handed over to the SSM service state machine module.
# 5. The Session submodule

The session submodule records the HostAndPort of the connection and talks directly to the asio library to perform synchronous or asynchronous reads and writes. Each new connection fd corresponds to exactly one session, so operations on the fd map directly to operations on the session. Its implementation lives mainly in the following files:

![](https://static001.geekbang.org/infoq/7e/7ebbdfb2c99b28c2c01b529060bbccde.png)

## 5.1 Core code of the session submodule

```cpp
class TransportLayerASIO::ASIOSession : public Session {
    // Constructor
    ASIOSession(TransportLayerASIO* tl, GenericSocket socket);
    // The transport layer used by this session
    TransportLayer* getTransportLayer();
    // The next four interfaces deal with the socket: local/remote address, fd access, shutdown
    const HostAndPort& remote();
    const HostAndPort& local();
    GenericSocket& getSocket();
    void shutdown();

    // The next four interfaces call into the asio library for synchronous and asynchronous I/O
    void read(...)
    void write(...)
    void opportunisticRead(...)
    void opportunisticWrite(...)

    // Remote address
    HostAndPort _remote;
    // Local address
    HostAndPort _local;
    // The fd; assigned in TransportLayerASIO::_acceptConnection; one socket per connection
    GenericSocket _socket;
    // SSL-related members, not analyzed here
#ifdef MONGO_CONFIG_SSL
    boost::optional<asio::ssl::stream<decltype(_socket)>> _sslSocket;
    bool _ranHandshake = false;
#endif

    // The TransportLayerASIO this socket belongs to; assigned in TransportLayerASIO::_acceptConnection(...)
    TransportLayerASIO* const _tl;
}
```

The three most important interfaces of the class are ASIOSession(...), opportunisticRead(...) and opportunisticWrite(...), which handle socket setup, low-level reads through asio, and low-level writes through asio respectively. Their implementation is:

```cpp
// Constructor, called from TransportLayerASIO::_acceptConnection
ASIOSession(TransportLayerASIO* tl, GenericSocket socket)
    // Initialize the fd and the transport layer pointer
    : _socket(std::move(socket)), _tl(tl) {
    std::error_code ec;

    // In asynchronous mode the socket is set to non-blocking
    _socket.non_blocking(_tl->_listenerOptions.transportMode == Mode::kAsynchronous, ec);
    fassert(40490, ec.value() == 0);

    // Get the socket family
    auto family = endpointToSockAddr(_socket.local_endpoint()).getType();
    // For AF_INET / AF_INET6 sockets
    if (family == AF_INET || family == AF_INET6) {
        // no_delay and keep_alive socket options
        _socket.set_option(asio::ip::tcp::no_delay(true));
        _socket.set_option(asio::socket_base::keep_alive(true));
        // Keepalive system parameters
        setSocketKeepAliveParams(_socket.native_handle());
    }

    // Get the local and remote addresses
    _local = endpointToHostAndPort(_socket.local_endpoint());
    _remote = endpointToHostAndPort(_socket.remote_endpoint(ec));
    if (ec) {
        LOG(3) << "Unable to get remote endpoint address: " << ec.message();
    }
}

// Read data
void opportunisticRead(...) {
    std::error_code ec;
    // Synchronous mode: blocking read; asynchronous mode: non-blocking read
    auto size = asio::read(stream, buffers, ec);
    // In async mode a non-blocking read can return less than the requested size,
    // reported as would_block / try_again; in that case continue asynchronously
    if ((ec == asio::error::would_block || ec == asio::error::try_again) && !sync) {
        MutableBufferSequence asyncBuffers(buffers);
        if (size > 0) {
            asyncBuffers += size;  // advance the buffer offset past what was already read
        }

        // Continue the read asynchronously; the handler runs once the requested length has arrived
        asio::async_read(stream, asyncBuffers, std::forward<CompleteHandler>(handler));
    } else {
        // Blocking read: when read() returns, size bytes have been read,
        // so the handler can run immediately
        handler(ec, size);
    }
}
```

opportunisticRead first calls asio::read(stream, buffers, ec) to read as many bytes as buffers holds; the buffer size is exactly the number of bytes to read. In the synchronous threading model this is a blocking read that only returns once size bytes have arrived. In the asynchronous model the read is non-blocking: if the kernel's socket buffer is drained before size bytes have been read, the remaining bytes are fetched with async_read.

Once the full size bytes in buffers have been read, the handler callback runs in either mode, and the subsequent parsing and processing of the data begins.
**The core code for sending data is:**

```cpp
// Send data
void opportunisticWrite(...) {
    std::error_code ec;
    // Synchronous mode: blocking write until everything has been sent; asynchronous mode: non-blocking write
    auto size = asio::write(stream, buffers, ec);

    // In async mode, would_block / try_again means not all data has been written yet; continue asynchronously
    if ((ec == asio::error::would_block || ec == asio::error::try_again) && !sync) {
        ConstBufferSequence asyncBuffers(buffers);
        if (size > 0) {  // advance the buffer pointer past what was already written
            asyncBuffers += size;
        }
        // Send the rest asynchronously; the handler runs once the write completes
        asio::async_write(stream, asyncBuffers, std::forward<CompleteHandler>(handler));
    } else {
        // Synchronous write succeeded: run the handler directly
        handler(ec, size);
    }
}
```

Sending mirrors receiving and also comes in a synchronous and an asynchronous flavour. The synchronous path is a blocking write that only returns once asio::write() has sent all the data. The asynchronous path is a non-blocking write: asio::write() may send only part of the data, so the remainder is sent with asio::async_write().

In both modes, once all the data has been sent, the handler() callback runs to continue the processing flow.
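The "advance the buffer by what was already written and retry with the remainder" idea behind asyncBuffers += size can also be shown with a plain POSIX write loop. The sketch below uses a pipe and blocking write() purely so that it runs on its own; it is not MongoDB code.

```cpp
#include <cerrno>
#include <cstdio>
#include <string>
#include <unistd.h>

// Write len bytes, advancing the offset whenever only part of the data was accepted.
static bool sendAll(int fd, const char* data, size_t len) {
    size_t sent = 0;
    while (sent < len) {
        ssize_t n = ::write(fd, data + sent, len - sent);
        if (n > 0) {
            sent += static_cast<size_t>(n);  // advance past the bytes already written
        } else if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
            continue;  // would block: a real async sender would re-arm async_write here
        } else if (n < 0 && errno == EINTR) {
            continue;  // interrupted: retry
        } else {
            return false;
        }
    }
    return true;
}

int main() {
    int fds[2];
    if (pipe(fds) != 0)
        return 1;
    std::string msg = "partial-write demo payload";
    bool ok = sendAll(fds[1], msg.data(), msg.size());
    std::printf("sendAll %s, %zu bytes\n", ok ? "succeeded" : "failed", msg.size());
    close(fds[0]);
    close(fds[1]);
}
```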
## 5.2 Summary

As the code shows, the session module is the layer that talks directly to the asio library to read and write data. Its core interfaces are summarized in the table below:

![](https://static001.geekbang.org/infoq/43/43cd025d45487b2ca4edf3c54b3080ea.png)

# 6. Summary

[Mongodb network transport source code implementation and performance tuning](https://my.oschina.net/u/4087916/blog/4295038) analyzed the ASIO library and the service_executor submodule (the thread model submodule) in detail. This article adds the transport_layer socket handling and transport-layer management submodule, the session submodule, the Ticket data send/receive submodule, and the service_entry_point submodule.

The transport_layer submodule consists of the transport_layer_manager and transport_layer_asio classes. They initialize the net-related configuration and the sockets, and the interfaces of transport_layer_asio tie the module to the Ticket data submodule and the service entry point submodule.

The service entry point submodule consists of the ServiceEntryPointImpl class and the thread-creation helper in service_entry_point_utils; it handles and controls the accept processing of new connections. Through startSession() it connects the entry point submodule, the session submodule and the SSM state machine submodule.

The Ticket submodule calls into the session submodule for the underlying asio processing, splits receiving and sending into two classes, reads complete MongoDB messages, and runs the callback after a complete message has been received or sent; that callback processing is managed and scheduled by the SSM service state machine module.

The session submodule records HostAndPort and interacts directly with asio to perform synchronous and asynchronous reads and writes; one connection fd corresponds to one session, and operations on the fd map directly to session operations.

With this, the only part of the MongoDB network transport layer left to analyze is the service_state_machine scheduling submodule. It is more complex than the submodules covered here, so it will be analyzed separately in Part 3 of this series.

All annotated source code for this article is available here: [detailed annotated source of the MongoDB network transport module](https://github.com/y123456yz/reading-and-annotate-mongodb-3.6.1)