MongoDB Source Code Series: Network Transport Layer Module Implementation (Part 2)

# About the Author

Former Didi Chuxing technical expert, currently in charge of the OPPO document database MongoDB team, responsible for kernel development and operations of OPPO's MongoDB deployment (tens of millions of peak TPS, tens of trillions of documents), with a long-term focus on distributed caching, high-performance servers, databases, and middleware. The series "MongoDB Kernel Source Code Design, Performance Optimization, and Best Operational Practices" will continue. GitHub: [https://github.com/y123456yz](https://github.com/y123456yz)

# 1. Overview

The articles in this MongoDB source code series build on one another; before reading this one, please first read <>.

The earlier article <> analyzed how to read a source tree of millions of lines, the Asio network library, and the threading models of the transport layer module. Due to length constraints, the following submodules of the transport layer were not covered there; this article continues with:

1. transport_layer socket handling and transport layer management submodule
2. session submodule
3. Ticket data send/receive submodule
4. service_entry_point submodule
5. service_state_machine state machine submodule (analyzed in "Network Transport Layer Module Implementation (Part 3)")
6. service_executor threading model submodule (analyzed in "Network Transport Layer Module Implementation (Part 4)")

# 2. transport_layer socket handling and transport layer management submodule

The transport_layer submodule covers socket initialization, asynchronous accept handling built on the asio library, and the management and initialization of the different threading models. Its implementation lives mainly in the following files:

![](https://static001.geekbang.org/infoq/44/4423735ecfa73dd118536a03966e8005.png)

The figure above lists the source files of this submodule. The mock and test files are used for simulation and testing, so the core implementation is contained in just the few files summarized below:

![](https://static001.geekbang.org/infoq/79/79f0f5e3bd14376fe79ff6eae812fba5.png)

## 2.1 Core code

The core of this submodule is implemented by the TransportLayerManager and TransportLayerASIO classes.

### 2.1.1 TransportLayerManager core code

The main members and interfaces of TransportLayerManager are:

```cpp
// Manages network sessions and message handling; constructed in
// createWithConfig and stored in _tls.
class TransportLayerManager final : public TransportLayer {
    // The real implementation of the following four interfaces
    // lives in TransportLayerASIO.
    Ticket sourceMessage(...) override;
    Ticket sinkMessage(...) override;
    Status wait(Ticket&& ticket) override;
    void asyncWait(...) override;
    // Configuration-driven initialization.
    std::unique_ptr<TransportLayer> createWithConfig(...);

    // Assigned in createWithConfig; holds the TransportLayerASIO instance.
    // In practice the container has exactly one member: TransportLayerASIO.
    std::vector<std::unique_ptr<TransportLayer>> _tls;
};
```

TransportLayerManager contains a _tls member. The most important interface of the class, createWithConfig, is implemented as follows:

```cpp
// Builds the transport layer from the configuration; called from _initAndListen.
std::unique_ptr<TransportLayer> TransportLayerManager::createWithConfig(...) {
    std::unique_ptr<TransportLayer> transportLayer;
    // Service type, i.e. whether this instance is mongod or mongos:
    // mongod uses ServiceEntryPointMongod, mongos uses ServiceEntryPointMongos.
    auto sep = ctx->getServiceEntryPoint();
    // net.transportLayer mode; the default is asio, legacy is obsolete.
    if (config->transportLayer == "asio") {
        // Synchronous or asynchronous; the default is synchronous.
        if (config->serviceExecutor == "adaptive") {
            // Dynamic thread pool model, i.e. asynchronous mode.
            opts.transportMode = transport::Mode::kAsynchronous;
        } else if (config->serviceExecutor == "synchronous") {
            // One thread per connection, i.e. synchronous mode.
            opts.transportMode = transport::Mode::kSynchronous;
        }
        // For the asio configuration, construct a TransportLayerASIO.
        auto transportLayerASIO = stdx::make_unique<transport::TransportLayerASIO>(opts, sep);
        if (config->serviceExecutor == "adaptive") { // asynchronous
            // Build the executor for the dynamic thread model: ServiceExecutorAdaptive.
            ctx->setServiceExecutor(stdx::make_unique<ServiceExecutorAdaptive>(
                ctx, transportLayerASIO->getIOContext()));
        } else if (config->serviceExecutor == "synchronous") { // synchronous
            // Build the thread-per-connection executor: ServiceExecutorSynchronous.
            ctx->setServiceExecutor(stdx::make_unique<ServiceExecutorSynchronous>(ctx));
        }
        // Convert transportLayerASIO to the TransportLayer base type.
        transportLayer = std::move(transportLayerASIO);
    }
    // Store transportLayer in retVector and return.
    std::vector<std::unique_ptr<TransportLayer>> retVector;
    retVector.emplace_back(std::move(transportLayer));
    return stdx::make_unique<TransportLayerManager>(std::move(retVector));
}
```
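The serviceExecutor branching above can be distilled into a small, self-contained sketch. `Mode`, `Executor`, and `makeExecutorFromConfig` below are hypothetical stand-ins for `transport::Mode` and the two ServiceExecutor flavors, invented for illustration; they are not the real mongodb types:

```cpp
#include <cassert>
#include <memory>
#include <stdexcept>
#include <string>

// Stand-in for transport::Mode.
enum class Mode { kSynchronous, kAsynchronous };

// Stand-in for the executor object createWithConfig builds.
struct Executor { Mode mode; };

// Maps the net.serviceExecutor option to a transport mode, mirroring the
// two-way decision that createWithConfig() makes.
std::unique_ptr<Executor> makeExecutorFromConfig(const std::string& serviceExecutor) {
    if (serviceExecutor == "adaptive") {
        // Dynamic thread pool => asynchronous IO.
        return std::make_unique<Executor>(Executor{Mode::kAsynchronous});
    } else if (serviceExecutor == "synchronous") {
        // One thread per connection => synchronous IO.
        return std::make_unique<Executor>(Executor{Mode::kSynchronous});
    }
    throw std::invalid_argument("unknown serviceExecutor: " + serviceExecutor);
}
```

The point of the sketch is only the pairing: the "adaptive" executor always goes with kAsynchronous transport, and "synchronous" with kSynchronous.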
The createWithConfig function selects the TransportLayer according to the configuration file: if net.transportLayer is "asio", TransportLayerASIO performs the low-level network IO; if it is "legacy", TransportLayerLegacy is used. The "legacy" mode is obsolete, so this article only analyzes the "asio" implementation.

The "asio" mode supports two threading models: adaptive (dynamic threads) and synchronous. In adaptive mode the thread count tracks the load on mongodb: under heavier load the thread count grows, and when the load drops it shrinks automatically. The synchronous model is one thread per connection, so the thread count is proportional to the number of connections: the more connections, the more threads.

Internally, mongodb marks the asio threading model with opts.transportMode; the two models map to the following flags:

![](https://static001.geekbang.org/infoq/29/2961bc42399f91357e46bfcee43efcfb.png)

Note: there is a reason the adaptive model is marked kAsynchronous and the synchronous model kSynchronous: network IO in the adaptive dynamic-thread model is asynchronous, built on epoll, while network IO in the thread-per-connection model uses synchronous reads and writes. For details of the mongodb network threading models and their trade-offs, see: [Mongodb网络传输处理源码实现及性能调优-体验内核性能极致设计](https://my.oschina.net/u/4087916/blog/4295038)

### 2.1.2 TransportLayerASIO core code

The core members and interfaces of TransportLayerASIO are:

```cpp
class TransportLayerASIO final : public TransportLayer {
    // The following four interfaces handle socket reads and writes.
    Ticket sourceMessage(...);
    Ticket sinkMessage(...);
    Status wait(Ticket&& ticket);
    void asyncWait(Ticket&& ticket, TicketCallback callback);
    void end(const SessionHandle& session);
    // New connection handling.
    void _acceptConnection(GenericAcceptor& acceptor);

    // IO context for network IO in the adaptive threading model.
    std::shared_ptr<asio::io_context> _workerIOContext;
    // IO context used to accept client connections.
    std::unique_ptr<asio::io_context> _acceptorIOContext;
    // The ip addresses in the bindIp configuration, used to bind, listen
    // and accept client requests.
    std::vector<std::pair<SockAddr, GenericAcceptor>> _acceptors;
    // The listener thread that accepts new client connections.
    stdx::thread _listenerThread;
    // Service type, i.e. whether this instance is mongod or mongos:
    // mongod uses ServiceEntryPointMongod, mongos uses ServiceEntryPointMongos.
    ServiceEntryPoint* const _sep = nullptr;
    // Current running state.
    AtomicWord<bool> _running{false};
    // Listener-related configuration.
    Options _listenerOptions;
};
```

As this class layout shows, the class performs bind() and listen() through the _listenerThread thread, and part of its interface implements reads and writes on established connections.

Socket initialization is implemented as follows:

```cpp
Status TransportLayerASIO::setup() {
    std::vector<std::string> listenAddrs;
    // If bindIp is not configured, listen on "127.0.0.1:27017" by default.
    if (_listenerOptions.ipList.empty()) {
        listenAddrs = {"127.0.0.1"};
    } else {
        // bindIp is a comma-separated list such as 1.1.1.1,2.2.2.2;
        // split it into the ip list.
        boost::split(listenAddrs, _listenerOptions.ipList, boost::is_any_of(","), boost::token_compress_on);
    }
    // Walk the ip list.
    for (auto& ip : listenAddrs) {
        // Build the SockAddr structures for this ip and port.
        const auto addrs = SockAddr::createAll(
            ip, _listenerOptions.port, _listenerOptions.enableIPv6 ? AF_UNSPEC : AF_INET);
        ......
        // Build the endpoint from the addr.
        asio::generic::stream_protocol::endpoint endpoint(addr.raw(), addr.addressSize);
        // Associate the acceptor with _acceptorIOContext.
        GenericAcceptor acceptor(*_acceptorIOContext);
        // epoll registration, i.e. associating the fd with epoll
        // (basic_socket_acceptor::open).
        acceptor.open(endpoint.protocol());
        // SO_REUSEADDR (basic_socket_acceptor::set_option).
        acceptor.set_option(GenericAcceptor::reuse_address(true));
        // Non-blocking mode (basic_socket_acceptor::non_blocking).
        acceptor.non_blocking(true, ec);
        // bind().
        acceptor.bind(endpoint, ec);
        if (ec) {
            return errorCodeToStatus(ec);
        }
    }
}
```
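The bindIp parsing at the top of setup() can be mimicked with the standard library alone. `parseListenAddrs` below is a hypothetical stand-in for the boost::split call plus the loopback default, not mongodb code:

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

// Turn the bindIp option ("1.1.1.1,2.2.2.2") into the listenAddrs list,
// falling back to loopback when the option is empty.
std::vector<std::string> parseListenAddrs(const std::string& ipList) {
    if (ipList.empty())
        return {"127.0.0.1"};  // default: listen on loopback only
    std::vector<std::string> addrs;
    std::stringstream ss(ipList);
    std::string ip;
    while (std::getline(ss, ip, ','))
        if (!ip.empty())       // like token_compress_on: skip empty tokens
            addrs.push_back(ip);
    return addrs;
}
```

Each entry of the resulting list then gets its own acceptor, exactly as the loop in setup() shows.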
As the code above shows, the implementation first parses the ip:port list out of the bindIp configuration, then iterates over it and binds every server-side ip:port to be listened on. Each ip:port gets its own GenericAcceptor, all acceptors are associated with the global accept IO context _acceptorIOContext, and bind() is called for every ip:port.

After bind() has bound every ip:port from the configuration file, TransportLayerASIO::start() takes over; it is implemented as follows:

```cpp
// Called from _initAndListen.
Status TransportLayerASIO::start() { // listener thread handling
    ......
    // A dedicated thread handles the accept events of listen().
    _listenerThread = stdx::thread([this] {
        // Set the thread name.
        setThreadName("listener");
        // Loop and process accept events.
        while (_running.load()) {
            asio::io_context::work work(*_acceptorIOContext);
            try {
                // Dispatch accept events.
                _acceptorIOContext->run();
            } catch (...) { // exception handling
                severe() << ......
            }
        }
    });
    ......
}
```

The listener thread dispatches accept events through the _acceptorIOContext->run() interface: when a new connection arrives, the corresponding accept callback runs. Registering that accept callback with the io_context is done by _acceptConnection(), whose core implementation is:

```cpp
// Registers the callback invoked when a new connection arrives.
void TransportLayerASIO::_acceptConnection(GenericAcceptor& acceptor) {
    // Callback executed for every connection the server accepts.
    // Note that it re-registers itself recursively, so that all pending
    // accept events are processed in one pass.
    auto acceptCb = [this, &acceptor](const std::error_code& ec, GenericSocket peerSocket) mutable {
        if (!_running.load())
            return;

        ......
        // Each new connection gets a new ASIOSession.
        std::shared_ptr<ASIOSession> session(new ASIOSession(this, std::move(peerSocket)));
        // Hand the new connection to ServiceEntryPointImpl::startSession,
        // linking this module with the ServiceEntryPointImpl entry point module.
        _sep->startSession(std::move(session));
        // Recurse until all pending accept events have been handled.
        _acceptConnection(acceptor);
    };
    // Register the server-side callback for new connections.
    acceptor.async_accept(*_workerIOContext, std::move(acceptCb));
}
```

New connection handling in TransportLayerASIO::_acceptConnection is built on the ASIO library: acceptor.async_accept registers the callback asynchronously for every listening acceptor.

When the server is notified of a new client connection, acceptCb() runs. Underneath, the ASIO library collects all accept events via epoll_wait; each accept event represents one new client connection, which is then handed to ServiceEntryPointImpl::startSession(). The whole process is recursive, guaranteeing that all pending client accept requests are processed in one pass.

Each connection gets a unique session; the session represents that one new connection, one-to-one. In addition, ServiceEntryPointImpl::startSession() ultimately performs the real accept() handling and thereby obtains the new connection.

**Note**: TransportLayerASIO::_acceptConnection() is where the TransportLayerASIO class and the ServiceEntryPointImpl class are linked together.
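The "handle one accept event, then re-register yourself" pattern in acceptCb can be modeled without asio. `Acceptor` and `acceptConnection` below are toy stand-ins: a queue simulates the pending accept events that epoll_wait would report, and plain recursion stands in for the asynchronous re-registration the real code performs via async_accept:

```cpp
#include <cassert>
#include <queue>
#include <vector>

// Toy acceptor: pendingFds simulates accept events reported by epoll_wait.
struct Acceptor {
    std::queue<int> pendingFds;
    std::vector<int> accepted;   // sessions we "started"
};

void acceptConnection(Acceptor& acceptor) {
    if (acceptor.pendingFds.empty())
        return;                              // no more events: stop re-arming
    int fd = acceptor.pendingFds.front();
    acceptor.pendingFds.pop();
    acceptor.accepted.push_back(fd);         // stands in for _sep->startSession()
    acceptConnection(acceptor);              // re-arm, draining all pending events
}
```

The property to notice: one call to the handler drains every pending connection, which is exactly what the recursion inside acceptCb guarantees.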
Additionally, as the TransportLayerASIO class layout above shows, the class also exposes four interfaces: sourceMessage(...), sinkMessage(...), wait(Ticket&& ticket), and asyncWait(Ticket&& ticket, TicketCallback callback). Their parameters all tie in with the Ticket data dispatch submodule; the core implementation is:

```cpp
// Builds the data-receive ticket ASIOSourceTicket from the
// asioSession, expiration and message arguments.
Ticket TransportLayerASIO::sourceMessage(...) {
    ......
    auto asioSession = checked_pointer_cast<ASIOSession>(session);
    // Construct the ASIOSourceTicket from asioSession, expiration and message.
    auto ticket = stdx::make_unique<ASIOSourceTicket>(asioSession, expiration, message);
    return {this, std::move(ticket)};
}

// Builds the data-send ticket ASIOSinkTicket from the
// asioSession, expiration and message arguments.
Ticket TransportLayerASIO::sinkMessage(...) {
    auto asioSession = checked_pointer_cast<ASIOSession>(session);
    auto ticket = stdx::make_unique<ASIOSinkTicket>(asioSession, expiration, message);
    return {this, std::move(ticket)};
}

// Synchronous receive or send; ends up in ASIOSourceTicket::fill or ASIOSinkTicket::fill.
Status TransportLayerASIO::wait(Ticket&& ticket) {
    // Get the Ticket: ASIOSourceTicket for receive, ASIOSinkTicket for send.
    auto ownedASIOTicket = getOwnedTicketImpl(std::move(ticket));
    auto asioTicket = checked_cast<ASIOTicket*>(ownedASIOTicket.get());
    ......
    // Call the matching fill synchronously: ASIOSourceTicket::fill to receive,
    // ASIOSinkTicket::fill to send.
    asioTicket->fill(true, [&waitStatus](Status result) { waitStatus = result; });
    return waitStatus;
}

// Asynchronous receive or send; ends up in ASIOSourceTicket::fill or ASIOSinkTicket::fill.
void TransportLayerASIO::asyncWait(Ticket&& ticket, TicketCallback callback) {
    // Get the Ticket: ASIOSourceTicket for receive, ASIOSinkTicket for send.
    auto ownedASIOTicket = std::shared_ptr<TicketImpl>(getOwnedTicketImpl(std::move(ticket)));
    auto asioTicket = checked_cast<ASIOTicket*>(ownedASIOTicket.get());

    // Call the matching ASIOTicket::fill.
    asioTicket->fill(
        false, [ callback = std::move(callback),
                 ownedASIOTicket = std::move(ownedASIOTicket) ](Status status) { callback(status); });
}
```

The first two interfaces above use the session, expiration and message parameters to obtain the matching Ticket. In the mongodb kernel, receive tickets and send tickets are distinguished by the two subclasses ASIOSourceTicket and ASIOSinkTicket. The roles of the three parameters are summarized in the table below:

![](https://static001.geekbang.org/infoq/6e/6eafa3d9d2dbc8d209001fc367010c4f.png)

Data exchange is either synchronous or asynchronous: synchronous IO goes through TransportLayerASIO::wait(), asynchronous IO through TransportLayerASIO::asyncWait().

**Note:** these four interfaces link the TransportLayerASIO class with the Ticket send/receive classes.
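The difference between wait() and asyncWait() comes down to the boolean passed to fill(). A minimal model, assuming nothing about the real ASIOTicket beyond what the snippets above show (`ToyTicket` and its members are invented for illustration):

```cpp
#include <cassert>
#include <functional>
#include <vector>

using Status = int;                       // 0 == OK; stand-in for mongo::Status
using TicketCallback = std::function<void(Status)>;

// In sync mode the callback runs before fill() returns, so the caller can
// block on the result (TransportLayerASIO::wait). In async mode the callback
// is queued and runs later, driven by the io_context
// (TransportLayerASIO::asyncWait).
struct ToyTicket {
    std::vector<TicketCallback> deferred;

    void fill(bool sync, TicketCallback cb) {
        if (sync)
            cb(0);                              // complete inline
        else
            deferred.push_back(std::move(cb));  // complete later
    }
    void runDeferred() {                        // stands in for io_context::run()
        for (auto& cb : deferred) cb(0);
        deferred.clear();
    }
};
```

This is why wait() can capture the result into a local `waitStatus` by reference, while asyncWait() must move the callback (and the ticket's ownership) into the closure.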
## 2.2 Summary

Transport_layer_manager initializes the TransportLayer and the serviceExecutor. net.transportLayer can be configured as legacy or asio; legacy is obsolete, and the current kernel only supports asio. In asio mode the TransportLayer is implemented by TransportLayerASIO, and the serviceExecutor threading model can be either the adaptive dynamic-thread model or the synchronous model.

Socket creation, bind(), listen() and accept event registration are all implemented in this module. The Ticket data dispatch module also hooks in here; together they drive the subsequent synchronous and asynchronous read/write flows of the Ticket module. In addition, this module coordinates with the ServiceEntryPoint service entry submodule, guaranteeing that once socket initialization and accept event registration complete, the entry point submodule can process new connections in an orderly way.

Next we analyze the ServiceEntryPoint service entry submodule and the Ticket data dispatch submodule that this module interacts with.

# 3. service_entry_point submodule

The service_entry_point submodule is responsible for: new connection handling, session management, and the callback processing after a complete message has been received (message parsing, authentication, storage engine processing, and so on).

Its implementation consists mainly of the following files:

![](https://static001.geekbang.org/infoq/cf/cffc5b74689499f249cb454bb8c0d845.png)

All source files whose names start with service_entry_point belong to this module: service_entry_point_utils* creates the worker threads, while service_entry_point_impl* implements the new-connection callbacks and session management.

## 3.1 Core code

The code of this submodule is fairly compact; it consists of the ServiceEntryPointImpl class and the thread-creation function in service_entry_point_utils.

### 3.1.1 ServiceEntryPointImpl core code

The main members and interfaces of ServiceEntryPointImpl are:

```cpp
class ServiceEntryPointImpl : public ServiceEntryPoint {
    MONGO_DISALLOW_COPYING(ServiceEntryPointImpl);
public:
    // Constructor.
    explicit ServiceEntryPointImpl(ServiceContext* svcCtx);
    // The following three interfaces control session handling.
    void startSession(transport::SessionHandle session) final;
    void endAllSessions(transport::Session::TagMask tags) final;
    bool shutdown(Milliseconds timeout) final;
    // Session statistics.
    Stats sessionStats() const final;
    ......
private:
    // List managing all ServiceStateMachine instances.
    using SSMList = stdx::list<std::shared_ptr<ServiceStateMachine>>;
    // Iterator over SSMList.
    using SSMListIterator = SSMList::iterator;
    // Assigned in ServiceEntryPointImpl::ServiceEntryPointImpl;
    // a ServiceContextMongoD (mongod) or ServiceContextNoop (mongos).
    ServiceContext* const _svcCtx;
    // Unused in the current code.
    AtomicWord<std::size_t> _nWorkers;
    // Lock.
    mutable stdx::mutex _sessionsMutex;
    // One ssm per new connection, stored in ServiceEntryPointImpl._sessions.
    SSMList _sessions;
    // Maximum connection count.
    size_t _maxNumConnections{DEFAULT_MAX_CONN};
    // Current total connections, excluding closed ones.
    AtomicWord<size_t> _currentConnections{0};
    // All connections ever created, including closed ones.
    AtomicWord<size_t> _createdConnections{0};
};
```
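The connection-limit bookkeeping that these members support can be sketched as a small registry. `SessionRegistry` is a hypothetical stand-in built for illustration; the real _maxNumConnections default (DEFAULT_MAX_CONN) differs from the toy value used here:

```cpp
#include <cassert>
#include <cstddef>
#include <list>

// Toy admission logic: every new session goes on the list
// unless the configured ceiling is already reached.
struct SessionRegistry {
    std::list<int> sessions;                // stands in for SSMList _sessions
    std::size_t maxNumConnections = 3;      // toy value, not DEFAULT_MAX_CONN
    std::size_t createdConnections = 0;     // all links ever admitted

    bool startSession(int id) {
        std::size_t connectionCount = sessions.size() + 1;
        if (connectionCount > maxNumConnections)
            return false;                   // over the limit: reject the link
        sessions.push_front(id);            // emplace at the head, like _sessions
        ++createdConnections;
        return true;
    }
};
```

Note that the created-connections counter only grows for admitted links, matching the fact that _createdConnections.addAndFetch(1) sits inside the under-limit branch.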
The interfaces of this class mostly control session handling; the member variables are described below:

![](https://static001.geekbang.org/infoq/4d/4d55c0b95a1708b085050565238e27bc.png)

The most important interface of ServiceEntryPointImpl, startSession(), performs the internal callback processing for every new connection; it is implemented as follows:

```cpp
// Callback processing when a new connection arrives.
void ServiceEntryPointImpl::startSession(transport::SessionHandle session) {
    // Get the server-side and client-side addresses of the new connection.
    const auto& remoteAddr = session->remote().sockAddr();
    const auto& localAddr = session->local().sockAddr();
    // Record both addresses in the session.
    auto restrictionEnvironment = stdx::make_unique<RestrictionEnvironment>(*remoteAddr, *localAddr);
    RestrictionEnvironment::set(session, std::move(restrictionEnvironment));
    ......

    // Get the transportMode: kAsynchronous or kSynchronous.
    auto transportMode = _svcCtx->getServiceExecutor()->transportMode();
    // Build the ssm.
    auto ssm = ServiceStateMachine::create(_svcCtx, session, transportMode);
    { // This block counts connections and adds the ssm to the _sessions list.
        stdx::lock_guard<stdx::mutex> lk(_sessionsMutex);
        connectionCount = _sessions.size() + 1; // increment the connection count
        if (connectionCount <= _maxNumConnections) {
            // Save the new connection's session in the _sessions list:
            // one ssm per new connection, kept in ServiceEntryPointImpl._sessions.
            ssmIt = _sessions.emplace(_sessions.begin(), ssm);
            _currentConnections.store(connectionCount);
            _createdConnections.addAndFetch(1);
        }
    }
    // Over the connection limit: bail out.
    if (connectionCount > _maxNumConnections) {
        ......
        return;
    }
    // Cleanup handling when the connection closes.
    ssm->setCleanupHook([ this, ssmIt, session = std::move(session) ] {
        ......
    });
    // Determine whether the transport mode is synchronous or asynchronous,
    // i.e. the adaptive or the synchronous threading model.
    auto ownership = ServiceStateMachine::Ownership::kOwned;
    if (transportMode == transport::Mode::kSynchronous) {
        ownership = ServiceStateMachine::Ownership::kStatic;
    }
    // ServiceStateMachine::start: this is where the state machine module takes over.
    ssm->start(ownership);
}
```

After obtaining the server-side and client-side addresses of the connection and recording them in the session, the interface builds a service state machine ssm (ServiceStateMachine) from the session, transportMode and _svcCtx. One new connection corresponds to exactly one session, and one session to exactly one ssm; the three stay in a strict one-to-one relationship.

Ultimately, startSession() ties together the service entry submodule, the session submodule, and the ssm state machine submodule.

### 3.1.2 service_entry_point_utils core code

The service_entry_point_utils source file contains a single interface, launchServiceWorkerThread, which creates the worker threads and sets each worker's stack size: if the system default stack limit is larger than 1MB, the worker stack is set to 1MB; if the system limit is below 1MB, the system limit is used and a warning is logged. The function is implemented as follows:

```cpp
Status launchServiceWorkerThread(stdx::function<void()> task) {
    static const size_t kStackSize = 1024 * 1024; // 1MB
    struct rlimit limits;
    // Get the system stack limit.
    invariant(getrlimit(RLIMIT_STACK, &limits) == 0);
    // If the system limit is larger than 1MB, use a 1MB thread stack.
    if (limits.rlim_cur > kStackSize) {
        size_t stackSizeToSet = kStackSize;
        int failed = pthread_attr_setstacksize(&attrs, stackSizeToSet);
        if (failed) {
            const auto ewd = errnoWithDescription(failed);
            warning() << ......
        }
    }
    ......
    auto ctx = stdx::make_unique<stdx::function<void()>>(std::move(task));
    int failed = pthread_create(&thread, &attrs, runFunc, ctx.get());
    ......
}
```
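The stack-size decision described above reduces to a pure function. `chooseWorkerStackSize` is a sketch written for this article, under the assumption that only the soft RLIMIT_STACK value matters:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>

// Cap each worker thread's stack at 1MB, but never request more than the
// RLIMIT_STACK soft limit allows (the real code logs a warning on that path).
std::size_t chooseWorkerStackSize(std::size_t rlimitStack) {
    static const std::size_t kStackSize = 1024 * 1024;  // 1MB
    return std::min(kStackSize, rlimitStack);
}
```

Capping the per-thread stack matters because the thread-per-connection model can create thousands of workers, and the default stack limit (often 8MB) would waste address space.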
## 3.2 Summary

The service_entry_point submodule handles the callbacks for new connections and creates the worker threads; it cooperates with the session submodule and the SSM state machine submodule to drive the normal send/receive processing. The analysis above covered only the core interfaces; the table below summarizes all interfaces of the module:

![](https://static001.geekbang.org/infoq/36/361c2f04b4b66c616753bd12a4ffa8ae.png)

# 4. Ticket data send/receive submodule

The Ticket submodule is responsible for: invoking the session submodule to drive the underlying asio library, splitting receiving and sending into two separate classes, reading complete mongodb messages, and the callback processing after a mongodb message has been received or sent.

## 4.1 ASIOTicket core code

The Ticket send/receive functionality is implemented mainly by the ASIOTicket class, structured as follows:

```cpp
// ASIOSinkTicket and ASIOSourceTicket below inherit from this class,
// which controls data sending and receiving.
class TransportLayerASIO::ASIOTicket : public TicketImpl {
public:
    // Constructor.
    explicit ASIOTicket(const ASIOSessionHandle& session, Date_t expiration);
    // Get the sessionId.
    SessionId sessionId() const final {
        return _sessionId;
    }
    // Unused in asio mode; only relevant for the legacy model.
    Date_t expiration() const final {
        return _expiration;
    }

    // The following interfaces drive the send/receive processing.
    void fill(bool sync, TicketCallback&& cb);
protected:
    void finishFill(Status status);
    bool isSync() const;
    virtual void fillImpl() = 0;
private:
    // Session information: one session per connection.
    std::weak_ptr<ASIOSession> _session;
    // Each session has a unique id.
    const SessionId _sessionId;
    // Unused in the asio model; only relevant for legacy.
    const Date_t _expiration;
    // Callback run after the data has been sent or received.
    TicketCallback _fillCallback;
    // Whether to process data synchronously or asynchronously; async by default.
    bool _fillSync;
};
```

This class contains several member variables, described below:

![](https://static001.geekbang.org/infoq/9f/9f0008d66dbab9b0c1712377fde9b03f.png)

In the concrete implementation, mongodb separates receiving from sending: the receive class ASIOSourceTicket and the send class ASIOSinkTicket both inherit from ASIOTicket. Their main structure is:

```cpp
// Receive-side ticket.
class TransportLayerASIO::ASIOSourceTicket : public TransportLayerASIO::ASIOTicket {
public:
    // Constructor.
    ASIOSourceTicket(const ASIOSessionHandle& session, Date_t expiration, Message* msg);
protected:
    // Receive implementation.
    void fillImpl() final;
private:
    // Callback run after the mongodb header has been received.
    void _headerCallback(const std::error_code& ec, size_t size);
    // Callback run after the mongodb body has been received.
    void _bodyCallback(const std::error_code& ec, size_t size);

    // Buffer holding the raw bytes read from the network.
    SharedBuffer _buffer;
    // Message management; its data comes from _buffer.
    Message* _target;
};

// Send-side ticket.
class TransportLayerASIO::ASIOSinkTicket : public TransportLayerASIO::ASIOTicket {
public:
    // Constructor.
    ASIOSinkTicket(const ASIOSessionHandle& session, Date_t expiration, const Message& msg);
protected:
    // Send implementation.
    void fillImpl() final;
private:
    // Callback run after the data has been sent.
    void _sinkCallback(const std::error_code& ec, size_t size);
    // The message to send.
    Message _msgToSend;
};
```
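The length validation that _headerCallback performs can be sketched with the standard library. `bodyBytesNeeded` is a hypothetical helper written for this article; the 16-byte header matches the MongoDB wire format, the 48MB ceiling mirrors MaxMessageSizeBytes, and the memcpy assumes a little-endian host, matching the wire format's little-endian messageLength field:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

static const std::size_t kHeaderSize = 16;                    // MongoDB standard header
static const std::size_t kMaxMessageSize = 48 * 1000 * 1000;  // mirrors MaxMessageSizeBytes

// Given the header bytes, return how many body bytes are still owed,
// or -1 if the advertised messageLength is out of range.
long bodyBytesNeeded(const std::vector<std::uint8_t>& header) {
    if (header.size() < kHeaderSize)
        return -1;
    std::uint32_t msgLen;
    std::memcpy(&msgLen, header.data(), sizeof(msgLen));  // messageLength field
    if (msgLen < kHeaderSize || msgLen > kMaxMessageSize)
        return -1;                                        // too small or too large: reject
    return static_cast<long>(msgLen - kHeaderSize);
}
```

This is the two-phase shape of the receive path: read a fixed-size header, learn the total length from it, then read exactly the remaining body bytes.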
As the code above shows, the interfaces and members of ASIOSinkTicket and ASIOSourceTicket are almost identical; only the concrete method implementations differ. The core code of both classes is analyzed below.

### 4.1.1 ASIOSourceTicket receive-path core code

The core of the receive path is:

```cpp
// fillImpl for the receive path.
void TransportLayerASIO::ASIOSourceTicket::fillImpl() {
    // Get the session.
    auto session = getSession();
    if (!session)
        return;
    // First read the mongodb header; its length is the fixed kHeaderSize bytes.
    const auto initBufSize = kHeaderSize;
    _buffer = SharedBuffer::allocate(initBufSize);

    // Call TransportLayerASIO::ASIOSession::read to read raw bytes into _buffer;
    // once the header has been read, run the _headerCallback callback.
    session->read(isSync(),
                  asio::buffer(_buffer.get(), initBufSize), // read the header first
                  [this](const std::error_code& ec, size_t size) { _headerCallback(ec, size); });
}

// Callback run after the mongodb header has been read.
void TransportLayerASIO::ASIOSourceTicket::_headerCallback(const std::error_code& ec, size_t size) {
    ......
    // Get the session.
    auto session = getSession();
    if (!session)
        return;
    // Read the header out of _buffer.
    MSGHEADER::View headerView(_buffer.get());
    // Get the message length.
    auto msgLen = static_cast<size_t>(headerView.getMessageLength());
    // Error out if the length is too small or too large.
    if (msgLen < kHeaderSize || msgLen > MaxMessageSizeBytes) {
        ......
        return;
    }
    ......
    // Not yet a full mongodb message: keep reading body-length bytes; body
    // processing starts once that read completes. Note the realloc here,
    // which keeps header and body in the same buffer.
    _buffer.realloc(msgLen);
    MsgData::View msgView(_buffer.get());

    // Call TransportLayerASIO::ASIOSession::read to read the message body.
    session->read(isSync(),
                  // Read the data into this buffer.
                  asio::buffer(msgView.data(), msgView.dataLen()),
                  // Callback run on a successful read.
                  [this](const std::error_code& ec, size_t size) { _bodyCallback(ec, size); });
}

// _headerCallback parsed the header to get the message length; the body is handled here.
void TransportLayerASIO::ASIOSourceTicket::_bodyCallback(const std::error_code& ec, size_t size) {
    ......
    // Hand the buffer over to _target.
    _target->setData(std::move(_buffer));
    // Traffic statistics.
    networkCounter.hitPhysicalIn(_target->size());
    // TransportLayerASIO::ASIOTicket::finishFill: once the body has been read,
    // the next stage is message processing, starting in
    // ServiceStateMachine::_processMessage.
    finishFill(Status::OK());
}
```

A mongodb message consists of a msg header plus a msg body; a complete message looks like this:

![](https://static001.geekbang.org/infoq/aa/aa6ca4e0c1ac1cb244059055d43d6b25.png)

The fields and the body shown above are described in the table below:

![](https://static001.geekbang.org/infoq/64/647ce0dc2e22635633a6d1f9aa5d92c5.png)

The core interfaces of ASIOSourceTicket all revolve around this layout; the complete receive flow is:
Read the mongodb header and parse the messageLength field out of it."}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"2. Check that messageLength lies in the legal range: it cannot be smaller than the header itself, nor larger than MaxMessageSizeBytes."}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"3. Once the header length check passes, the header has been read in full, and the _headerCallback is executed."}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"4. realloc a larger buffer to hold the body content."}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"5. Keep reading until body len bytes have arrived; once the body is complete, the _bodyCallback is executed."}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"3.1.3 ASIOSinkTicket: core data-send code"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" Compared with the receive class, ASIOSinkTicket has no message-parsing logic, so its implementation is simpler. The source is as follows:"}]},{"type":"codeblock","attrs":{"lang":"cpp"},"content":[{"type":"text","text":"//callback run after the data has been sent \nvoid TransportLayerASIO::ASIOSinkTicket::_sinkCallback(const std::error_code& ec, size_t size) { \n  //account for the bytes written to the network \n  networkCounter.hitPhysicalOut(_msgToSend.size()); \n  //drive the corresponding state flow in the SSM \n  finishFill(ec ? errorCodeToStatus(ec) : Status::OK()); \n} \n \n//fillImpl for sending data \nvoid TransportLayerASIO::ASIOSinkTicket::fillImpl() { \n  //get the session \n  auto session = getSession(); \n  if (!session) \n    return; \n \n  //TransportLayerASIO::ASIOSession::write sends the data; on success _sinkCallback runs \n  session->write(isSync(), \n    asio::buffer(_msgToSend.buf(), _msgToSend.size()), \n    //callback run after the data has been sent \n    [this](const std::error_code& ec, size_t size) { _sinkCallback(ec, size); }); \n} "}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"3.2 Summary"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" As the analysis above shows, the Ticket send/receive module relies on the session module for the underlying reads, writes and parsing; once a complete mongodb message has been read or sent, control is handed over to the SSM service state machine for scheduling."}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" The main interfaces of the ticket module are summarized in the table below:"}]},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/95/95d5fff28a72434f0034a6770e1e6aec.png","alt":null,"title":null,"style":null,"href":null,"fromPaste":true,"pastePass":true}},{"type":"heading","attrs":{"align":null,"level":1},"content":[{"type":"text","text":"4. Session subsystem"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" The session module records the HostAndPort and interacts directly with the underlying asio library to implement synchronous and asynchronous data transfer. Each new connection fd corresponds to exactly one session, and operations on the fd map directly onto session operations. The main source files of the session subsystem are:"}]},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/7e/7ebbdfb2c99b28c2c01b529060bbccde.png","alt":null,"title":null,"style":null,"href":null,"fromPaste":true,"pastePass":true}},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"4.1 Core session code"}]},{"type":"codeblock","attrs":{"lang":"cpp"},"content":[{"type":"text","text":"class TransportLayerASIO::ASIOSession : public Session { \n  //constructor \n  ASIOSession(TransportLayerASIO* tl, GenericSocket socket); \n  //the transport layer this session belongs to \n  TransportLayer* getTransportLayer(); \n  //the next four interfaces are socket-related: local/remote address, raw fd, shutdown \n  const HostAndPort& remote(); \n  const HostAndPort& local(); \n  
GenericSocket& getSocket(); \n  void shutdown(); \n \n  //the next four interfaces use the asio library for synchronous and asynchronous reads/writes \n  void read(...) \n  void write(...) \n  void opportunisticRead(...) \n  void opportunisticWrite(...) \n \n  //remote address \n  HostAndPort _remote; \n  //local address \n  HostAndPort _local; \n  //assigned in TransportLayerASIO::_acceptConnection; \n  //this is the fd: one new connection, one _socket \n  GenericSocket _socket; \n  //SSL members, not analyzed here \n#ifdef MONGO_CONFIG_SSL \n  boost::optional<asio::ssl::stream<decltype(_socket)>> _sslSocket; \n  bool _ranHandshake = false; \n#endif \n \n  //the tl owning this socket, assigned in TransportLayerASIO::_acceptConnection(...) \n  TransportLayerASIO* const _tl; \n}; "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" The three most important interfaces of this class are ASIOSession(...), opportunisticRead(...) and opportunisticWrite(...), which handle socket setup, low-level reads through the asio library, and low-level writes respectively. Their implementations are as follows:"}]},{"type":"codeblock","attrs":{"lang":"cpp"},"content":[{"type":"text","text":"//constructor, invoked from TransportLayerASIO::_acceptConnection \nASIOSession(TransportLayerASIO* tl, GenericSocket socket) \n  //initialize the fd and the owning TL \n  : _socket(std::move(socket)), _tl(tl) { \n  std::error_code ec; \n \n  //in async mode the socket is switched to non-blocking mode \n  _socket.non_blocking(_tl->_listenerOptions.transportMode == Mode::kAsynchronous, ec); \n  fassert(40490, ec.value() == 0); \n \n  //address family of the socket \n  auto family = endpointToSockAddr(_socket.local_endpoint()).getType(); \n  //for AF_INET / AF_INET6 sockets \n  if (family == AF_INET || family == AF_INET6) { \n    //set the no_delay and keep_alive socket options \n    _socket.set_option(asio::ip::tcp::no_delay(true)); \n    _socket.set_option(asio::socket_base::keep_alive(true)); \n    //tune the kernel keepalive parameters \n    setSocketKeepAliveParams(_socket.native_handle()); \n  } \n \n  //record the local and remote addresses \n  _local = endpointToHostAndPort(_socket.local_endpoint()); \n  _remote = endpointToHostAndPort(_socket.remote_endpoint(ec)); \n  if (ec) { \n    LOG(3) << ...... \n  } \n} \n \n//read data \nvoid opportunisticRead(...) { \n  std::error_code ec; \n  //sync mode: blocking read; async mode: non-blocking read \n  auto size = asio::read(stream, buffers, ec); \n \n  //an async read returning would_block/try_again has not filled the buffer yet \n  if ((ec == asio::error::would_block || ec == asio::error::try_again) && !sync) { \n    //asio::read may already have consumed part of the data, \n    //so the async continuation must start past those bytes \n    MutableBufferSequence asyncBuffers(buffers); \n    if (size > 0) { \n      asyncBuffers += size; //advance the buffer offset \n    } \n \n    //keep reading asynchronously; handler runs once the requested length has arrived \n    asio::async_read(stream, asyncBuffers, std::forward<CompletionHandler>(handler)); \n  } else { \n    //a blocking read only returns once size bytes have been read, \n    //so the handler can be invoked directly \n    handler(ec, size); \n  } \n} "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" opportunisticRead first calls asio::read(stream, buffers, ec) to read data into buffers, whose capacity is exactly the number of bytes to read. Under the synchronous thread model this is a blocking read that only returns once size bytes have arrived. Under the asynchronous model it is a non-blocking read: if the kernel network stack runs out of data before size bytes have been read, async_read is issued to read the remainder asynchronously."}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" Once the size bytes described by buffers have been read in full, both the synchronous and the asynchronous paths end up invoking the handler callback, which starts the subsequent parsing and processing."}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":" The core send code is as follows:"}]},{"type":"codeblock","attrs":{"lang":"cpp"},"content":[{"type":"text","text":"//send data \nvoid opportunisticWrite(...) { \n  std::error_code ec; \n  //sync mode: blocking write that only returns once everything is sent; async mode: non-blocking \n  auto size = asio::write(stream, buffers, ec); \n \n  //an async write returning try_again has not sent everything yet; keep sending asynchronously \n  if ((ec == asio::error::would_block || ec == asio::error::try_again) && !sync) { \n    ConstBufferSequence asyncBuffers(buffers); \n    if (size > 0) { //skip the bytes already written \n      asyncBuffers += size; \n    } \n    //handler runs once the async write has completed \n    asio::async_write(stream, asyncBuffers, std::forward<CompletionHandler>(handler)); \n  } else { \n    //the sync write succeeded; invoke the handler directly \n    handler(ec, size); \n  } \n} "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" Sending mirrors receiving: there is a synchronous mode and an asynchronous mode. A synchronous send is a blocking write that only returns once asio::write() has pushed out all the data; an asynchronous send is a non-blocking write, so asio::write() may leave part of the data unsent, in which case asio::async_write() is called to send the remainder asynchronously."}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" In both modes, once the data has been sent in full, the handler() callback runs to drive the subsequent flow."}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"4.2 Summary"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" As the code above shows, the session module is the layer that talks directly to the asio library to perform the actual reads and writes. Its core interfaces are summarized in the table below:"}]},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/43/43cd025d45487b2ca4edf3c54b3080ea.png","alt":null,"title":null,"style":null,"href":null,"fromPaste":true,"pastePass":true}},{"type":"heading","attrs":{"align":null,"level":1},"content":[{"type":"text","text":"5. 
Summary"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "},{"type":"link","attrs":{"href":"https://my.oschina.net/u/4087916/blog/4295038","title":null},"content":[{"type":"text","marks":[{"type":"underline"}],"text":"《Mongodb网络传输处理源码实现及性能调优-体验内核性能极致设计》"}]},{"type":"text","text":" analyzed in detail the ASIO library implementation and the service_executor subsystem (the threading-model subsystem) of the mongodb network transport module; on top of that, this article has covered the transport_layer socket-handling and transport-management subsystem, the session subsystem, the Ticket send/receive subsystem, and the service_entry_point subsystem."}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" The transport_layer subsystem consists mainly of the transport_layer_manager and transport_layer_asio classes, which respectively initialize the net-related configuration and handle socket setup and processing; in the end, the interfaces of transport_layer_asio are what tie it to the ticket send/receive subsystem and the service entry point subsystem."}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" The service entry point subsystem consists of the ServiceEntryPointImpl class plus the thread-creation functions in service_entry_point_utils, and implements the accepting and control of new connections. Through startSession() it links the entry point subsystem, the session subsystem and the ssm state machine subsystem together."}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" The ticket send/receive subsystem does the following: it calls into the session subsystem to drive the underlying asio library, splits receiving and sending into two classes, reads complete mongodb messages, and performs the callback handling that follows receiving or sending a message. Those callbacks are managed by the SSM service state machine, which takes over scheduling once a complete mongodb message has been read or sent."}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" The session module records the HostAndPort and interacts directly with the underlying asio library to implement synchronous and asynchronous data transfer. Each new connection fd corresponds to exactly one session, and operations on the fd map directly onto session operations."}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"At this point, the only part of the mongodb network transport layer left unexamined is the service_state_machine scheduling subsystem. The state machine subsystem is more complex than the subsystems analyzed here, so it will be covered separately in the next article, 《mongodb网络传输层模块源码分析三》."}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" All annotated source code referenced in this article can be found at: "},{"type":"link","attrs":{"href":"https://github.com/y123456yz/reading-and-annotate-mongodb-3.6.1","title":null},"content":[{"type":"text","marks":[{"type":"underline"}],"text":"detailed source-code analysis of the mongodb network transport module"}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}}]}
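Appendix A. The two-phase receive flow described in section 3.1.2 — read a fixed-size header, validate messageLength, then grow the buffer and read the body — can be sketched as standalone C++. This is an illustrative sketch only, not MongoDB code: the names kHeaderSizeSketch, kMaxMessageSizeBytesSketch, parseMessageLength and growForBody are invented here, the constant values are merely representative, and it assumes a little-endian host, matching the wire format.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

// Simplified stand-ins for the real constants (values are illustrative).
static const size_t kHeaderSizeSketch = 16;                 // a mongodb MSGHEADER is 16 bytes
static const size_t kMaxMessageSizeBytesSketch = 48 * 1000 * 1000;

// messageLength is the first little-endian int32 of the header
// (memcpy matches the wire format on a little-endian host).
inline int32_t parseMessageLength(const std::vector<uint8_t>& header) {
    int32_t len = 0;
    std::memcpy(&len, header.data(), sizeof(len));
    return len;
}

// Mirrors the _headerCallback logic: validate the advertised length, then grow
// the buffer (the realloc step) so header and body share one contiguous buffer.
// Returns false when the length is outside the legal range.
inline bool growForBody(std::vector<uint8_t>& buffer) {
    const int32_t msgLen = parseMessageLength(buffer);
    if (msgLen < static_cast<int32_t>(kHeaderSizeSketch) ||
        static_cast<size_t>(msgLen) > kMaxMessageSizeBytesSketch)
        return false;
    buffer.resize(static_cast<size_t>(msgLen));  // analogous to _buffer.realloc(msgLen)
    return true;
}
```

Here growForBody plays the role of the length check plus the _buffer.realloc(msgLen) step; a caller would then issue a second read for the remaining msgLen minus header-size bytes into the tail of the same buffer.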
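Appendix B. Similarly, the try-sync-then-go-async control flow of opportunisticWrite (section 4.1) can be imitated with a toy stream. Everything here — ToyStream, opportunisticWriteSketch, the fake would_block behaviour — is invented for illustration and only mirrors the shape of the logic; the real code hands the remainder to asio::async_write instead of finishing it inline.

```cpp
#include <algorithm>
#include <cassert>
#include <functional>
#include <string>
#include <system_error>

// A toy stream that accepts at most `capacity` bytes per write and then
// reports would_block, mimicking a non-blocking socket with a full buffer.
struct ToyStream {
    std::string sent;
    size_t capacity = 4;
    size_t write_some(const std::string& data, std::error_code& ec) {
        const size_t n = std::min(capacity, data.size());
        sent += data.substr(0, n);
        ec = (n < data.size())
                 ? std::make_error_code(std::errc::operation_would_block)
                 : std::error_code();
        return n;
    }
};

// Mirrors the shape of opportunisticWrite: try a synchronous write first; on
// would_block, continue with the bytes past `size` (the asyncBuffers += size
// step) and invoke the handler once everything is out. The "async"
// continuation is just a second inline write for demonstration purposes.
inline void opportunisticWriteSketch(ToyStream& s, const std::string& data,
                                     std::function<void(std::error_code, size_t)> handler) {
    std::error_code ec;
    const size_t size = s.write_some(data, ec);
    if (ec == std::errc::operation_would_block) {
        const std::string rest = data.substr(size);  // skip bytes already written
        s.capacity = rest.size();                    // pretend the socket drained
        std::error_code ec2;
        const size_t more = s.write_some(rest, ec2);
        handler(ec2, size + more);
    } else {
        handler(ec, size);  // the sync write pushed everything out at once
    }
}
```

The key detail this preserves is that a partial synchronous write is not an error: the already-written prefix is skipped and only the remainder is handed to the asynchronous path, exactly as asyncBuffers += size does in the real code.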