MongoDB: mongod startup and the relationships between its core classes

Table of Contents

mongod startup

Location of the main function

Core class: TransportLayerASIO

ServiceEntryPointMongod handles client commands

Core class: ServiceStateMachine

ServiceStateMachine: receiving messages with _sourceMessage

ServiceStateMachine: processing messages with _processMessage

Core class: task-scheduling executor ServiceExecutorReserved


This article analyzes the mongod startup process: how the network layer, the task thread pool, the service entry point, the service state machine, and client sessions are initialized and run. It deliberately leaves out other core topics such as the mongos proxy, replica sets, and sharding. A simplified sketch of how the main pieces fit together follows the list below.

  • TransportLayerASIO: accepts client connections and sends/receives packets over asynchronous I/O.
  • ServiceStateMachine: maintains the state of one client session: receive a message, process it, send the response. Processing is delegated to ServiceEntryPointMongod, while receiving and sending go through TransportLayerASIO; all three kinds of tasks run on threads provided by ServiceExecutorReserved.
  • ServiceEntryPoint: executes the client's commands and supports every server command.
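
To make the relationships concrete before diving into the real source, here is a highly simplified, self-contained C++ sketch of the three roles. MockTransport, MockEntryPoint, and MockStateMachine are invented stand-in names, not MongoDB classes; in the real server the wiring goes through ServiceContext and the service executor, as the rest of this article shows.

#include <iostream>
#include <string>

// Role of ServiceEntryPointMongod: turn a request message into a response.
struct MockEntryPoint {
    std::string handleRequest(const std::string& msg) {
        return "reply-to:" + msg;
    }
};

// Role of TransportLayerASIO: read and write raw messages for one client.
struct MockTransport {
    std::string sourceMessage() {
        return "ping";
    }
    void sinkMessage(const std::string& msg) {
        std::cout << "sent: " << msg << "\n";
    }
};

// Role of ServiceStateMachine: drive source -> process -> sink for one session.
struct MockStateMachine {
    MockTransport& transport;
    MockEntryPoint& entryPoint;

    void runOnce() {
        std::string in = transport.sourceMessage();      // Source / SourceWait
        std::string out = entryPoint.handleRequest(in);  // Process
        transport.sinkMessage(out);                      // SinkWait
    }
};

int main() {
    MockTransport transport;
    MockEntryPoint entryPoint;
    MockStateMachine ssm{transport, entryPoint};
    ssm.runOnce();  // one request/response round trip
}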

Location of the main function

src/mongo/db/mongod_main.cpp

int mongod_main(int argc, char* argv[]) {
    // Forbid multithreading during early startup
    ThreadSafetyContext::getThreadSafetyContext()->forbidMultiThreading();

    registerShutdownTask(shutdownTask);
    // Signal handling
    setupSignalHandlers();

    srand(static_cast<unsigned>(curTimeMicros64()));

    Status status = mongo::runGlobalInitializers(std::vector<std::string>(argv, argv + argc));
    // Create the most important object, the ServiceContext, and store it in a global
    auto* service = [] {
        try {
            auto serviceContextHolder = ServiceContext::make();
            auto* serviceContext = serviceContextHolder.get();
            setGlobalServiceContext(std::move(serviceContextHolder));

            return serviceContext;
        } catch (...) {
            auto cause = exceptionToStatus();
            quickExit(EXIT_FAILURE);
        }
    }();

    setUpCollectionShardingState(service);
    setUpCatalog(service);
    setUpReplication(service);
    setUpObservers(service);
    // The services provided (that is, the supported commands) come from the ServiceEntryPointMongod class
    service->setServiceEntryPoint(std::make_unique<ServiceEntryPointMongod>(service));
    startSignalProcessingThread();
    // Read/write concern defaults
    ReadWriteConcernDefaults::create(service, readWriteConcernDefaultsCacheLookupMongoD);
    // Initialize and listen on the port
    ExitCode exitCode = initAndListen(service, serverGlobalParams.port);
    exitCleanly(exitCode);
    return 0;
}

In src/mongo/db/mongod_main.cpp, initAndListen simply calls _initAndListen(service, listenPort);

_initAndListen listens on the network port and initializes the ServiceEntryPoint and the TransportLayer. ServiceEntryPoint->start() launches the worker threads, and TransportLayer->start() begins accepting requests.

ExitCode _initAndListen(ServiceContext* serviceContext, int listenPort) {
    Client::initThread("initandlisten"); // Initialize a Client, but without a SessionHandle

    initWireSpec();

    // Use a cheap but imprecise clock source; here it is accurate only to 10ms
    serviceContext->setFastClockSource(FastClockSourceFactory::create(Milliseconds(10)));

    DBDirectClientFactory::get(serviceContext).registerImplementation([](OperationContext* opCtx) {
        return std::unique_ptr<DBClientBase>(new DBDirectClient(opCtx));
    });

    const repl::ReplSettings& replSettings =
        repl::ReplicationCoordinator::get(serviceContext)->getSettings();
    // ServiceEntryPointMongod is the class that actually serves requests
    serviceContext->setServiceEntryPoint(std::make_unique<ServiceEntryPointMongod>(serviceContext));
    // Operation context
    auto startupOpCtx = serviceContext->makeOperationContext(&cc());
    // A periodic runner for recurring jobs
    auto runner = makePeriodicRunner(serviceContext);
    serviceContext->setPeriodicRunner(std::move(runner));
    // Start the thread pool
    OCSPManager::get()->startThreadPool();
    if (!storageGlobalParams.repair) {  // Not repair (maintenance) mode, so open the port
        auto tl =
            transport::TransportLayerManager::createWithConfig(&serverGlobalParams, serviceContext);
        auto res = tl->setup();  // The listening port is set up here
        if (!res.isOK()) {
            return EXIT_NET_ERROR;
        }
        serviceContext->setTransportLayer(std::move(tl));
    }
    FlowControl::set(serviceContext,
                     std::make_unique<FlowControl>(
                         serviceContext, repl::ReplicationCoordinator::get(serviceContext)));

    initializeStorageEngine(serviceContext, StorageEngineInitFlags::kNone);
    StorageControl::startStorageControls(serviceContext);
    initializeSNMP();

    startWatchdog(serviceContext);
    // The service executor schedules every request
    auto start = serviceContext->getServiceExecutor()->start();
    // Start all service entry points; this also starts the executor threads:
    // std::unique_ptr<transport::ServiceExecutorReserved> _adminInternalPool;
    start = serviceContext->getServiceEntryPoint()->start();
    // The transport layer starts accepting clients
    start = serviceContext->getTransportLayer()->start();
    serviceContext->notifyStartupComplete();
    return waitForShutdown();
}

Open question: mongod_main calls service->setServiceEntryPoint(std::make_unique<ServiceEntryPointMongod>(service)), and _initAndListen calls serviceContext->setServiceEntryPoint(std::make_unique<ServiceEntryPointMongod>(serviceContext)) again. Why is the service entry point set twice?

The class mongo::transport::TransportLayerASIO listens on the port, accepts client connections, and sends and receives data, using asynchronous I/O directly. The ServiceEntryPoint (the ServiceEntryPointMongod object sep) is passed into TransportLayerASIO, which eventually calls sep->handleRequest() on the data it receives.

The call in _initAndListen, auto tl = transport::TransportLayerManager::createWithConfig(&serverGlobalParams, serviceContext);, is implemented as follows:

std::unique_ptr<TransportLayer> TransportLayerManager::createWithConfig(
    const ServerGlobalParams* config, ServiceContext* ctx) {
    std::unique_ptr<TransportLayer> transportLayer;
    auto sep = ctx->getServiceEntryPoint();  // The service entry point
    transport::TransportLayerASIO::Options opts(config);
    // Asynchronous I/O is used directly
    auto transportLayerASIO = std::make_unique<transport::TransportLayerASIO>(opts, sep);  // Create the transport layer; its setup() method listens on the port. sep is passed in so client commands can be handled via sep->handleRequest()

    transportLayer = std::move(transportLayerASIO);

    std::vector<std::unique_ptr<TransportLayer>> retVector;
    retVector.emplace_back(std::move(transportLayer));
    return std::make_unique<TransportLayerManager>(std::move(retVector));
}

 

Core class: TransportLayerASIO

It listens on the port, accepts connections, and creates an ASIOSession to handle each client.



Status TransportLayerASIO::start() {
    stdx::unique_lock lk(_mutex);

    // Make sure we haven't shutdown already
    invariant(!_isShutdown);

    if (_listenerOptions.isIngress()) {
        // _runListener here is what accepts client connections
        _listener.thread = stdx::thread([this] { _runListener(); });
        _listener.cv.wait(lk, [&] { return _isShutdown || _listener.active; });
        return Status::OK();
    }

    invariant(_acceptors.empty());
    return Status::OK();
}

void TransportLayerASIO::_runListener() noexcept {
    setThreadName("listener");

    stdx::unique_lock lk(_mutex);
    if (_isShutdown) {
        return;
    }

    for (auto& acceptor : _acceptors) {
        asio::error_code ec;
        acceptor.second.listen(serverGlobalParams.listenBacklog, ec);  // Listen on the port
        _acceptConnection(acceptor.second);  // Accept connections and handle client data
        LOGV2(23015, "Listening on", "address"_attr = acceptor.first.getAddr());
    }

    const char* ssl = "off";
#ifdef MONGO_CONFIG_SSL
    if (_sslMode() != SSLParams::SSLMode_disabled) {
        ssl = "on";
    }
#endif
    LOGV2(23016, "Waiting for connections", "port"_attr = _listenerPort, "ssl"_attr = ssl);

    _listener.active = true;
    _listener.cv.notify_all();
    ON_BLOCK_EXIT([&] {
        _listener.active = false;
        _listener.cv.notify_all();
    });

    while (!_isShutdown) {
        lk.unlock();
        _acceptorReactor->run();  // Run the event loop
        lk.lock();
    }
}


void TransportLayerASIO::_acceptConnection(GenericAcceptor& acceptor) {
    // Callback invoked when a connection arrives; it is called from the _acceptorReactor->run() event loop
    auto acceptCb = [this, &acceptor](const std::error_code& ec, GenericSocket peerSocket) mutable {
        if (auto lk = stdx::lock_guard(_mutex); _isShutdown) {
            return;
        }

        if (ec) {
            _acceptConnection(acceptor);
            return;
        }

        try {
            // Create and start a new session; one session represents one client
            std::shared_ptr<ASIOSession> session(
                new ASIOSession(this, std::move(peerSocket), true));  // The session keeps a reference to the TransportLayerASIO
            _sep->startSession(std::move(session));  // _sep is the ServiceEntryPointMongod passed in from initAndListen
        } catch (const DBException& e) {
           
        }
        // Re-arm the accept so it forms a loop: once a connection is accepted and handed off
        // to the executor for scheduling, the callback registers itself with the asynchronous
        // I/O service again.
        _acceptConnection(acceptor);
    };  // still inside the callback definition
    // Accept asynchronously; acceptCb is invoked once a connection arrives
    acceptor.async_accept(*_ingressReactor, std::move(acceptCb));
}
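
The accept loop above re-arms itself from inside its own completion handler. The following standalone sketch shows the same pattern with plain Boost.Asio (an assumption for illustration: mongod wraps asio in its own reactor and session types rather than using boost::asio directly, and port 27018 here is arbitrary).

#include <boost/asio.hpp>
#include <iostream>

namespace net = boost::asio;
using tcp = net::ip::tcp;

// Keep accepting: each completed accept handler immediately re-arms the acceptor,
// mirroring how acceptCb calls _acceptConnection(acceptor) again at its end.
void acceptLoop(tcp::acceptor& acceptor) {
    acceptor.async_accept([&acceptor](const boost::system::error_code& ec, tcp::socket peer) {
        if (!ec) {
            // In mongod this is where an ASIOSession would be created and handed
            // to ServiceEntryPoint::startSession().
            std::cout << "accepted connection from " << peer.remote_endpoint() << "\n";
        }
        acceptLoop(acceptor);  // re-register, forming the accept loop
    });
}

int main() {
    net::io_context ioc;  // plays the role of the acceptor reactor
    tcp::acceptor acceptor(ioc, tcp::endpoint(tcp::v4(), 27018));
    acceptLoop(acceptor);
    ioc.run();  // analogous to _acceptorReactor->run() in _runListener()
}

Each completed accept schedules the next async_accept before returning, so the listener never blocks anywhere except inside the reactor's run() loop.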


ServiceEntryPointMongod handles client commands

After TransportLayerASIO accepts a client connection, it creates an ASIOSession and hands it to ServiceEntryPointMongod to start. ServiceEntryPointMongod inherits from ServiceEntryPointImpl, so ServiceEntryPointImpl::startSession() ends up handling the session. Inside startSession(), a ServiceStateMachine is created to drive the client connection.

Commands are ultimately handled by ServiceEntryPointCommon::handleRequest(opCtx, msg, Hooks{}), which returns the result as a DbResponse.


class ServiceEntryPointImpl : public ServiceEntryPoint {
    // Non-copyable
    ServiceEntryPointImpl(const ServiceEntryPointImpl&) = delete;
    ServiceEntryPointImpl& operator=(const ServiceEntryPointImpl&) = delete;

public:
    /*explicit*/ ServiceEntryPointImpl(ServiceContext* svcCtx);
    // Start a session. In TransportLayerASIO, after _acceptConnection an ASIOSession is created and this function is called to begin sending and receiving data
    void startSession(transport::SessionHandle session) override;

    void endAllSessions(transport::Session::TagMask tags) final;

    Status start() final;
    bool shutdown(Milliseconds timeout) final;

    void appendStats(BSONObjBuilder* bob) const final;

    size_t numOpenSessions() const final {
        return _currentConnections.load();
    }

private:
    // ServiceStateMachine drives the state of a connection: source, process, sink
    using SSMList = std::list<std::shared_ptr<ServiceStateMachine>>;
    using SSMListIterator = SSMList::iterator;

    ServiceContext* const _svcCtx;    // The service context, representing the whole server
    AtomicWord<std::size_t> _nWorkers;
    // Protects the session list
    mutable Mutex _sessionsMutex =
        MONGO_MAKE_LATCH(HierarchicalAcquisitionLevel(0), "ServiceEntryPointImpl::_sessionsMutex");
    stdx::condition_variable _shutdownCondition;
    SSMList _sessions;  // All sessions, i.e. one ServiceStateMachine per client

    size_t _maxNumConnections{DEFAULT_MAX_CONN};
    AtomicWord<size_t> _currentConnections{0};
    AtomicWord<size_t> _createdConnections{0};

    std::unique_ptr<transport::ServiceExecutorReserved> _adminInternalPool;
};

// Start a session
void ServiceEntryPointImpl::startSession(transport::SessionHandle session) {
    // Setup the restriction environment on the Session, if the Session has local/remote Sockaddrs
    const auto& remoteAddr = session->remoteAddr();
    const auto& localAddr = session->localAddr();
    invariant(remoteAddr.isValid() && localAddr.isValid());
    auto restrictionEnvironment = std::make_unique<RestrictionEnvironment>(remoteAddr, localAddr);
    RestrictionEnvironment::set(session, std::move(restrictionEnvironment));

    SSMListIterator ssmIt;

    const bool quiet = serverGlobalParams.quiet.load();
    size_t connectionCount;
    auto transportMode = _svcCtx->getServiceExecutor()->transportMode();
    // Create a ServiceStateMachine and add it to the _sessions list
    auto ssm = ServiceStateMachine::create(_svcCtx, session, transportMode);
    auto usingMaxConnOverride = false;
    {
        stdx::lock_guard<decltype(_sessionsMutex)> lk(_sessionsMutex);
        connectionCount = _sessions.size() + 1;
        if (connectionCount > _maxNumConnections) {
            usingMaxConnOverride =
                shouldOverrideMaxConns(session, serverGlobalParams.maxConnsOverride);
        }

        if (connectionCount <= _maxNumConnections || usingMaxConnOverride) {
            ssmIt = _sessions.emplace(_sessions.begin(), ssm);
            _currentConnections.store(connectionCount);
            _createdConnections.addAndFetch(1);
        }
    }

    // Checking if we successfully added a connection above. Separated from the lock so we don't log
    // while holding it.
    if (connectionCount > _maxNumConnections && !usingMaxConnOverride) {
        if (!quiet) {
            LOGV2(22942,
                  "connection refused because too many open connections",
                  "connectionCount"_attr = connectionCount);
        }
        return;
    } else if (usingMaxConnOverride && _adminInternalPool) {
        // Choose the executor (the reserved admin pool) to run this state machine
        ssm->setServiceExecutor(_adminInternalPool.get());
    }

    if (!quiet) {
        LOGV2(22943,
              "connection accepted",
              "remote"_attr = session->remote(),
              "sessionId"_attr = session->id(),
              "connectionCount"_attr = connectionCount);
    }
    // Clean up after the state machine finishes
    ssm->setCleanupHook([this, ssmIt, quiet, session = std::move(session)] {
        size_t connectionCount;
        auto remote = session->remote();
        {
            stdx::lock_guard<decltype(_sessionsMutex)> lk(_sessionsMutex);
            _sessions.erase(ssmIt);
            connectionCount = _sessions.size();
            _currentConnections.store(connectionCount);
        }
        _shutdownCondition.notify_one();
    });

    auto ownership = ServiceStateMachine::Ownership::kOwned;
    if (transportMode == transport::Mode::kSynchronous) {
        ownership = ServiceStateMachine::Ownership::kStatic;
    }
    // Start the state machine; from here on, sourcing, processing, and sinking are all driven by it
    ssm->start(ownership);
}
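
The cleanup hook above captures the list iterator of the session it belongs to, so the session can remove itself from the registry when it ends. A minimal sketch of that pattern, using invented names (SessionRegistry, FakeSession) rather than MongoDB types:

#include <functional>
#include <iostream>
#include <list>
#include <memory>
#include <mutex>

struct FakeSession {
    std::function<void()> cleanupHook;  // set by the registry, run when the session ends
    void end() {
        if (cleanupHook)
            cleanupHook();
    }
};

class SessionRegistry {
public:
    std::shared_ptr<FakeSession> startSession() {
        auto session = std::make_shared<FakeSession>();
        std::list<std::shared_ptr<FakeSession>>::iterator it;
        {
            std::lock_guard<std::mutex> lk(_mutex);
            it = _sessions.emplace(_sessions.begin(), session);
        }
        // The hook captures the iterator, mirroring ssm->setCleanupHook(...) above.
        session->cleanupHook = [this, it] {
            std::lock_guard<std::mutex> lk(_mutex);
            _sessions.erase(it);
            std::cout << "sessions left: " << _sessions.size() << "\n";
        };
        return session;
    }

private:
    std::mutex _mutex;
    std::list<std::shared_ptr<FakeSession>> _sessions;  // analogous to SSMList _sessions
};

int main() {
    SessionRegistry registry;
    auto s1 = registry.startSession();
    auto s2 = registry.startSession();
    s1->end();  // erases itself from the registry, like the SSM cleanup hook
    s2->end();
}

A std::list is used because its iterators stay valid while other sessions are inserted or erased, which is why the hook can safely keep holding its iterator.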


namespace mongo {

class ServiceEntryPointMongod final : public ServiceEntryPointImpl {
    ServiceEntryPointMongod(const ServiceEntryPointMongod&) = delete;
    ServiceEntryPointMongod& operator=(const ServiceEntryPointMongod&) = delete;
public:
    using ServiceEntryPointImpl::ServiceEntryPointImpl;
    DbResponse handleRequest(OperationContext* opCtx, const Message& request) override;
private:
    class Hooks;
};
DbResponse ServiceEntryPointMongod::handleRequest(OperationContext* opCtx, const Message& m) {
    // Ultimately handled by ServiceEntryPointCommon::handleRequest
    return ServiceEntryPointCommon::handleRequest(opCtx, m, Hooks{});
}

// Handle the request and return the result
DbResponse ServiceEntryPointCommon::handleRequest(OperationContext* opCtx,
                                                  const Message& m,
                                                  const Hooks& behaviors);
} 


Core class: ServiceStateMachine

class ServiceStateMachine : public std::enable_shared_from_this<ServiceStateMachine>;
// All possible states:
 /*
     * Any state may transition to EndSession in case of an error, otherwise the valid state
     * transitions are:
     * Source -> SourceWait -> Process -> SinkWait -> Source (standard RPC)
     * Source -> SourceWait -> Process -> SinkWait -> Process -> SinkWait ... (exhaust)
     * Source -> SourceWait -> Process -> Source (fire-and-forget)
     */
    enum class State {
        Created,     // The session has been created, but no operations have been performed yet
        Source,      // Request a new Message from the network to handle
        SourceWait,  // Wait for the new Message to arrive from the network
        Process,     // Run the Message through the database
        SinkWait,    // Wait for the database result to be sent by the network
        EndSession,  // End the session - the ServiceStateMachine will be invalid after this
        Ended        // The session has ended. It is illegal to call any method besides
                     // state() if this is the current state.
    };


void ServiceStateMachine::start(Ownership ownershipModel) {
    _scheduleNextWithGuard(ThreadGuard(this),
                           transport::ServiceExecutor::kEmptyFlags,
                           transport::ServiceExecutorTaskName::kSSMStartSession,
                           ownershipModel);
}

void ServiceStateMachine::_scheduleNextWithGuard(ThreadGuard guard,
                                                 transport::ServiceExecutor::ScheduleFlags flags,
                                                 transport::ServiceExecutorTaskName taskName,
                                                 Ownership ownershipModel) {
    auto func = [ssm = shared_from_this(), ownershipModel] {
        ThreadGuard guard(ssm.get());
        if (ownershipModel == Ownership::kStatic)
            guard.markStaticOwnership();
        ssm->_runNextInGuard(std::move(guard));  // The thread function: run the next step
    };
    guard.release();
    // Schedule func for execution
    Status status = _serviceExecutor->schedule(std::move(func), flags, taskName);
    if (status.isOK()) {
        return;
    }
}

// Core of the state-machine transitions
void ServiceStateMachine::_runNextInGuard(ThreadGuard guard) {
    auto curState = state();
    dassert(curState != State::Ended);

    // If this is the first run of the SSM, then update its state to Source
    if (curState == State::Created) {
        curState = State::Source;
        _state.store(curState);
    }

    // Destroy the opCtx (already killed) here, to potentially use the delay between clients'
    // requests to hide the destruction cost.
    if (MONGO_likely(_killedOpCtx)) {
        _killedOpCtx.reset();
    }

    // Make sure the current Client got set correctly
    dassert(Client::getCurrent() == _dbClientPtr);
    try {
        switch (curState) {
            case State::Source:
                _sourceMessage(std::move(guard));
                break;
            case State::Process:
                _processMessage(std::move(guard));
                break;
            case State::EndSession:
                _cleanupSession(std::move(guard));
                break;
            default:
                MONGO_UNREACHABLE;
        }

        return;
    } catch (const DBException& e) {
    }
    if (!guard) {
        guard = ThreadGuard(this);
    }
    _state.store(State::EndSession);
    _cleanupSession(std::move(guard));
}
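
The transition rules quoted in the State enum can be pictured with a toy loop. This is only an illustration of the standard RPC cycle Source -> SourceWait -> Process -> SinkWait -> Source; it is not mongod's ServiceStateMachine, and the message handling is stubbed out with a counter.

#include <iostream>

enum class State { Created, Source, SourceWait, Process, SinkWait, EndSession, Ended };

// Return the next state; remainingRequests stands in for "the client sent another message".
State runNext(State s, int& remainingRequests) {
    switch (s) {
        case State::Created:
            return State::Source;
        case State::Source:      // ask the transport for a new message
            return State::SourceWait;
        case State::SourceWait:  // message arrived, or the client hung up
            return remainingRequests-- > 0 ? State::Process : State::EndSession;
        case State::Process:     // run the message through the database
            return State::SinkWait;
        case State::SinkWait:    // response flushed, go read the next request
            return State::Source;
        case State::EndSession:
            return State::Ended;
        case State::Ended:
            return State::Ended;
    }
    return State::Ended;
}

int main() {
    int remainingRequests = 2;  // pretend the client sends two commands, then disconnects
    for (State s = State::Created; s != State::Ended; s = runNext(s, remainingRequests)) {
        std::cout << static_cast<int>(s) << " -> ";
    }
    std::cout << "Ended\n";
}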

 

ServiceStateMachine: receiving messages with _sourceMessage

This still goes through the session's receive path, which in practice is ASIOSession's receive.

void ServiceStateMachine::_sourceMessage(ThreadGuard guard) {
    invariant(_inMessage.empty());
    invariant(_state.load() == State::Source);
    _state.store(State::SourceWait);
    guard.release();

    auto sourceMsgImpl = [&] {
        if (_transportMode == transport::Mode::kSynchronous) {
            MONGO_IDLE_THREAD_BLOCK;
            return Future<Message>::makeReady(_session()->sourceMessage());
        } else {  // Asynchronous mode takes this branch: receive asynchronously
            invariant(_transportMode == transport::Mode::kAsynchronous);
            return _session()->asyncSourceMessage();
        }
    };
    // Invoke the lambda above and register another lambda to run when the receive completes, then wait for the async callback
    sourceMsgImpl().getAsync([this](StatusWith<Message> msg) {
        if (msg.isOK()) {
            _inMessage = std::move(msg.getValue());
            invariant(!_inMessage.empty());
        }
        _sourceCallback(msg.getStatus());  // Once a message is received, move on to the next state and process it
    });
}

ServiceStateMachine: processing messages with _processMessage

Process the request and send the response asynchronously.

void ServiceStateMachine::_processMessage(ThreadGuard guard) {
    invariant(!_inMessage.empty());

    TrafficRecorder::get(_serviceContext)
        .observe(_sessionHandle, _serviceContext->getPreciseClockSource()->now(), _inMessage);
    // Decompress the message
    auto& compressorMgr = MessageCompressorManager::forSession(_session());

    _compressorId = boost::none;
    if (_inMessage.operation() == dbCompressed) {
        MessageCompressorId compressorId;
        auto swm = compressorMgr.decompressMessage(_inMessage, &compressorId);
        uassertStatusOK(swm.getStatus());
        _inMessage = swm.getValue();
        _compressorId = compressorId;
    }

    networkCounter.hitLogicalIn(_inMessage.size());
    // Create a new operation context
    // Pass sourced Message to handler to generate response.
    auto opCtx = Client::getCurrent()->makeOperationContext();
    if (_inExhaust) {
        opCtx->markKillOnClientDisconnect();
    }

    // Handle the request here; this is actually ServiceEntryPointMongod::handleRequest
    DbResponse dbresponse = _sep->handleRequest(opCtx.get(), _inMessage);

    _serviceContext->killAndDelistOperation(opCtx.get(), ErrorCodes::OperationIsKilledAndDelisted);
    invariant(!_killedOpCtx);
    _killedOpCtx = std::move(opCtx);

    // Build the response message
    Message& toSink = dbresponse.response;
    if (!toSink.empty()) {  // There is a response to send
        invariant(!OpMsg::isFlagSet(_inMessage, OpMsg::kMoreToCome));
        invariant(!OpMsg::isFlagSet(toSink, OpMsg::kChecksumPresent));

        // Update the header for the response message.
        toSink.header().setId(nextMessageId());
        toSink.header().setResponseToMsgId(_inMessage.header().getId());
        if (OpMsg::isFlagSet(_inMessage, OpMsg::kChecksumPresent)) {
#ifdef MONGO_CONFIG_SSL
            if (!SSLPeerInfo::forSession(_session()).isTLS) {
                OpMsg::appendChecksum(&toSink);
            }
#else
            OpMsg::appendChecksum(&toSink);
#endif
        }

        // If the incoming message has the exhaust flag set, then we bypass the normal RPC behavior.
        // We will sink the response to the network, but we also synthesize a new request, as if we
        // sourced a new message from the network. This new request is sent to the database once
        // again to be processed. This cycle repeats as long as the command indicates the exhaust
        // stream should continue.
        _inMessage = makeExhaustMessage(_inMessage, &dbresponse);
        _inExhaust = !_inMessage.empty();

        networkCounter.hitLogicalOut(toSink.size());

        if (_compressorId) {
            auto swm = compressorMgr.compressMessage(toSink, &_compressorId.value());
            uassertStatusOK(swm.getStatus());
            toSink = swm.getValue();
        }

        TrafficRecorder::get(_serviceContext)
            .observe(_sessionHandle, _serviceContext->getPreciseClockSource()->now(), toSink);
        // Send the response
        _sinkMessage(std::move(guard), std::move(toSink));

    } else {
        _state.store(State::Source);
        _inMessage.reset();
        _inExhaust = false;
        // Schedule the next step asynchronously and go back to the source (receive) state
        return _scheduleNextWithGuard(std::move(guard),
                                      ServiceExecutor::kDeferredTask,
                                      transport::ServiceExecutorTaskName::kSSMSourceMessage);
    }
}

 

Core class: task-scheduling executor ServiceExecutorReserved

This is the reserved executor, backed by a thread pool. Task scheduling happens here.

Status ServiceExecutorReserved::schedule(Task task,
                                         ScheduleFlags flags,
                                         ServiceExecutorTaskName taskName) {
    if (!_stillRunning.load()) {
        return Status{ErrorCodes::ShutdownInProgress, "Executor is not running"};
    }

    stdx::lock_guard<Latch> lk(_mutex);
    _readyTasks.push_back(std::move(task));
    _threadWakeup.notify_one();  // Wake one worker thread to run the task

    return Status::OK();
}

Status ServiceExecutorReserved::_startWorker() {
    return launchServiceWorkerThread([this] {  // The lambda is the thread function
        stdx::unique_lock<Latch> lk(_mutex);
        _numRunningWorkerThreads.addAndFetch(1);
        auto numRunningGuard = makeGuard([&] {
            _numRunningWorkerThreads.subtractAndFetch(1);
            _shutdownCondition.notify_one();
        });

        _numStartingThreads--;
        _numReadyThreads++;

        while (_stillRunning.load()) {  // Loop until shutdown
            // Block here when there is no work; woken up by the _threadWakeup.notify_one() above
            _threadWakeup.wait(lk, [&] { return (!_stillRunning.load() || !_readyTasks.empty()); });

            if (!_stillRunning.loadRelaxed()) {
                break;
            }

            if (_readyTasks.empty()) {
                continue;
            }

            auto task = std::move(_readyTasks.front());  // Take a task
            _readyTasks.pop_front();
            _numReadyThreads -= 1;
            bool launchReplacement = false;
            if (_numReadyThreads + _numStartingThreads < _reservedThreads) {
                _numStartingThreads++;
                launchReplacement = true;
            }

            lk.unlock();

            // Move the task from the global queue to the local queue
            _localWorkQueue.emplace_back(std::move(task));
            while (!_localWorkQueue.empty() && _stillRunning.loadRelaxed()) {
                _localRecursionDepth = 1;
                _localWorkQueue.front()();  // The real task executes here
                _localWorkQueue.pop_front();
            }

            lk.lock();
            if (_numReadyThreads + 1 > _reservedThreads) {
                break;
            } else {
                _numReadyThreads += 1;
            }
        }
    });
}
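
schedule() pushes a task and wakes exactly one sleeping worker, which pops the task and runs it outside the lock. Below is a stripped-down sketch of that queue-and-wakeup pattern using standard library primitives (TinyExecutor is an invented name; the thread re-spawning, local work queue recursion, and shutdown bookkeeping of ServiceExecutorReserved are omitted).

#include <chrono>
#include <condition_variable>
#include <deque>
#include <functional>
#include <iostream>
#include <mutex>
#include <thread>

class TinyExecutor {
public:
    TinyExecutor() : _worker([this] { _run(); }) {}

    ~TinyExecutor() {
        {
            std::lock_guard<std::mutex> lk(_mutex);
            _running = false;
        }
        _wakeup.notify_all();
        _worker.join();
    }

    void schedule(std::function<void()> task) {
        std::lock_guard<std::mutex> lk(_mutex);
        _readyTasks.push_back(std::move(task));
        _wakeup.notify_one();  // wake one worker, like _threadWakeup.notify_one()
    }

private:
    void _run() {
        std::unique_lock<std::mutex> lk(_mutex);
        while (_running) {
            // Block until there is work or we are shutting down.
            _wakeup.wait(lk, [&] { return !_running || !_readyTasks.empty(); });
            if (!_running)
                break;
            auto task = std::move(_readyTasks.front());
            _readyTasks.pop_front();
            lk.unlock();
            task();  // run the task outside the lock, like the local work queue loop
            lk.lock();
        }
    }

    std::mutex _mutex;
    std::condition_variable _wakeup;
    std::deque<std::function<void()>> _readyTasks;
    bool _running = true;
    std::thread _worker;
};

int main() {
    TinyExecutor executor;
    executor.schedule([] { std::cout << "task ran on a pool thread\n"; });
    std::this_thread::sleep_for(std::chrono::milliseconds(100));  // give the worker time to run
}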

// Launch a worker thread
Status launchServiceWorkerThread(std::function<void()> task) {

    try {
#if defined(_WIN32)
        stdx::thread(std::move(task)).detach();
#else
        pthread_attr_t attrs;
        pthread_attr_init(&attrs);
        pthread_attr_setdetachstate(&attrs, PTHREAD_CREATE_DETACHED);

        static const rlim_t kStackSize =
            1024 * 1024;  // if we change this we need to update the warning

        struct rlimit limits;
        invariant(getrlimit(RLIMIT_STACK, &limits) == 0);
        if (limits.rlim_cur > kStackSize) {
            size_t stackSizeToSet = kStackSize;
#if !__has_feature(address_sanitizer)
            if (kDebugBuild)
                stackSizeToSet /= 2;
#endif
            int failed = pthread_attr_setstacksize(&attrs, stackSizeToSet);
            if (failed) {
                const auto ewd = errnoWithDescription(failed);
                LOGV2_WARNING(22949,
                              "pthread_attr_setstacksize failed: {error}",
                              "pthread_attr_setstacksize failed",
                              "error"_attr = ewd);
            }
        } else if (limits.rlim_cur < 1024 * 1024) {
            LOGV2_WARNING(22950,
                          "Stack size set to {stackSizeKiB}KiB. We suggest 1024KiB",
                          "Stack size not set to suggested 1024KiB",
                          "stackSizeKiB"_attr = (limits.rlim_cur / 1024));
        }

        // Wrap the user-specified `task` so it runs with an installed `sigaltstack`.
        task = [sigAltStackController = std::make_shared<stdx::support::SigAltStackController>(),
                f = std::move(task)] {
            auto sigAltStackGuard = sigAltStackController->makeInstallGuard();
            f();
        };

        pthread_t thread;
        auto ctx = std::make_unique<std::function<void()>>(std::move(task));
        ThreadSafetyContext::getThreadSafetyContext()->onThreadCreate();
        int failed = pthread_create(&thread, &attrs, runFunc, ctx.get());

        pthread_attr_destroy(&attrs);

        if (failed) {
            LOGV2(22948,
                  "pthread_create failed: {errno}",
                  "pthread_create failed",
                  "error"_attr = errnoWithDescription(failed));
            throw std::system_error(
                std::make_error_code(std::errc::resource_unavailable_try_again));
        }

        ctx.release();
#endif

    } catch (...) {
        return {ErrorCodes::InternalError, "failed to create service entry worker thread"};
    }

    return Status::OK();
}

 
