TCP Server Implementation
1
The TcpServer class manages the TCP connections accepted by the Acceptor class.
Sequence diagram of TcpServer accepting a new connection:
2
muduo closes a TcpConnection passively: the peer closes the connection first, the local read(2) system call returns 0, and that return value triggers the close logic. The sequence diagram for closing a connection is as follows.
After the TcpConnection has been erase()d from the TcpServer, if the user holds no shared_ptr to the TcpConnection object, its reference count is 1: the Channel held by the Poller keeps only a weak_ptr to the TcpConnection, and that weak_ptr is promoted to a shared_ptr for the duration of Channel::handleEvent(), so exactly one shared_ptr remains alive during the callback.
The following sections all relate to TcpConnection.
3
When reading, incoming data is stored in a Buffer object whose initial size is 8+1024 bytes (8 bytes of prepend space plus 1024 bytes of initial storage). Buffer::readFd() reads the data from the socket fd into the buffer; if the initial buffer is too small, the method reads the overflow into a char extrabuf[65536] array on the stack. That 64 KiB array can absorb roughly 500 µs of input on a saturated gigabit (1 Gbps) link, so a single readv(2) call usually suffices.
ssize_t Buffer::readFd(int fd, int* savedErrno)
{
  // saved an ioctl()/FIONREAD call to tell how much to read
  char extrabuf[65536];
  struct iovec vec[2];
  const size_t writable = writableBytes();
  vec[0].iov_base = begin()+writerIndex_;
  vec[0].iov_len = writable;
  vec[1].iov_base = extrabuf;
  vec[1].iov_len = sizeof extrabuf;
  // when there is enough space in this buffer, don't read into extrabuf.
  // when extrabuf is used, we read 128k-1 bytes at most.
  const int iovcnt = (writable < sizeof extrabuf) ? 2 : 1;
  const ssize_t n = sockets::readv(fd, vec, iovcnt);
  if (n < 0)
  {
    *savedErrno = errno;
  }
  else if (implicit_cast<size_t>(n) <= writable)
  {
    writerIndex_ += n;
  }
  else
  {
    writerIndex_ = buffer_.size();
    append(extrabuf, n - writable);
  }
  // if (n == writable + sizeof extrabuf)
  // {
  //   read again;
  // }
  return n;
}
4
Calling shutdown() closes only the local write end, i.e. it disables the local TcpConnection's write side (a half-close; reading is still supported). The close of the remote TcpConnection's write end is then triggered by the passive-close path described earlier.
void TcpConnection::shutdown()
{
  // FIXME: use compare and swap
  if (state_ == kConnected)
  {
    setState(kDisconnecting);
    // FIXME: shared_from_this()?
    loop_->runInLoop(std::bind(&TcpConnection::shutdownInLoop, this));
  }
}

void TcpConnection::shutdownInLoop()
{
  loop_->assertInLoopThread();
  if (!channel_->isWriting()) // if the output buffer still holds data, don't close the write end yet
  {
    // we are not writing
    socket_->shutdownWrite();
  }
}
5
The overloaded send() methods all end up calling sendInLoop().
When sending, sendInLoop() first tries to write directly in the event loop. If not all of the data is written, the remainder is appended to outputBuffer_ and a POLLOUT event is registered with the poller via Channel::enableWriting(). (Note when POLLOUT fires: under level-triggered polling it fires as long as the socket is writable, which is why writing interest must be disabled once outputBuffer_ drains.)
void TcpConnection::send(const StringPiece& message)
{
  if (state_ == kConnected)
  {
    if (loop_->isInLoopThread())
    {
      sendInLoop(message);
    }
    else
    {
      void (TcpConnection::*fp)(const StringPiece& message) = &TcpConnection::sendInLoop;
      loop_->runInLoop(
          std::bind(fp,
                    this,     // FIXME
                    message.as_string()));
                    //std::forward<string>(message)));
    }
  }
}

void TcpConnection::sendInLoop(const StringPiece& message)
{
  sendInLoop(message.data(), message.size());
}

void TcpConnection::sendInLoop(const void* data, size_t len)
{
  loop_->assertInLoopThread();
  ssize_t nwrote = 0;
  size_t remaining = len;
  bool faultError = false;
  if (state_ == kDisconnected)
  {
    LOG_WARN << "disconnected, give up writing";
    return;
  }
  // if nothing is in the output queue, try writing directly
  if (!channel_->isWriting() && outputBuffer_.readableBytes() == 0)
  {
    nwrote = sockets::write(channel_->fd(), data, len);
    if (nwrote >= 0)
    {
      remaining = len - nwrote;
      if (remaining == 0 && writeCompleteCallback_)
      {
        loop_->queueInLoop(std::bind(writeCompleteCallback_, shared_from_this()));
      }
    }
    else // nwrote < 0
    {
      nwrote = 0;
      if (errno != EWOULDBLOCK)
      {
        LOG_SYSERR << "TcpConnection::sendInLoop";
        if (errno == EPIPE || errno == ECONNRESET) // FIXME: any others?
        {
          faultError = true;
        }
      }
    }
  }

  assert(remaining <= len);
  if (!faultError && remaining > 0)
  {
    size_t oldLen = outputBuffer_.readableBytes();
    if (oldLen + remaining >= highWaterMark_
        && oldLen < highWaterMark_
        && highWaterMarkCallback_)
    {
      loop_->queueInLoop(std::bind(highWaterMarkCallback_, shared_from_this(), oldLen + remaining));
    }
    outputBuffer_.append(static_cast<const char*>(data)+nwrote, remaining);
    if (!channel_->isWriting())
    {
      channel_->enableWriting();
    }
  }
}
When a POLLOUT event fires, TcpConnection::handleWrite(), the WriteCallback registered with the Channel when the TcpConnection was constructed, drains the data in outputBuffer_.
// TcpConnection constructor
TcpConnection::TcpConnection(EventLoop* loop,
                             const string& nameArg,
                             int sockfd,
                             const InetAddress& localAddr,
                             const InetAddress& peerAddr)
  : loop_(CHECK_NOTNULL(loop)),
    name_(nameArg),
    state_(kConnecting),
    reading_(true),
    socket_(new Socket(sockfd)),
    channel_(new Channel(loop, sockfd)),
    localAddr_(localAddr),
    peerAddr_(peerAddr),
    highWaterMark_(64*1024*1024)
{
  channel_->setReadCallback(
      std::bind(&TcpConnection::handleRead, this, _1));
  channel_->setWriteCallback(
      std::bind(&TcpConnection::handleWrite, this));
  channel_->setCloseCallback(
      std::bind(&TcpConnection::handleClose, this));
  channel_->setErrorCallback(
      std::bind(&TcpConnection::handleError, this));
  LOG_DEBUG << "TcpConnection::ctor[" << name_ << "] at " << this
            << " fd=" << sockfd;
  socket_->setKeepAlive(true);
}
// write callback registered with the Channel
void TcpConnection::handleWrite()
{
  loop_->assertInLoopThread();
  if (channel_->isWriting())
  {
    ssize_t n = sockets::write(channel_->fd(),
                               outputBuffer_.peek(),
                               outputBuffer_.readableBytes());
    if (n > 0)
    {
      outputBuffer_.retrieve(n);
      if (outputBuffer_.readableBytes() == 0)
      {
        channel_->disableWriting();
        if (writeCompleteCallback_)
        {
          loop_->queueInLoop(std::bind(writeCompleteCallback_, shared_from_this()));
        }
        if (state_ == kDisconnecting)
        {
          shutdownInLoop();
        }
      }
    }
    else
    {
      LOG_SYSERR << "TcpConnection::handleWrite";
      // if (state_ == kDisconnecting)
      // {
      //   shutdownInLoop();
      // }
    }
  }
  else
  {
    LOG_TRACE << "Connection fd = " << channel_->fd()
              << " is down, no more writing";
  }
}
6
Flow control on a TcpConnection can use the WriteCompleteCallback and the HighWaterMarkCallback. WriteCompleteCallback is invoked from TcpConnection::handleWrite() and TcpConnection::sendInLoop(); HighWaterMarkCallback is invoked from TcpConnection::sendInLoop() only.
7
A multi-threaded TcpServer is implemented with 1+N EventLoops.
1: the TcpServer's own EventLoop accepts new connections;
N (N >= 0): an EventLoopThreadPool supplies N additional loops, and each new connection is assigned one of them in round-robin fashion.