Thrift Structure Analysis (the sequence diagrams are excellent and well worth studying!) and Adding Client-IP Retrieval

Getting the Client IP Address

There are three ways to obtain the client IP:

1) The Thrift client talks to nginx, and an upstream plugin on the nginx side parses each Thrift request it receives and appends an extra field, with field id 32767, carrying the client's IP address. Because the plugin adds a field, nginx and the backend server must share the same Thrift file, the one with the extra IP field, while the client and nginx use a Thrift file without that IP field.

2) The approach described in the blog post http://blog.csdn.net/hbuxiaoshe/article/details/38942869 also works, but it feels somewhat awkward;

3) Modify the Thrift implementation. This approach suits a Thrift client connected directly to the Thrift server: override TServerEventHandler, whose callbacks the server triggers once a client connects, as in the handler below.

class ClientIPHandler : virtual public TServerEventHandler {
public:
    ClientIPHandler() {
    }
    virtual ~ClientIPHandler() {
    }
    std::string GetThriftClientIp() {
//        lock::MutexLock g(&mutex);
//        return thrift_client_ip[pthread_self()];
        return "";
    }
    virtual void preServe() {
        std::cout << " call preServe " << std::endl;
    }
    virtual void* createContext(boost::shared_ptr<TProtocol> input,
                                boost::shared_ptr<TProtocol> output) {
        std::cout << " call createContext " << std::endl;
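        // Note: with TNonblockingServer the input transport handed to createContext()
        // is a TMemoryBuffer (see section 8), so this dynamic_cast returns NULL and
        // the "tbuf == null" branch below is taken.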
        TFramedTransport *tbuf = dynamic_cast<TFramedTransport *>(input->getTransport().get());
        if ( !tbuf ) {
            std::cout << " tbuf == null" << std::endl;
        } else {
            TSocket *sock = dynamic_cast<TSocket *>(tbuf->getUnderlyingTransport().get());
            if (sock) {
                std::cout << " ip=" << sock->getPeerAddress() << std::endl;
            }
        }
        // insert when the connection opens
//        TBufferedTransport *tbuf = dynamic_cast<TBufferedTransport *>(input->getTransport().get());
//        TSocket *sock = dynamic_cast<TSocket *>(tbuf->getUnderlyingTransport().get());
//        lock::MutexLock g(&mutex);
//        thrift_client_ip[pthread_self()] = sock->getPeerAddress();
        return NULL;
    }
    virtual void deleteContext(void* serverContext,
                               boost::shared_ptr<TProtocol>input,
                               boost::shared_ptr<TProtocol>output) {
        std::cout << " call deleteContext " << std::endl;
        // erase when the connection closes
//        lock::MutexLock g(&mutex);
//        thrift_client_ip.erase(pthread_self());
    }
    virtual void processContext(void* serverContext, boost::shared_ptr<TTransport> transport) {
        std::cout << " call processContext " << std::endl;

        TSocket *tsocket = static_cast<TSocket*>(transport.get());
        if (tsocket) {
            struct sockaddr* addrPtr;
            socklen_t addrLen;
            addrPtr = tsocket->getCachedAddress(&addrLen);
            // serverContext is whatever createContext() returned; it must point to a
            // writable buffer for getnameinfo() to fill (createContext() above returns
            // NULL, so this branch is effectively skipped here).
            if (addrPtr && serverContext) {
                getnameinfo(addrPtr, addrLen, static_cast<char*>(serverContext), 32, NULL, 0, 0);
                std::cout << "111111111---------------------------------------" << std::endl;
                std::cout << "serverContext=" << static_cast<char*>(serverContext) << std::endl;
                std::cout << "111111111---------------------------------------" << std::endl;
            }
            {
                TSocket *sock = static_cast<TSocket *>(transport.get());
                if (sock)
                {
                    std::cout << "222222222---------------------------------------" << std::endl;
                    std::cout << "getPeerAddress=" << sock->getPeerAddress() << std::endl;
                    std::cout << "getSocketInfo=" << sock->getSocketInfo() << std::endl;
                    std::cout << "222222222---------------------------------------" << std::endl;
                }
            }
        }
    }

private:
};
void tst_transfer_server_entry() {
    OutputDbgInfo tmpOut( "tst_transfer_server_entry begin", "tst_transfer_server_entry end" ) ;
    int srv_port = 9090;

//    tst_thrift_threadmanager_fun();
//    return ;

    /*common::*/CThriftServerHelper<PhotoHandler, PhotoProcessor> thrift_server_agent((new ClientIPHandler), false);
    thrift_server_agent.serve(srv_port);
    return;
}
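For reference, here is a minimal sketch of wiring the same ClientIPHandler into a stock TNonblockingServer without the author's CThriftServerHelper wrapper. It assumes the 0.9.x C++ API and the generated code of the EchoService example from section 2 (EchoHandler, EchoServiceProcessor); it is an illustration, not the original setup.

#include <thrift/concurrency/ThreadManager.h>
#include <thrift/concurrency/PosixThreadFactory.h>
#include <thrift/protocol/TBinaryProtocol.h>
#include <thrift/server/TNonblockingServer.h>
// plus the headers generated for EchoService

using namespace apache::thrift;
using namespace apache::thrift::concurrency;
using namespace apache::thrift::protocol;
using namespace apache::thrift::server;

void run_server_with_event_handler() {
    boost::shared_ptr<EchoHandler> handler(new EchoHandler());
    boost::shared_ptr<TProcessor> processor(new EchoServiceProcessor(handler));
    boost::shared_ptr<TProtocolFactory> protocolFactory(new TBinaryProtocolFactory());

    // Worker threads that execute the handler code (see section 9).
    boost::shared_ptr<ThreadManager> threadManager = ThreadManager::newSimpleThreadManager(4);
    boost::shared_ptr<PosixThreadFactory> threadFactory(new PosixThreadFactory());
    threadManager->threadFactory(threadFactory);
    threadManager->start();

    TNonblockingServer server(processor, protocolFactory, 9090, threadManager);

    // createContext()/processContext()/deleteContext() of ClientIPHandler are now
    // invoked around every connection.
    boost::shared_ptr<TServerEventHandler> eventHandler(new ClientIPHandler());
    server.setServerEventHandler(eventHandler);
    server.serve();   // blocks in the libevent loop
}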

1. Introduction

The motivation for analyzing Thrift's structure is to let the server obtain the client's IP, which requires some understanding of its structure and call flow. Also note that this article covers TNonblockingServer only, not TThreadPoolServer, TThreadedServer, or TSimpleServer.

2. Example Service

service EchoService
{
    void hello();
}

class EchoHandler : public EchoServiceIf {
private:
    virtual void hello();
};
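For a service named EchoService, the Thrift compiler generates, among other things, EchoServiceIf, EchoServiceProcessor, and EchoServiceClient for C++. A trivial handler implementation, shown only as a sketch, could be:

void EchoHandler::hello() {
    // The actual work of the RPC; here it only logs.
    std::cout << "hello() called" << std::endl;
}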

3. Class Diagram of the Networking Part

Thrift's threading model consists of a number of I/O threads (TNonblockingIOThread, responsible for sending and receiving data on established TCP connections) plus a main thread (responsible for listening for and accepting TCP connections).

The main thread is not necessarily the process's main thread: whichever thread calls TServer::run() or TServer::serve() is the main thread in the sense used here. In the latest Thrift version at the time of writing (0.9.2), either TServer::run() or TServer::serve() may be called, because TServer::run() does nothing except unconditionally call TServer::serve(). The call to TServer::serve() actually executes serve() of TNonblockingServer, the TServer implementation class.

In short, TNonblockingIOThread handles sending and receiving data, while TNonblockingServer handles accepting connection requests.

Be aware that the thread (or process) calling TServer::run() or TServer::serve() blocks: it enters libevent's event loop, which on Linux loops forever around epoll_wait().
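If the calling thread must not block, serve() can be run on a dedicated thread. The following is a small sketch (not part of the original article) using boost::thread and relying on TNonblockingServer::stop() to make serve() return:

#include <boost/thread.hpp>
#include <boost/bind.hpp>
#include <thrift/server/TNonblockingServer.h>

using apache::thrift::server::TNonblockingServer;

void serve_in_background(TNonblockingServer& server) {
    boost::thread serverThread(boost::bind(&TNonblockingServer::serve, &server));
    // ... do other work while the server runs ...
    server.stop();        // asks the event loop to exit
    serverThread.join();  // serve() has returned
}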

4.1. Startup Preparation

The preparation consists of:

1) Start listening for connections

2) Start the data send/receive (I/O) threads

3) Initialize the runtime environment

This is where the first callback into TServerEventHandler happens: preServe() is invoked just before the server begins serving.

4.2. Accepting Connections

The connection-acceptance sequence shows that TServerEventHandler::createContext() is called before the connection's TConnection receives any data. This is one opportunity to obtain the client IP, but the current implementation does not pass the relevant information (the socket) as a parameter to TServerEventHandler::createContext(); moreover, with TNonblockingServer the input protocol handed to createContext() sits on a TMemoryBuffer (see section 8), so the transport casts in ClientIPHandler::createContext() above come back NULL.

4.3. Sending/Receiving Data: Executing the Call

During this phase, TServerEventHandler::processContext(connectionContext_, getTSocket()) is called back with the TSocket passed in, which is why processContext() is the reliable place to read the peer address.

6. TProtocol

TProtocol provides serialization and deserialization: it defines how messages are encoded and decoded. Its implementations include:

1) TBinaryProtocol: binary encoding

2) TDebugProtocol: human-readable text encoding for debugging (see the sketch after this list)

3) TJSONProtocol: JSON-based encoding

4) TCompactProtocol: compact binary encoding
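As a small illustration of where TDebugProtocol is useful: it backs the ThriftDebugString() helper, which renders a generated struct in readable form. SomeRequest below is a hypothetical generated struct, not part of the example service:

#include <thrift/protocol/TDebugProtocol.h>

void dump_request(const SomeRequest& req) {  // SomeRequest: hypothetical IDL struct
    std::cout << apache::thrift::ThriftDebugString(req) << std::endl;
}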

If you need to add a new data type to Thrift, you must modify TProtocol and implement serialization and deserialization for the new type.

7. TTransport

TTransport is responsible for sending and receiving data. It can simply wrap a socket, but non-socket transports such as pipes are also supported. TSocket is the transport used with TServerSocket.
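As an illustration of how the pieces stack on the client side, here is a minimal sketch (assuming the EchoService generated code and that the server is a TNonblockingServer, which requires TFramedTransport on the wire):

#include <thrift/transport/TSocket.h>
#include <thrift/transport/TBufferTransports.h>
#include <thrift/protocol/TBinaryProtocol.h>

using namespace apache::thrift::transport;
using namespace apache::thrift::protocol;

void call_hello_once() {
    boost::shared_ptr<TTransport> socket(new TSocket("127.0.0.1", 9090));
    boost::shared_ptr<TTransport> transport(new TFramedTransport(socket)); // framing for TNonblockingServer
    boost::shared_ptr<TProtocol> protocol(new TBinaryProtocol(transport)); // wire encoding
    EchoServiceClient client(protocol);

    transport->open();
    client.hello();    // generated stub: serialize, send, wait for the reply
    transport->close();
}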

8. TProtocol & TTransport

TNonblockingServer by default uses TMemoryBuffer as the TTransport for both input and output.
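As a tiny sketch of the TProtocol/TTransport pairing, for illustration only: TBinaryProtocol encodes into whatever TTransport it is given, here a TMemoryBuffer, the same in-memory transport TNonblockingServer hands it:

#include <string>
#include <thrift/transport/TBufferTransports.h>
#include <thrift/protocol/TBinaryProtocol.h>

using apache::thrift::transport::TMemoryBuffer;
using apache::thrift::protocol::TBinaryProtocol;

void protocol_roundtrip_demo() {
    boost::shared_ptr<TMemoryBuffer> buffer(new TMemoryBuffer());
    TBinaryProtocol proto(buffer);

    proto.writeI16(42);                        // the protocol encodes...
    proto.writeString(std::string("hello"));   // ...into the memory transport

    int16_t i = 0;
    std::string s;
    proto.readI16(i);        // decode in the same order
    proto.readString(s);
}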

TProtocol itself has no buffer; it only serializes and deserializes, and it relies on a TTransport to actually move the bytes. Take TBinaryProtocol as an example:

template <class Transport_, class ByteOrder_>
uint32_t TBinaryProtocolT<Transport_, ByteOrder_>::writeI16(const int16_t i16) {
    int16_t net = (int16_t)ByteOrder_::toWire16(i16);
    this->trans_->write((uint8_t*)&net, 2);
    return 2;
}

For comparison, here is the corresponding TTransport-side implementation, TSocket::write():

void TSocket::write(const uint8_t* buf, uint32_t len) {
    uint32_t sent = 0;

    while (sent < len) {
        uint32_t b = write_partial(buf + sent, len - sent); // write_partial() calls send() internally
        if (b == 0) {
            // This should only happen if the timeout set with SO_SNDTIMEO expired.
            // Raise an exception.
            throw TTransportException(TTransportException::TIMED_OUT, "send timeout expired");
        }
        sent += b;
    }
}
uint32_t TSocket::write_partial(const uint8_t* buf, uint32_t len) {
  if (socket_ == THRIFT_INVALID_SOCKET) {
    throw TTransportException(TTransportException::NOT_OPEN, "Called write on non-open socket");
  }

  uint32_t sent = 0;

  int flags = 0;
#ifdef MSG_NOSIGNAL
  // Note the use of MSG_NOSIGNAL to suppress SIGPIPE errors, instead we
  // check for the THRIFT_EPIPE return condition and close the socket in that case
  flags |= MSG_NOSIGNAL;
#endif // ifdef MSG_NOSIGNAL

  int b = static_cast<int>(send(socket_, const_cast_sockopt(buf + sent), len - sent, flags));
  if (b < 0) {
    if (THRIFT_GET_SOCKET_ERROR == THRIFT_EWOULDBLOCK || THRIFT_GET_SOCKET_ERROR == THRIFT_EAGAIN) {
      return 0;
    }
    // Fail on a send error
    int errno_copy = THRIFT_GET_SOCKET_ERROR;
    GlobalOutput.perror("TSocket::write_partial() send() " + getSocketInfo(), errno_copy);

    if (errno_copy == THRIFT_EPIPE || errno_copy == THRIFT_ECONNRESET
        || errno_copy == THRIFT_ENOTCONN) {
      close();
      throw TTransportException(TTransportException::NOT_OPEN, "write() send()", errno_copy);
    }

    throw TTransportException(TTransportException::UNKNOWN, "write() send()", errno_copy);
  }

  // Fail on blocked send
  if (b == 0) {
    throw TTransportException(TTransportException::NOT_OPEN, "Socket send returned 0.");
  }
  return b;
}

9. Data Flow

The client sends data; when the server receives it, a libevent event fires and the Transport is used to read the data. Once a complete request has been received, the Protocol deserializes it and the server-side handler code is invoked.

The first half happens in an I/O thread, the second half in a worker thread.

 
