Learn Some Framework-4 Native Binder And ServiceManager

Native Service

In the previous chapters we walked through the Android boot process up to the HOME screen. Before moving on to other topics, we need to understand Android's IPC mechanism.

Every developer, when asked about the four application components, knows about Service; and when asked about Android's home-grown IPC mechanism, the answer comes just as quickly: Binder.

So what exactly is Binder? How does it work? How does it differ from traditional Linux IPC mechanisms?

Before answering these questions, let's introduce a concept: the Native Service. Through Native Services we will see how Binder is wrapped and how it works at the native layer; later, when we return to Java, you will find that the Java-level Binder is merely a conceptual wrapper around the native Binder.

In an earlier chapter we described how the init process starts Zygote. In fact, while bringing up Zygote, init also brings up the services registered in the rc files:

service vold /system/bin/vold \
        --blkid_context=u:r:blkid:s0 --blkid_untrusted_context=u:r:blkid_untrusted:s0 \
        --fsck_context=u:r:fsck:s0 --fsck_untrusted_context=u:r:fsck_untrusted:s0
    class core
    socket vold stream 0660 root mount
    socket cryptd stream 0660 root mount
    ioprio be 2

service netd /system/bin/netd
    class main
    socket netd stream 0660 root system
    socket dnsproxyd stream 0660 root inet
    socket mdns stream 0660 root system
    socket fwmarkd stream 0660 root inet

service debuggerd /system/bin/debuggerd
    class main
    writepid /dev/cpuset/system-background/tasks

service debuggerd64 /system/bin/debuggerd64
    class main
    writepid /dev/cpuset/system-background/tasks

service ril-daemon /system/bin/rild
    class main
    socket rild stream 660 root radio
    socket sap_uim_socket1 stream 660 bluetooth bluetooth
    socket rild-debug stream 660 radio system
    user root
    group radio cache inet misc audio log

service surfaceflinger /system/bin/surfaceflinger
    class core
    user system
    group graphics drmrpc
    onrestart restart zygote
    writepid /dev/cpuset/system-background/tasks

service drm /system/bin/drmserver
    class main
    user drm
    group drm system inet drmrpc

service media /system/bin/mediaserver
    class main
    user media
    group audio camera inet net_bt net_bt_admin net_bw_acct drmrpc mediadrm

These services ultimately take the form of Linux processes. As we go on you will see that, unlike a Java Service, they are no longer a "carrier" concept: they are real, concrete processes.

These processes are the Native Services.

Let's take surfaceflinger as an example to see what a native service really is:

class SurfaceFlinger : public BnSurfaceComposer,
                       private IBinder::DeathRecipient,
                       private HWComposer::EventHandler

Note that it derives from a class called BnSurfaceComposer. Look at a few more services and you will find that all Native Services derive from a class following the Bn<INTERFACE> pattern. Now look at BnSurfaceComposer:

class BnSurfaceComposer: public BnInterface<ISurfaceComposer>
/*
 * This class defines the Binder IPC interface for accessing various
 * SurfaceFlinger features.
 */
class ISurfaceComposer: public IInterface

As the comment says, ISurfaceComposer, which derives from IInterface, defines an IPC interface. In other words, if you think of Binder as the connector for IPC, then the "proxies" at both ends of the Binder must implement the ISurfaceComposer protocol in order to exchange messages.

Let's park these concepts here for now. If they are not yet clear, read through the whole article first and come back to them when needed.


ServiceManager

Since there are Native Services, there must be clients that these services serve. So how does a client find the service it needs?

As we just said, a Native Service is a process. For another process, figuring out where the process it needs lives would be a huge amount of work: probing candidates one by one is inefficient, and with thousands of services the lookup would become unbearable.

It is just like making a phone call: if A wants to reach B and has to try every possible number, the convenience of the telephone is lost. So we have a directory service: A calls the directory, asks for B's number, and then dials B. Android borrows the same idea, and the role of the directory is played by a process called ServiceManager. Let's look at the code:

service servicemanager /system/bin/servicemanager
    class core
    user system
    group system
    critical
    onrestart restart healthd
    onrestart restart zygote
    onrestart restart media
    onrestart restart surfaceflinger
    onrestart restart drm

This is the entry that registers servicemanager in the rc file. It is easy to see that ServiceManager is itself a service, just a rather special one. Special how?

The surfaceflinger we just examined is written in C++ and derives from a parent class, whereas servicemanager is written in C: it is simply a service-level process with no object-oriented structure.

Moreover, it is the very core of what we call Binder. Why? Let's look at its main function:

int main(int argc, char **argv)
{
    struct binder_state *bs;

    bs = binder_open(128*1024);
    if (!bs) {
        ALOGE("failed to open binder driver\n");
        return -1;
    }

    if (binder_become_context_manager(bs)) {
        ALOGE("cannot become context manager (%s)\n", strerror(errno));
        return -1;
    }

    selinux_enabled = is_selinux_enabled();
    sehandle = selinux_android_service_context_handle();
    selinux_status_open(true);

    if (selinux_enabled > 0) {
        if (sehandle == NULL) {
            ALOGE("SELinux: Failed to acquire sehandle. Aborting.\n");
            abort();
        }

        if (getcon(&service_manager_context) != 0) {
            ALOGE("SELinux: Failed to acquire service_manager context. Aborting.\n");
            abort();
        }
    }

    union selinux_callback cb;
    cb.func_audit = audit_callback;
    selinux_set_callback(SELINUX_CB_AUDIT, cb);
    cb.func_log = selinux_log_callback;
    selinux_set_callback(SELINUX_CB_LOG, cb);

    binder_loop(bs, svcmgr_handler);

    return 0;
}

First, binder_open is called and returns a binder_state structure. Immediately afterwards, binder_become_context_manager tells the binder driver, through the opened fd, that this process is to act as the context manager. After a series of SELinux setup steps, servicemanager gets down to business: a loop called binder_loop, in which it keeps reading data from the fd and handing it to the svcmgr_handler callback for processing.
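
Of these steps, binder_become_context_manager is the simplest: it is essentially a single ioctl telling the driver that this process owns handle 0. A sketch of its implementation in the same binder.c (it may differ slightly across Android versions):

int binder_become_context_manager(struct binder_state *bs)
{
    /* register this process with the driver as the context manager (handle 0) */
    return ioctl(bs->fd, BINDER_SET_CONTEXT_MGR, 0);
}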


Let's now look at the implementation of binder_open:

struct binder_state *binder_open(size_t mapsize)
{
    struct binder_state *bs;
    struct binder_version vers;

    bs = malloc(sizeof(*bs));
    if (!bs) {
        errno = ENOMEM;
        return NULL;
    }

    bs->fd = open("/dev/binder", O_RDWR);
    if (bs->fd < 0) {
        fprintf(stderr,"binder: cannot open device (%s)\n",
                strerror(errno));
        goto fail_open;
    }

    if ((ioctl(bs->fd, BINDER_VERSION, &vers) == -1) ||
        (vers.protocol_version != BINDER_CURRENT_PROTOCOL_VERSION)) {
        fprintf(stderr,
                "binder: kernel driver version (%d) differs from user space version (%d)\n",
                vers.protocol_version, BINDER_CURRENT_PROTOCOL_VERSION);
        goto fail_open;
    }

    bs->mapsize = mapsize;
    bs->mapped = mmap(NULL, mapsize, PROT_READ, MAP_PRIVATE, bs->fd, 0);
    if (bs->mapped == MAP_FAILED) {
        fprintf(stderr,"binder: cannot map device (%s)\n",
                strerror(errno));
        goto fail_map;
    }

    return bs;

fail_map:
    close(bs->fd);
fail_open:
    free(bs);
    return NULL;
}

This code is easy to follow. First the bs variable is allocated; this is the binder_state structure mentioned above and is what the function eventually returns.

Next, the /dev/binder device is opened, and its file descriptor becomes our handle to the binder driver; shortly afterwards this fd is what gets promoted to context manager.

Then bs->mapsize is set to the argument, 128 KB, and the crucial step follows: mmap maps 128 KB of the binder device into this process, so that data delivered through the fd can later be read from this mapping. Finally bs is returned, and the Binder we keep talking about is initialized. Seen this way, Binder is really an evolution of shared memory, one of the traditional IPC mechanisms.
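
For reference, binder_state itself is just a small bookkeeping structure (as defined in servicemanager's binder.c; the field layout may vary slightly by version):

struct binder_state
{
    int fd;          /* fd of /dev/binder */
    void *mapped;    /* start of the mmap'ed region */
    size_t mapsize;  /* size of the mapping, 128 KB here */
};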


Next, let's see what svcmgr_handler can do:

int svcmgr_handler(struct binder_state *bs,
                   struct binder_transaction_data *txn,
                   struct binder_io *msg,
                   struct binder_io *reply)
{
    struct svcinfo *si;
    uint16_t *s;
    size_t len;
    uint32_t handle;
    uint32_t strict_policy;
    int allow_isolated;

    //ALOGI("target=%p code=%d pid=%d uid=%d\n",
    //      (void*) txn->target.ptr, txn->code, txn->sender_pid, txn->sender_euid);

    if (txn->target.ptr != BINDER_SERVICE_MANAGER)
        return -1;

    if (txn->code == PING_TRANSACTION)
        return 0;

    // Equivalent to Parcel::enforceInterface(), reading the RPC
    // header with the strict mode policy mask and the interface name.
    // Note that we ignore the strict_policy and don't propagate it
    // further (since we do no outbound RPCs anyway).
    strict_policy = bio_get_uint32(msg);
    s = bio_get_string16(msg, &len);
    if (s == NULL) {
        return -1;
    }

    if ((len != (sizeof(svcmgr_id) / 2)) ||
        memcmp(svcmgr_id, s, sizeof(svcmgr_id))) {
        fprintf(stderr,"invalid id %s\n", str8(s, len));
        return -1;
    }

    if (sehandle && selinux_status_updated() > 0) {
        struct selabel_handle *tmp_sehandle = selinux_android_service_context_handle();
        if (tmp_sehandle) {
            selabel_close(sehandle);
            sehandle = tmp_sehandle;
        }
    }

    switch(txn->code) {
    case SVC_MGR_GET_SERVICE:
    case SVC_MGR_CHECK_SERVICE:
        s = bio_get_string16(msg, &len);
        if (s == NULL) {
            return -1;
        }
        handle = do_find_service(bs, s, len, txn->sender_euid, txn->sender_pid);
        if (!handle)
            break;
        bio_put_ref(reply, handle);
        return 0;

    case SVC_MGR_ADD_SERVICE:
        s = bio_get_string16(msg, &len);
        if (s == NULL) {
            return -1;
        }
        handle = bio_get_ref(msg);
        allow_isolated = bio_get_uint32(msg) ? 1 : 0;
        if (do_add_service(bs, s, len, handle, txn->sender_euid,
            allow_isolated, txn->sender_pid))
            return -1;
        break;

    case SVC_MGR_LIST_SERVICES: {
        uint32_t n = bio_get_uint32(msg);

        if (!svc_can_list(txn->sender_pid)) {
            ALOGE("list_service() uid=%d - PERMISSION DENIED\n",
                    txn->sender_euid);
            return -1;
        }
        si = svclist;
        while ((n-- > 0) && si)
            si = si->next;
        if (si) {
            bio_put_string16(reply, si->name);
            return 0;
        }
        return -1;
    }
    default:
        ALOGE("unknown code %d\n", txn->code);
        return -1;
    }

    bio_put_uint32(reply, 0);
    return 0;
}

It is easy to see that it mainly handles four kinds of requests:

enum {
    /* Must match definitions in IBinder.h and IServiceManager.h */
    PING_TRANSACTION  = B_PACK_CHARS('_','P','N','G'),
    SVC_MGR_GET_SERVICE = 1,
    SVC_MGR_CHECK_SERVICE,
    SVC_MGR_ADD_SERVICE,
    SVC_MGR_LIST_SERVICES,
};

Keep the values of these four codes in mind; we will use them shortly.
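
On the client side, IServiceManager.h defines matching transaction codes. Since IBinder::FIRST_CALL_TRANSACTION equals 1, they line up one-to-one with the values above (reproduced here from memory, so treat it as a sketch):

enum {
    GET_SERVICE_TRANSACTION = IBinder::FIRST_CALL_TRANSACTION, // SVC_MGR_GET_SERVICE
    CHECK_SERVICE_TRANSACTION,                                 // SVC_MGR_CHECK_SERVICE
    ADD_SERVICE_TRANSACTION,                                   // SVC_MGR_ADD_SERVICE
    LIST_SERVICES_TRANSACTION,                                 // SVC_MGR_LIST_SERVICES
};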


Finally, let's see what binder_loop does:

void binder_loop(struct binder_state *bs, binder_handler func)
{
    int res;
    struct binder_write_read bwr;
    uint32_t readbuf[32];

    bwr.write_size = 0;
    bwr.write_consumed = 0;
    bwr.write_buffer = 0;

    readbuf[0] = BC_ENTER_LOOPER;
    binder_write(bs, readbuf, sizeof(uint32_t));

    for (;;) {
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (uintptr_t) readbuf;

        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);

        if (res < 0) {
            ALOGE("binder_loop: ioctl failed (%s)\n", strerror(errno));
            break;
        }

        res = binder_parse(bs, 0, (uintptr_t) readbuf, bwr.read_consumed, func);
        if (res == 0) {
            ALOGE("binder_loop: unexpected reply?!\n");
            break;
        }
        if (res < 0) {
            ALOGE("binder_loop: io error %d %s\n", res, strerror(errno));
            break;
        }
    }
}

It is straightforward: after announcing BC_ENTER_LOOPER, it loops forever, reading from the previously opened fd through the BINDER_WRITE_READ ioctl; whenever data arrives it is parsed by binder_parse, which invokes the registered callback, in this case svcmgr_handler.

Now we can see that servicemanager is the central process of Binder. Its implementation boils down to opening the binder device, mapping a block of memory, registering itself as the context manager, and then continuously reading requests through that handle and serving them. That, in essence, is Binder: a shared block of memory, with one end writing and the other end reading.
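
The binder_write helper used above to send BC_ENTER_LOOPER is itself just a thin wrapper around the same BINDER_WRITE_READ ioctl, with the read half left empty. A sketch of it, from the same binder.c (may differ slightly by version):

int binder_write(struct binder_state *bs, void *data, size_t len)
{
    struct binder_write_read bwr;
    int res;

    /* only the write half is filled in; nothing is read back here */
    bwr.write_size = len;
    bwr.write_consumed = 0;
    bwr.write_buffer = (uintptr_t) data;
    bwr.read_size = 0;
    bwr.read_consumed = 0;
    bwr.read_buffer = 0;

    res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
    if (res < 0) {
        fprintf(stderr, "binder_write: ioctl failed (%s)\n", strerror(errno));
    }
    return res;
}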


Add Service

We said earlier that ServiceManager plays the role of the directory. But even a directory must collect information about the services beforehand in order to know they exist. How does that happen?

Every Native Service, when it starts, must announce its existence to ServiceManager. Taking surfaceflinger as the example again, look at its startup function:

int main(int, char**) {
    // When SF is launched in its own process, limit the number of
    // binder threads to 4.
    ProcessState::self()->setThreadPoolMaxThreadCount(4);

    // start the thread pool
    sp<ProcessState> ps(ProcessState::self());
    ps->startThreadPool();

    // instantiate surfaceflinger
    sp<SurfaceFlinger> flinger = new SurfaceFlinger();

    setpriority(PRIO_PROCESS, 0, PRIORITY_URGENT_DISPLAY);

    set_sched_policy(0, SP_FOREGROUND);

    // initialize before clients can connect
    flinger->init();

    // publish surface flinger
    sp<IServiceManager> sm(defaultServiceManager());
    sm->addService(String16(SurfaceFlinger::getServiceName()), flinger, false);

    // run in this thread
    flinger->run();

    return 0;
}

First, a ProcessState instance is created and its binder thread pool is limited to 4 threads. A SurfaceFlinger object is then constructed: this object is the surfaceflinger service we introduced earlier. Next, the code obtains the IServiceManager proxy for ServiceManager and kicks off a Binder transaction, namely addService.

If you view Binder communication as a client-server model, then even though SurfaceFlinger is a service, it plays the client role when registering with ServiceManager.

So SurfaceFlinger obtains, via defaultServiceManager(), ServiceManager's proxy in this process, i.e. the client end of the Binder, and calls addService on it directly, which leads to:

virtual status_t addService(const String16& name, const sp<IBinder>& service,
        bool allowIsolated)
{
    Parcel data, reply;
    data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
    data.writeString16(name);
    data.writeStrongBinder(service);
    data.writeInt32(allowIsolated ? 1 : 0);
    status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);
    return err == NO_ERROR ? reply.readExceptionCode() : err;
}

Here the proxy writes the interface token obtained from getInterfaceDescriptor(), the service name, and the binder object itself into a Parcel, and hands it to remote()->transact().

At the other end, ServiceManager picks up this transaction from the binder fd, and so:

case SVC_MGR_ADD_SERVICE:
    s = bio_get_string16(msg, &len);
    if (s == NULL) {
        return -1;
    }
    handle = bio_get_ref(msg);
    allow_isolated = bio_get_uint32(msg) ? 1 : 0;
    if (do_add_service(bs, s, len, handle, txn->sender_euid,
        allow_isolated, txn->sender_pid))
        return -1;
    break;

Inside the svcmgr_handler callback we discussed earlier, execution reaches the code fragment above, and then:

int do_add_service(struct binder_state *bs,
                   const uint16_t *s, size_t len,
                   uint32_t handle, uid_t uid, int allow_isolated,
                   pid_t spid)
{
    struct svcinfo *si;

    //ALOGI("add_service('%s',%x,%s) uid=%d\n", str8(s, len), handle,
    //        allow_isolated ? "allow_isolated" : "!allow_isolated", uid);

    if (!handle || (len == 0) || (len > 127))
        return -1;

    if (!svc_can_register(s, len, spid)) {
        ALOGE("add_service('%s',%x) uid=%d - PERMISSION DENIED\n",
             str8(s, len), handle, uid);
        return -1;
    }

    si = find_svc(s, len);
    if (si) {
        if (si->handle) {
            ALOGE("add_service('%s',%x) uid=%d - ALREADY REGISTERED, OVERRIDE\n",
                 str8(s, len), handle, uid);
            svcinfo_death(bs, si);
        }
        si->handle = handle;
    } else {
        si = malloc(sizeof(*si) + (len + 1) * sizeof(uint16_t));
        if (!si) {
            ALOGE("add_service('%s',%x) uid=%d - OUT OF MEMORY\n",
                 str8(s, len), handle, uid);
            return -1;
        }
        si->handle = handle;
        si->len = len;
        memcpy(si->name, s, (len + 1) * sizeof(uint16_t));
        si->name[len] = '\0';
        si->death.func = (void*) svcinfo_death;
        si->death.ptr = si;
        si->allow_isolated = allow_isolated;
        si->next = svclist;
        svclist = si;
    }

    binder_acquire(bs, handle);
    binder_link_to_death(bs, handle, &si->death);
    return 0;
}

If the service already exists, its handle is simply updated; if not, ServiceManager allocates a record for it and prepends it to the svclist list. It is easy to see that svclist therefore holds every Native Service currently registered.
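
For reference, the per-service record that svclist links together looks roughly like this (struct svcinfo from service_manager.c, reproduced from memory, so the exact fields may vary by version):

struct svcinfo
{
    struct svcinfo *next;        /* next entry in svclist */
    uint32_t handle;             /* binder handle of the service */
    struct binder_death death;   /* death-notification bookkeeping */
    int allow_isolated;          /* may isolated processes access it? */
    size_t len;                  /* name length, in uint16_t units */
    uint16_t name[0];            /* UTF-16 service name, stored inline */
};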

With that, the service has been added: the directory now holds the Service's handle.


BpRefBase vs BBinder, BnINTERFACE vs BpINTERFACE

Through the discussion so far we have met four classes: BpRefBase, BBinder, BnInterface, and BpInterface. A quick look at the code reveals their inheritance relationship:
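
Since the relationship is easiest to see in code, here is a simplified sketch of the relevant declarations (abridged from Binder.h and IInterface.h; the real headers carry many more members):

// The local, "native" side: a BBinder is the object that receives transactions.
class BBinder : public IBinder { /* ... */ };

// The proxy side: BpRefBase holds the remote IBinder (usually a BpBinder(handle)).
class BpRefBase : public virtual RefBase {
protected:
    inline IBinder* remote() { return mRemote; }
private:
    IBinder* const mRemote;
};

template<typename INTERFACE>
class BnInterface : public INTERFACE, public BBinder { /* service side */ };

template<typename INTERFACE>
class BpInterface : public INTERFACE, public BpRefBase { /* proxy side */ };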

As you can see, BnInterface derives from BBinder while BpInterface derives from BpRefBase. Put simply, BBinder is the core of a service: the real Native Service is a BBinder. BpRefBase, on the other hand, is the far end of the Binder held by the client. Open the code of any Native Service and you will see that the Bn side overrides onTransact, while the Bp side implements the same interface methods as proxies. So what exactly is the difference?

The Bn side's onTransact handles requests coming from clients, while the Bp side is what clients use to send requests to the service. Take ISurfaceComposer as the example:

Here IInterface is the part that both Bn and Bp must implement, i.e. the "protocol" we mentioned earlier. Alongside ISurfaceComposer there are two companion classes, one Bn and one Bp: BnSurfaceComposer handles requests arriving from clients, while BpSurfaceComposer is the Binder end a client obtains through SurfaceFlinger's handle and uses to send messages to SurfaceFlinger.
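
To make the pattern concrete, here is a minimal, hypothetical interface written in the same style (IHello and every name in it are invented purely for illustration; real interfaces such as ISurfaceComposer have the same shape, just with many more methods):

// The shared "protocol" that both ends implement.
class IHello : public IInterface {
public:
    DECLARE_META_INTERFACE(Hello);
    enum { SAY_HELLO = IBinder::FIRST_CALL_TRANSACTION };
    virtual status_t sayHello(const String16& who) = 0;
};

// Proxy side: packs the arguments into a Parcel and ships them across the Binder.
class BpHello : public BpInterface<IHello> {
public:
    BpHello(const sp<IBinder>& impl) : BpInterface<IHello>(impl) {}
    virtual status_t sayHello(const String16& who) {
        Parcel data, reply;
        data.writeInterfaceToken(IHello::getInterfaceDescriptor());
        data.writeString16(who);
        return remote()->transact(SAY_HELLO, data, &reply);
    }
};

// Service side: unpacks the Parcel and dispatches to the real implementation.
class BnHello : public BnInterface<IHello> {
public:
    virtual status_t onTransact(uint32_t code, const Parcel& data,
                                Parcel* reply, uint32_t flags) {
        switch (code) {
        case SAY_HELLO:
            CHECK_INTERFACE(IHello, data, reply);
            return sayHello(data.readString16());
        default:
            return BBinder::onTransact(code, data, reply, flags);
        }
    }
};

// A concrete service (like SurfaceFlinger) derives from BnHello and implements
// sayHello(); IMPLEMENT_META_INTERFACE(Hello, "demo.IHello") in the .cpp would
// supply descriptor and asInterface().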


How It Works

Now that we are familiar with this architecture, let's continue with addService as the example. Earlier we deliberately skipped one step: how does the addService call actually reach ServiceManager?

Back to the code:

    sp<IServiceManager> sm(defaultServiceManager());
    sm->addService(String16(SurfaceFlinger::getServiceName()), flinger, false);

The first line obtains the Bp proxy of IServiceManager. Look at the implementation of defaultServiceManager:

sp<IServiceManager> defaultServiceManager()
{
    if (gDefaultServiceManager != NULL) return gDefaultServiceManager;

    {
        AutoMutex _l(gDefaultServiceManagerLock);
        while (gDefaultServiceManager == NULL) {
            gDefaultServiceManager = interface_cast<IServiceManager>(
                ProcessState::self()->getContextObject(NULL));
            if (gDefaultServiceManager == NULL)
                sleep(1);
        }
    }

    return gDefaultServiceManager;
}

sp<IBinder> ProcessState::getContextObject(const sp<IBinder>& /*caller*/)
{
    return getStrongProxyForHandle(0);
} 

sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
    sp<IBinder> result;

    AutoMutex _l(mLock);

    handle_entry* e = lookupHandleLocked(handle);

    if (e != NULL) {
        // We need to create a new BpBinder if there isn't currently one, OR we
        // are unable to acquire a weak reference on this current one.  See comment
        // in getWeakProxyForHandle() for more info about this.
        IBinder* b = e->binder;
        if (b == NULL || !e->refs->attemptIncWeak(this)) {
            if (handle == 0) {
                // Special case for context manager...
                // The context manager is the only object for which we create
                // a BpBinder proxy without already holding a reference.
                // Perform a dummy transaction to ensure the context manager
                // is registered before we create the first local reference
                // to it (which will occur when creating the BpBinder).
                // If a local reference is created for the BpBinder when the
                // context manager is not present, the driver will fail to
                // provide a reference to the context manager, but the
                // driver API does not return status.
                //
                // Note that this is not race-free if the context manager
                // dies while this code runs.
                //
                // TODO: add a driver API to wait for context manager, or
                // stop special casing handle 0 for context manager and add
                // a driver API to get a handle to the context manager with
                // proper reference counting.

                Parcel data;
                status_t status = IPCThreadState::self()->transact(
                        0, IBinder::PING_TRANSACTION, data, NULL, 0);
                if (status == DEAD_OBJECT)
                   return NULL;
            }

            b = new BpBinder(handle);
            e->binder = b;
            if (b) e->refs = b->getWeakRefs();

As you can see, getContextObject(NULL) simply returns a BpBinder(0). Now look at the definition of interface_cast:

template<typename INTERFACE>
inline sp<INTERFACE> interface_cast(const sp<IBinder>& obj)
{
    return INTERFACE::asInterface(obj);
}
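
interface_cast therefore forwards to IServiceManager::asInterface, which is generated by the IMPLEMENT_META_INTERFACE macro. A simplified sketch of what that expansion does (reconstructed from memory, not the literal macro text):

// Simplified expansion of IMPLEMENT_META_INTERFACE(ServiceManager, ...).
sp<IServiceManager> IServiceManager::asInterface(const sp<IBinder>& obj)
{
    sp<IServiceManager> intr;
    if (obj != NULL) {
        // If obj were the local BnServiceManager in this process, use it directly...
        intr = static_cast<IServiceManager*>(
            obj->queryLocalInterface(IServiceManager::descriptor).get());
        if (intr == NULL) {
            // ...otherwise wrap the remote handle in a proxy.
            intr = new BpServiceManager(obj);
        }
    }
    return intr;
}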

So the call turns into BpServiceManager(BpBinder(0)), and that is our sm. Next:

    sm->addService(String16(SurfaceFlinger::getServiceName()), flinger, false);

This call lands in BpServiceManager's addService method:

virtual status_t addService(const String16& name, const sp<IBinder>& service,
        bool allowIsolated)
{
    Parcel data, reply;
    data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
    data.writeString16(name);
    data.writeStrongBinder(service);
    data.writeInt32(allowIsolated ? 1 : 0);
    status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);
    return err == NO_ERROR ? reply.readExceptionCode() : err;
}

As you can see, once all the data is prepared, it is handed to remote()->transact for execution. So what is remote()?

We said just now that BpInterface derives from BpRefBase, so the answer is easy to find in BpRefBase's constructor:

BpRefBase::BpRefBase(const sp<IBinder>& o)
    : mRemote(o.get()), mRefs(NULL), mState(0)
{
    extendObjectLifetime(OBJECT_LIFETIME_WEAK);

    if (mRemote) {
        mRemote->incStrong(this);           // Removed on first IncStrong().
        mRefs = mRemote->createWeak(this);  // Held for our entire lifetime.
    }
}

inline  IBinder*        remote()                { return mRemote; }


Compare this with BpServiceManager's constructor:

class BpServiceManager : public BpInterface<IServiceManager>
{
public:
    BpServiceManager(const sp<IBinder>& impl)
        : BpInterface<IServiceManager>(impl)
    {
    }

It is clear that remote() returns mRemote, and mRemote is the sp<IBinder> passed into the constructor; in our example that is BpBinder(0). So let's look at BpBinder's transact:

status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    // Once a binder has died, it will never come back to life.
    if (mAlive) {
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }

    return DEAD_OBJECT;
}

At this point a new class, IPCThreadState, clocks in.


IPCThreadState

IPCThreadState is the real workhorse of Binder communication. It holds two Parcel members, mIn and mOut: mOut queues the commands to be sent, while mIn receives the data coming back. By repeatedly sending and receiving through them, it carries the core read/write traffic of Binder:

status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags)
{
    status_t err = data.errorCheck();

    flags |= TF_ACCEPT_FDS;

    IF_LOG_TRANSACTIONS() {
        TextOutput::Bundle _b(alog);
        alog << "BC_TRANSACTION thr " << (void*)pthread_self() << " / hand "
            << handle << " / code " << TypeCode(code) << ": "
            << indent << data << dedent << endl;
    }

    if (err == NO_ERROR) {
        LOG_ONEWAY(">>>> SEND from pid %d uid %d %s", getpid(), getuid(),
            (flags & TF_ONE_WAY) == 0 ? "READ REPLY" : "ONE WAY");
        err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
    }

    if (err != NO_ERROR) {
        if (reply) reply->setError(err);
        return (mLastError = err);
    }

    if ((flags & TF_ONE_WAY) == 0) {
        #if 0
        if (code == 4) { // relayout
            ALOGI(">>>>>> CALLING transaction 4");
        } else {
            ALOGI(">>>>>> CALLING transaction %d", code);
        }
        #endif
        if (reply) {
            err = waitForResponse(reply);
        } else {
            Parcel fakeReply;
            err = waitForResponse(&fakeReply);
        }
        #if 0
        if (code == 4) { // relayout
            ALOGI("<<<<<< RETURNING transaction 4");
        } else {
            ALOGI("<<<<<< RETURNING transaction %d", code);
        }
        #endif

        IF_LOG_TRANSACTIONS() {
            TextOutput::Bundle _b(alog);
            alog << "BR_REPLY thr " << (void*)pthread_self() << " / hand "
                << handle << ": ";
            if (reply) alog << indent << *reply << dedent << endl;
            else alog << "(none requested)" << endl;
        }
    } else {
        err = waitForResponse(NULL, NULL);
    }

    return err;
}

Quite simple: it calls writeTransactionData to queue the transaction, then enters waitForResponse to wait for the reply, and with that the whole Binder communication path becomes clear.
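
Roughly speaking, writeTransactionData packs the command plus a binder_transaction_data describing the Parcel into mOut; waitForResponse then repeatedly calls talkWithDriver(), which hands mOut/mIn to the driver with the same BINDER_WRITE_READ ioctl we saw in binder_loop. A condensed sketch of the send half (simplified; the real function in IPCThreadState.cpp carries more fields and error handling):

// Condensed sketch of the send path inside IPCThreadState (simplified).
status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* /*statusBuffer*/)
{
    binder_transaction_data tr;

    tr.target.handle = handle;                 // which remote object; 0 == servicemanager
    tr.code = code;                            // e.g. ADD_SERVICE_TRANSACTION
    tr.flags = binderFlags;
    tr.data_size = data.ipcDataSize();         // flattened Parcel payload
    tr.data.ptr.buffer = data.ipcData();
    tr.offsets_size = data.ipcObjectsCount() * sizeof(binder_size_t);
    tr.data.ptr.offsets = data.ipcObjects();

    mOut.writeInt32(cmd);                      // BC_TRANSACTION
    mOut.write(&tr, sizeof(tr));               // flushed by talkWithDriver() later
    return NO_ERROR;
}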


More

Everything above used surfaceflinger and servicemanager as the protagonists. As mentioned earlier, surfaceflinger plays the client role when registering with servicemanager, but when other processes need its support it plays the server role. The interaction between an ordinary service and its clients is therefore analogous to the interaction between surfaceflinger and servicemanager, and you can work through the code yourself to confirm it; a sketch of such a client follows below.
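
As a hedged illustration (the service name "SurfaceFlinger" matches SurfaceFlinger::getServiceName(), but the snippet itself is illustrative rather than copied from any single file), a client-side lookup typically looks like this:

// Illustrative client-side lookup; it mirrors addService in reverse.
sp<IServiceManager> sm = defaultServiceManager();               // BpServiceManager(BpBinder(0))
sp<IBinder> binder = sm->getService(String16("SurfaceFlinger"));
sp<ISurfaceComposer> composer = interface_cast<ISurfaceComposer>(binder);
// Calls made on composer now travel BpSurfaceComposer -> BpBinder(handle)
// -> IPCThreadState::transact -> binder driver -> BnSurfaceComposer::onTransact
// inside the surfaceflinger process.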

