This post walks through how a native service is registered. To make it concrete, MediaPlayer Service is used as the example.
The code that starts and registers MediaPlayer Service lives in frameworks/base/media/mediaserver/main_mediaserver.cpp:
main_mediaserver.cpp
```cpp
int main(int argc, char** argv)
{
    sp<ProcessState> proc(ProcessState::self());
    sp<IServiceManager> sm = defaultServiceManager();
    LOGI("ServiceManager: %p", sm.get());
    AudioFlinger::instantiate();
    MediaPlayerService::instantiate();
    CameraService::instantiate();
    AudioPolicyService::instantiate();
    ProcessState::self()->startThreadPool();
    IPCThreadState::self()->joinThreadPool();
}
```
On the surface the startup code looks remarkably simple, but that is only because it is well encapsulated; the calls underneath are quite involved. We will take them apart one by one.
1. sp<ProcessState> proc(ProcessState::self());
First, look at the ProcessState::self() method:
ProcessState::self()
```cpp
sp<ProcessState> ProcessState::self()
{
    if (gProcess != NULL) return gProcess;
    AutoMutex _l(gProcessMutex);
    if (gProcess == NULL) gProcess = new ProcessState;
    return gProcess;
}
```
Clearly, gProcess is a global variable. Because ProcessState's constructor is private, a ProcessState instance can only be created through this static function. Next, its constructor:
ProcessState::ProcessState()
```cpp
ProcessState::ProcessState()
    : mDriverFD(open_driver())
    , mVMStart(MAP_FAILED)
    , mManagesContexts(false)
    , mBinderContextCheckFunc(NULL)
    , mBinderContextUserData(NULL)
    , mThreadPoolStarted(false)
    , mThreadPoolSeq(1)
{
    if (mDriverFD >= 0) {
        // XXX Ideally, there should be a specific define for whether we
        // have mmap (or whether we could possibly have the kernel module
        // available).
#if !defined(HAVE_WIN32_IPC)
        // mmap the binder, providing a chunk of virtual address space to receive transactions.
        mVMStart = mmap(0, BINDER_VM_SIZE, PROT_READ, MAP_PRIVATE | MAP_NORESERVE, mDriverFD, 0);
        if (mVMStart == MAP_FAILED) {
            // *sigh*
            LOGE("Using /dev/binder failed: unable to mmap transaction memory.\n");
            close(mDriverFD);
            mDriverFD = -1;
        }
#else
        mDriverFD = -1;
#endif
    }
    if (mDriverFD < 0) {
        // Need to run without the driver, starting our own thread pool.
    }
}
```
The constructor body is fairly simple — essentially a call to mmap() — but a lot happens in the initializer list. Besides setting initial values for the member variables, the key step is open_driver(), whose return value initializes mDriverFD.
open_driver() opens "/dev/binder" and returns a file descriptor (really just an integer) into mDriverFD. Its code:
open_driver()
```cpp
static int open_driver()
{
    if (gSingleProcess) {
        return -1;
    }
    int fd = open("/dev/binder", O_RDWR);
    if (fd >= 0) {
        fcntl(fd, F_SETFD, FD_CLOEXEC);
        int vers;
#if defined(HAVE_ANDROID_OS)
        status_t result = ioctl(fd, BINDER_VERSION, &vers);
#else
        status_t result = -1;
        errno = EPERM;
#endif
        if (result == -1) {
            LOGE("Binder ioctl to obtain version failed: %s", strerror(errno));
            close(fd);
            fd = -1;
        }
        if (result != 0 || vers != BINDER_CURRENT_PROTOCOL_VERSION) {
            LOGE("Binder driver protocol does not match user space protocol!");
            close(fd);
            fd = -1;
        }
#if defined(HAVE_ANDROID_OS)
        size_t maxThreads = 15;
        result = ioctl(fd, BINDER_SET_MAX_THREADS, &maxThreads);
        if (result == -1) {
            LOGE("Binder ioctl to set max threads failed: %s", strerror(errno));
        }
#endif
    } else {
        LOGW("Opening '/dev/binder' failed: %s\n", strerror(errno));
    }
    return fd;
}
```
open_driver() is longer, but it really does just two things. First, it opens the /dev/binder device, a virtual device Android installs in the kernel specifically for inter-process communication. Second, result = ioctl(fd, BINDER_SET_MAX_THREADS, &maxThreads); tells the kernel, via ioctl(), that at most 15 threads will service this fd.
Back in the ProcessState constructor, mmap() is straightforward: using the file descriptor just returned, it maps a region of the process's address space onto a buffer managed by the binder driver in kernel space. Since a user-space process cannot touch kernel memory directly, this mapped region is its only window into it. When mmap() is called here, a region of the requested size is carved out starting at address 0x40000000, and the kernel's binder_mmap() function is then invoked.
In Android, the kernel address range and the region mmap() allocates from are laid out in advance: Android prelinks its libraries, fixing the address each one will be linked at. The layout can be seen in /build/core/prelink-linux-arm.map:
# 0xC0000000 - 0xFFFFFFFF Kernel
# 0xB0100000 - 0xBFFFFFFF Thread 0 Stack
# 0xB0000000 - 0xB00FFFFF Linker
# 0xA0000000 - 0xBFFFFFFF Prelinked System Libraries
# 0x90000000 - 0x9FFFFFFF Prelinked App Libraries
# 0x80000000 - 0x8FFFFFFF Non-prelinked Libraries
# 0x40000000 - 0x7FFFFFFF mmap'd stuff
# 0x10000000 - 0x3FFFFFFF Thread Stacks
# 0x00000000 - 0x0FFFFFFF .text / .data / heap
The "mmap'd stuff" range is where mmap() mappings begin.
That completes the first line of main(). To summarize, it does two things:
- opens the /dev/binder device and, using the returned file descriptor, maps a region of the process's address space onto a region managed by the kernel;
- creates a new ProcessState object.
The whole creation sequence can be summarized in a simple diagram.
2. sp<IServiceManager> sm = defaultServiceManager();
The code of defaultServiceManager() is as follows:
defaultServiceManager()
```cpp
sp<IServiceManager> defaultServiceManager()
{
    if (gDefaultServiceManager != NULL) return gDefaultServiceManager;
    {
        AutoMutex _l(gDefaultServiceManagerLock);
        if (gDefaultServiceManager == NULL) {
            gDefaultServiceManager = interface_cast<IServiceManager>(
                ProcessState::self()->getContextObject(NULL));
        }
    }
    return gDefaultServiceManager;
}
```
This is the singleton pattern again: gDefaultServiceManager is created exactly once. ProcessState::self() was analyzed above, so let's look at ProcessState::getContextObject():
ProcessState::getContextObject()
```cpp
sp<IBinder> ProcessState::getContextObject(const sp<IBinder>& caller)
{
    if (supportsProcesses()) {
        return getStrongProxyForHandle(0);
    } else {
        return getContextObject(String16("default"), caller);
    }
}
```
On real hardware supportsProcesses() is true, so execution enters getStrongProxyForHandle():
ProcessState::getStrongProxyForHandle()
```cpp
sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
    sp<IBinder> result;
    AutoMutex _l(mLock);
    handle_entry* e = lookupHandleLocked(handle);
    if (e != NULL) {
        // We need to create a new BpBinder if there isn't currently one, OR we
        // are unable to acquire a weak reference on this current one. See comment
        // in getWeakProxyForHandle() for more info about this.
        IBinder* b = e->binder;
        if (b == NULL || !e->refs->attemptIncWeak(this)) {
            b = new BpBinder(handle);
            e->binder = b;
            if (b) e->refs = b->getWeakRefs();
            result = b;
        } else {
            // This little bit of nastyness is to allow us to add a primary
            // reference to the remote proxy when this team doesn't have one
            // but another team is sending the handle to us.
            result.force_set(b);
            e->refs->decWeak(this);
        }
    }
    return result;
}
```
First, the lookupHandleLocked() method it calls:
ProcessState::lookupHandleLocked(int32_t)
```cpp
ProcessState::handle_entry* ProcessState::lookupHandleLocked(int32_t handle)
{
    const size_t N = mHandleToObject.size();
    if (N <= (size_t)handle) {
        handle_entry e;
        e.binder = NULL;
        e.refs = NULL;
        status_t err = mHandleToObject.insertAt(e, N, handle + 1 - N);
        if (err < NO_ERROR) return NULL;
    }
    return &mHandleToObject.editItemAt(handle);
}
```
mHandleToObject is a Vector — conceptually the same as a C++ STL vector, i.e. a sequence of handle_entry objects. Since the handle passed in is 0 and the vector starts empty, N <= (size_t)handle holds, so a handle_entry is created and inserted into mHandleToObject; its binder member is NULL. Returning to ProcessState::getStrongProxyForHandle(), a BpBinder object is therefore created, as the comment there also indicates.
So here, gDefaultServiceManager = interface_cast<IServiceManager>(ProcessState::self()->getContextObject(NULL)); is equivalent to:
```cpp
gDefaultServiceManager = interface_cast<IServiceManager>(new BpBinder(0));
```
Next, the BpBinder constructor:
BpBinder::BpBinder(int32_t)
```cpp
BpBinder::BpBinder(int32_t handle)
    : mHandle(handle)
    , mAlive(1)
    , mObitsSent(0)
    , mObituaries(NULL)
{
    LOGV("Creating BpBinder %p handle %d\n", this, mHandle);
    extendObjectLifetime(OBJECT_LIFETIME_WEAK);
    IPCThreadState::self()->incWeakHandle(handle);
}
```
The initializer list is simple; the interesting part is IPCThreadState::self()->incWeakHandle(handle). Here is IPCThreadState::self():
IPCThreadState::self()
```cpp
IPCThreadState* IPCThreadState::self()
{
    if (gHaveTLS) {
restart:
        const pthread_key_t k = gTLS;
        IPCThreadState* st = (IPCThreadState*)pthread_getspecific(k);
        if (st) return st;
        return new IPCThreadState;
    }
    if (gShutdown) return NULL;
    pthread_mutex_lock(&gTLSMutex);
    if (!gHaveTLS) {
        if (pthread_key_create(&gTLS, threadDestructor) != 0) {
            pthread_mutex_unlock(&gTLSMutex);
            return NULL;
        }
        gHaveTLS = true;
    }
    pthread_mutex_unlock(&gTLSMutex);
    goto restart;
}
```
gHaveTLS records whether a Thread Local Storage key exists yet. On first entry it does not, so execution falls through to the bottom, where pthread_key_create(&gTLS, threadDestructor) creates a TLS key and registers the matching destructor. On subsequent calls, IPCThreadState::self() only needs pthread_getspecific(k) to fetch the per-thread object stored under that key.
Next, the IPCThreadState constructor:
IPCThreadState::IPCThreadState()
```cpp
IPCThreadState::IPCThreadState()
    : mProcess(ProcessState::self()), mMyThreadId(androidGetTid())
{
    pthread_setspecific(gTLS, this);
    clearCaller();
    mIn.setDataCapacity(256);
    mOut.setDataCapacity(256);
}
```
The initializer list assigns mProcess and mMyThreadId; mIn and mOut are the Parcel objects used to communicate with the Binder driver. Then comes pthread_setspecific(), which in bionic looks like this:
```cpp
int pthread_setspecific(pthread_key_t key, const void *ptr)
{
    int err = EINVAL;
    tlsmap_t* map;
    if (TLSMAP_VALIDATE_KEY(key)) {
        /* check that we're trying to set data for an allocated key */
        map = tlsmap_lock();
        if (tlsmap_test(map, key)) {
            ((uint32_t*)__get_tls())[key] = (uint32_t)ptr;
            err = 0;
        }
        tlsmap_unlock(map);
    }
    return err;
}
```
So pthread_setspecific(gTLS, this); simply stores the current IPCThreadState object in the TLS map under the key gTLS.
To recap, defaultServiceManager() has obtained a BpBinder with handle 0 and wrapped it for use, creating the calling thread's IPCThreadState along the way.
So how is gDefaultServiceManager = interface_cast<IServiceManager>(new BpBinder(0)); actually implemented?
First, the interface_cast template function:
```cpp
template<typename INTERFACE>
inline sp<INTERFACE> interface_cast(const sp<IBinder>& obj)
{
    return INTERFACE::asInterface(obj);
}
```
You will not find asInterface() written out anywhere, though; it comes from a pair of macros:
```cpp
DECLARE_META_INTERFACE(ServiceManager);
IMPLEMENT_META_INTERFACE(ServiceManager, "android.os.IServiceManager")
```
The former declares the interface machinery and the latter implements it. Expanding IMPLEMENT_META_INTERFACE yields the following asInterface():
IServiceManager::asInterface()
```cpp
sp<IServiceManager> IServiceManager::asInterface(const sp<IBinder>& obj)
{
    sp<IServiceManager> intr;
    if (obj != NULL) {
        intr = static_cast<IServiceManager*>(
            obj->queryLocalInterface(IServiceManager::descriptor).get());
        if (intr == NULL) {
            intr = new BpServiceManager(obj);
        }
    }
    return intr;
}
```
IBinder's queryLocalInterface() behaves differently depending on whether obj is a BBinder or a BpBinder: for a BBinder it casts to the service object itself; for a BpBinder it returns NULL. Here obj is a BpBinder, so NULL comes back and a new BpServiceManager is created. Its constructor:
```cpp
BpServiceManager(const sp<IBinder>& impl)
    : BpInterface<IServiceManager>(impl)
{
}
```
Very simple: the BpBinder object is handed to BpInterface, whose constructor is:
```cpp
template<typename INTERFACE>
inline BpInterface<INTERFACE>::BpInterface(const sp<IBinder>& remote)
    : BpRefBase(remote)
{
}
```
The BpBinder object is passed along once more, to BpRefBase, whose constructor is:
BpRefBase::BpRefBase()
```cpp
BpRefBase::BpRefBase(const sp<IBinder>& o)
    : mRemote(o.get()), mRefs(NULL), mState(0)
{
    extendObjectLifetime(OBJECT_LIFETIME_WEAK);
    if (mRemote) {
        mRemote->incStrong(this);
        mRefs = mRemote->createWeak(this);
    }
}
```
So the BpBinder object finally ends up stored in mRemote. o.get() is a method of the template class sp, Google's smart-pointer class for managing object pointers — read sp as Strong Pointer, with wp (Weak Pointer) as its counterpart; we will not expand on them here. The inheritance chain is: BpServiceManager derives from BpInterface<IServiceManager>, which derives from BpRefBase.
With that, sp<IServiceManager> sm = defaultServiceManager(); is fully analyzed: it returns a BpServiceManager object whose mRemote member is a BpBinder with handle 0.
3. MediaPlayerService::instantiate();
The method's code:
MediaPlayerService::instantiate()
```cpp
void MediaPlayerService::instantiate() {
    defaultServiceManager()->addService(
        String16("media.player"), new MediaPlayerService());
}
```
3.1 MediaPlayerService
First, the MediaPlayerService class itself. Its constructor is trivial, so we will not dwell on it. What matters is that MediaPlayerService inherits from BnMediaPlayerService, which overrides the virtual function onTransact():
BnMediaPlayerService::onTransact()
```cpp
status_t BnMediaPlayerService::onTransact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    switch (code) {
        case CREATE_URL: {
            CHECK_INTERFACE(IMediaPlayerService, data, reply);
            pid_t pid = data.readInt32();
            sp<IMediaPlayerClient> client =
                interface_cast<IMediaPlayerClient>(data.readStrongBinder());
            const char* url = data.readCString();
            KeyedVector<String8, String8> headers;
            int32_t numHeaders = data.readInt32();
            for (int i = 0; i < numHeaders; ++i) {
                String8 key = data.readString8();
                String8 value = data.readString8();
                headers.add(key, value);
            }
            sp<IMediaPlayer> player = create(
                pid, client, url, numHeaders > 0 ? &headers : NULL);
            reply->writeStrongBinder(player->asBinder());
            return NO_ERROR;
        } break;
        case CREATE_FD: {
            CHECK_INTERFACE(IMediaPlayerService, data, reply);
            pid_t pid = data.readInt32();
            sp<IMediaPlayerClient> client = interface_cast<IMediaPlayerClient>(data.readStrongBinder());
            int fd = dup(data.readFileDescriptor());
            int64_t offset = data.readInt64();
            int64_t length = data.readInt64();
            sp<IMediaPlayer> player = create(pid, client, fd, offset, length);
            reply->writeStrongBinder(player->asBinder());
            return NO_ERROR;
        } break;
        case DECODE_URL: {
            CHECK_INTERFACE(IMediaPlayerService, data, reply);
            const char* url = data.readCString();
            uint32_t sampleRate;
            int numChannels;
            int format;
            sp<IMemory> player = decode(url, &sampleRate, &numChannels, &format);
            reply->writeInt32(sampleRate);
            reply->writeInt32(numChannels);
            reply->writeInt32(format);
            reply->writeStrongBinder(player->asBinder());
            return NO_ERROR;
        } break;
        case DECODE_FD: {
            CHECK_INTERFACE(IMediaPlayerService, data, reply);
            int fd = dup(data.readFileDescriptor());
            int64_t offset = data.readInt64();
            int64_t length = data.readInt64();
            uint32_t sampleRate;
            int numChannels;
            int format;
            sp<IMemory> player = decode(fd, offset, length, &sampleRate, &numChannels, &format);
            reply->writeInt32(sampleRate);
            reply->writeInt32(numChannels);
            reply->writeInt32(format);
            reply->writeStrongBinder(player->asBinder());
            return NO_ERROR;
        } break;
        case SNOOP: {
            CHECK_INTERFACE(IMediaPlayerService, data, reply);
            sp<IMemory> snooped_audio = snoop();
            reply->writeStrongBinder(snooped_audio->asBinder());
            return NO_ERROR;
        } break;
        case CREATE_MEDIA_RECORDER: {
            CHECK_INTERFACE(IMediaPlayerService, data, reply);
            pid_t pid = data.readInt32();
            sp<IMediaRecorder> recorder = createMediaRecorder(pid);
            reply->writeStrongBinder(recorder->asBinder());
            return NO_ERROR;
        } break;
        case CREATE_METADATA_RETRIEVER: {
            CHECK_INTERFACE(IMediaPlayerService, data, reply);
            pid_t pid = data.readInt32();
            sp<IMediaMetadataRetriever> retriever = createMetadataRetriever(pid);
            reply->writeStrongBinder(retriever->asBinder());
            return NO_ERROR;
        } break;
        case GET_OMX: {
            CHECK_INTERFACE(IMediaPlayerService, data, reply);
            sp<IOMX> omx = getOMX();
            reply->writeStrongBinder(omx->asBinder());
            return NO_ERROR;
        } break;
        default:
            return BBinder::onTransact(code, data, reply, flags);
    }
}
```
Clearly it dispatches to the matching callback based on the command (the code value). Given that BnMediaPlayerService in turn inherits from BnInterface, the corresponding UML diagram can be drawn.
3.2 addService()
Back in MediaPlayerService::instantiate(): defaultServiceManager() returns the global gDefaultServiceManager — the BpServiceManager we just created — and BpServiceManager::addService() looks like this:
BpServiceManager::addService()
```cpp
virtual status_t addService(const String16& name, const sp<IBinder>& service)
{
    Parcel data, reply;
    data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
    data.writeString16(name);
    data.writeStrongBinder(service);
    status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);
    return err == NO_ERROR ? reply.readInt32() : err;
}
```
Here service is the freshly created MediaPlayerService object. The Parcel data holds the outgoing payload: the interface token, the service name, and the flattened service object.
flat_binder_object is explained below.
3.2.1 writeStrongBinder()
reply, evidently, receives the returned data. Now into Parcel::writeStrongBinder():
Parcel::writeStrongBinder()
```cpp
status_t Parcel::writeStrongBinder(const sp<IBinder>& val)
{
    return flatten_binder(ProcessState::self(), val, this);
}
```
As analyzed earlier, ProcessState::self() returns the singleton. val here is really the MediaPlayerService object; since it indirectly inherits from BBinder, and BBinder inherits from IBinder, it can be used as an IBinder. Into flatten_binder():
Parcel::flatten_binder()
```cpp
status_t flatten_binder(const sp<ProcessState>& proc,
    const sp<IBinder>& binder, Parcel* out)
{
    flat_binder_object obj;
    obj.flags = 0x7f | FLAT_BINDER_FLAG_ACCEPTS_FDS;
    if (binder != NULL) {
        IBinder *local = binder->localBinder();
        if (!local) {
            BpBinder *proxy = binder->remoteBinder();
            if (proxy == NULL) {
                LOGE("null proxy");
            }
            const int32_t handle = proxy ? proxy->handle() : 0;
            obj.type = BINDER_TYPE_HANDLE;
            obj.handle = handle;
            obj.cookie = NULL;
        } else {
            obj.type = BINDER_TYPE_BINDER;
            obj.binder = local->getWeakRefs();
            obj.cookie = local;
        }
    } else {
        obj.type = BINDER_TYPE_BINDER;
        obj.binder = NULL;
        obj.cookie = NULL;
    }
    return finish_flatten_binder(binder, obj, out);
}
```
Because the current binder is a MediaPlayerService instance, which indirectly derives from BBinder, and BBinder::localBinder() returns this, local is non-NULL and the else branch runs. Here is the flat_binder_object structure:
struct flat_binder_object
```cpp
struct flat_binder_object {
    unsigned long type;
    unsigned long flags;
    union {
        void*       binder;
        signed long handle;
    };
    void* cookie;
};
```
After the else branch runs, type is BINDER_TYPE_BINDER, the binder member holds a weak reference to the MediaPlayerService object, and cookie points at the MediaPlayerService object itself. finish_flatten_binder() is then called to store the flat_binder_object into data.
Expanding finish_flatten_binder(binder, obj, out); would take too long here, so it is covered in a separate post, Android Binder機制分析(4) Parcel類分析.
That completes data.writeStrongBinder(service);. Next, remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply).
3.2.2 IPCThreadState::self()->transact()
Back in BpServiceManager::addService(): as noted earlier, remote() returns the BpBinder with handle 0, so remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply) is really a call to BpBinder::transact():
BpBinder::transact()
```cpp
status_t BpBinder::transact(uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    status_t status = IPCThreadState::self()->transact(
        mHandle, code, data, reply, flags);
    return status;
}
```
Here code is ADD_SERVICE_TRANSACTION, data is the Parcel analyzed above, reply receives the result, and flags defaults to 0. Into IPCThreadState::transact(), whose main code is:
IPCThreadState::transact()
```cpp
status_t IPCThreadState::transact(int32_t handle, uint32_t code, const Parcel& data,
    Parcel* reply, uint32_t flags)
{
    status_t err = data.errorCheck();
    flags |= TF_ACCEPT_FDS;
    err = writeTransactionData(BC_TRANSACTION, flags,
        handle, code, data, NULL);
    ...
    err = waitForResponse(reply);
    ...
    return err;
}
```
3.2.2.1 writeTransactionData()
The writeTransactionData() method:
IPCThreadState::writeTransactionData()
```cpp
status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
    binder_transaction_data tr;
    tr.target.handle = handle;
    tr.code = code;
    tr.flags = binderFlags;
    const status_t err = data.errorCheck();
    if (err == NO_ERROR) {
        tr.data_size = data.ipcDataSize();
        tr.data.ptr.buffer = data.ipcData();
        tr.offsets_size = data.ipcObjectsCount()*sizeof(size_t);
        tr.data.ptr.offsets = data.ipcObjects();
    } else if (statusBuffer) {
        tr.flags |= TF_STATUS_CODE;
        *statusBuffer = err;
        tr.data_size = sizeof(status_t);
        tr.data.ptr.buffer = statusBuffer;
        tr.offsets_size = 0;
        tr.data.ptr.offsets = NULL;
    } else {
        return (mLastError = err);
    }
    mOut.writeInt32(cmd);
    mOut.write(&tr, sizeof(tr));
    return NO_ERROR;
}
```
First, the definition of binder_transaction_data:
struct binder_transaction_data
```cpp
struct binder_transaction_data {
    union {
        size_t handle;
        void*  ptr;
    } target;
    void* cookie;
    unsigned int code;
    unsigned int flags;
    pid_t sender_pid;
    uid_t sender_euid;
    size_t data_size;
    size_t offsets_size;
    union {
        struct {
            const void* buffer;
            const void* offsets;
        } ptr;
        uint8_t buf[8];
    } data;
};
```
With that, writeTransactionData() is easy to follow:
- handle (0 here) is assigned to tr.target.handle;
- code (ADD_SERVICE_TRANSACTION) is assigned to tr.code;
- binderFlags (0) is assigned to tr.flags;
- data_size holds the return value of data.ipcDataSize(), which is:
Parcel::ipcDataSize() const
```cpp
size_t Parcel::ipcDataSize() const
{
    return (mDataSize > mDataPos ? mDataSize : mDataPos);
}
```
As analyzed in Android Binder機制(3) Parcel類分析, each string is preceded by its length (a 32-bit integer, 4 bytes), and each character of a written string occupies 2 bytes (not one). Adding the flat_binder_object gives data_size = 4 + 26×2 + 4 + 12×2 + 16 = 100.
At this point the binder_transaction_data object contains:
- buffer holds the pointer to mData, the sending Parcel's data member;
- offsets_size holds 4, the size of the data in the sending Parcel's mObjects member;
- offsets holds the pointer to the sending Parcel's mObjects member.
3.2.2.2 IPCThreadState::waitForResponse()
Its main code:
IPCThreadState::waitForResponse()
```cpp
status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    int32_t cmd;
    int32_t err;
    while (1) {
        if ((err = talkWithDriver()) < NO_ERROR) break;
        err = mIn.errorCheck();
        if (err < NO_ERROR) break;
        if (mIn.dataAvail() == 0) continue;
        cmd = mIn.readInt32();
        IF_LOG_COMMANDS() {
            alog << "Processing waitForResponse Command: "
                << getReturnString(cmd) << endl;
        }
        switch (cmd) {
        case BR_TRANSACTION_COMPLETE:
            if (!reply && !acquireResult) goto finish;
            break;
        case BR_DEAD_REPLY:
            err = DEAD_OBJECT;
            goto finish;
        case BR_FAILED_REPLY:
            err = FAILED_TRANSACTION;
            goto finish;
        case BR_ACQUIRE_RESULT:
            {
                LOG_ASSERT(acquireResult != NULL, "Unexpected brACQUIRE_RESULT");
                const int32_t result = mIn.readInt32();
                if (!acquireResult) continue;
                *acquireResult = result ? NO_ERROR : INVALID_OPERATION;
            }
            goto finish;
        case BR_REPLY:
            {
                binder_transaction_data tr;
                err = mIn.read(&tr, sizeof(tr));
                LOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY");
                if (err != NO_ERROR) goto finish;
                if (reply) {
                    if ((tr.flags & TF_STATUS_CODE) == 0) {
                        reply->ipcSetDataReference(
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(size_t),
                            freeBuffer, this);
                    } else {
                        err = *static_cast<const status_t*>(tr.data.ptr.buffer);
                        freeBuffer(NULL,
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(size_t), this);
                    }
                } else {
                    freeBuffer(NULL,
                        reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                        tr.data_size,
                        reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
                        tr.offsets_size/sizeof(size_t), this);
                    continue;
                }
            }
            goto finish;
        default:
            err = executeCommand(cmd);
            if (err != NO_ERROR) goto finish;
            break;
        }
    }
finish:
    if (err != NO_ERROR) {
        if (acquireResult) *acquireResult = err;
        if (reply) reply->setError(err);
        mLastError = err;
    }
    return err;
}
```
- talkWithDriver() is complex enough to warrant its own post; here is the upshot: it hands the Binder IPC data held in mOut to the Binder driver and stores the Binder IPC data coming back from the driver in mIn. Creating the binder_node object also happens inside this talkWithDriver() call; a later post covers it in detail.
- Next, mIn.readInt32() reads the Binder protocol command. The protocol data received from the Binder driver carries BR_REPLY, so the BR_REPLY branch of the switch runs.
- mIn.read(&tr, sizeof(tr)) then reads out the binder_transaction_data structure.
After IPCThreadState receives the Binder IPC data from the Binder driver, it is held in mIn. binder_transaction_data's buffer and offsets point into the Binder RPC data in the binder mmap region, and data_size is the size of the valid data in buffer. When the Context Manager handles the service registration, it returns 0 on success and -1 on failure.
ipcSetDataReference() is then called; its main code:
Parcel::ipcSetDataReference()
```cpp
void Parcel::ipcSetDataReference(const uint8_t* data, size_t dataSize,
    const size_t* objects, size_t objectsCount, release_func relFunc, void* relCookie)
{
    freeDataNoInit();
    mError = NO_ERROR;
    mData = const_cast<uint8_t*>(data);
    mDataSize = mDataCapacity = dataSize;
    //LOGI("setDataReference Setting data size of %p to %lu (pid=%d)\n", this, mDataSize, getpid());
    mDataPos = 0;
    LOGV("setDataReference Setting data pos of %p to %d\n", this, mDataPos);
    mObjects = const_cast<size_t*>(objects);
    mObjectsSize = mObjectsCapacity = objectsCount;
    mNextObjectHint = 0;
    mOwner = relFunc;
    mOwnerCookie = relCookie;
    scanForFds();
}
```
ipcSetDataReference() uses the received binder_transaction_data to set reply's principal members:
- mData holds buffer, the start address of the received Binder RPC data;
- mDataSize holds data_size, the size of the received Binder RPC data;
- mObjects holds offsets, the positions of the flat_binder_object structures within the Binder RPC data;
- mObjectsSize holds the number of flat_binder_object structures in the Binder RPC data.
That completes MediaPlayerService::instantiate();.
4. ProcessState::self()->startThreadPool();
As the name says, this starts the thread pool. Its code:
ProcessState::startThreadPool()
```cpp
void ProcessState::startThreadPool()
{
    AutoMutex _l(mLock);
    if (!mThreadPoolStarted) {
        mThreadPoolStarted = true;
        spawnPooledThread(true);
    }
}
```
At startup the pool has not been started, so mThreadPoolStarted == false and spawnPooledThread() is called:
ProcessState::spawnPooledThread()
```cpp
void ProcessState::spawnPooledThread(bool isMain)
{
    if (mThreadPoolStarted) {
        int32_t s = android_atomic_add(1, &mThreadPoolSeq);
        char buf[32];
        sprintf(buf, "Binder Thread #%d", s);
        LOGV("Spawning new pooled thread, name=%s\n", buf);
        sp<Thread> t = new PoolThread(isMain);
        t->run(buf);
    }
}
```
Note that isMain is passed in as true, marking this as the main thread.
The code is simple: since mThreadPoolStarted == true by now, a new PoolThread is created and run. buf, built with sprintf, is the character array naming the PoolThread. PoolThread does not actually implement run(); the call resolves to the base class Thread's run() method, which is simple enough that we skip it here.
5. IPCThreadState::self()->joinThreadPool();
joinThreadPool()'s isMain parameter defaults to true, so isMain is true here. The method's main code:
IPCThreadState::joinThreadPool()
```cpp
void IPCThreadState::joinThreadPool(bool isMain)
{
    ...
    mOut.writeInt32(isMain ? BC_ENTER_LOOPER : BC_REGISTER_LOOPER);
    // This thread may have been spawned by a thread that was in the background
    // scheduling group, so first we will make sure it is in the default/foreground
    // one to avoid performing an initial transaction in the background.
    androidSetThreadSchedulingGroup(mMyThreadId, ANDROID_TGROUP_DEFAULT);
    status_t result;
    do {
        int32_t cmd;
        // When we've cleared the incoming command queue, process any pending derefs
        if (mIn.dataPosition() >= mIn.dataSize()) {
            size_t numPending = mPendingWeakDerefs.size();
            if (numPending > 0) {
                for (size_t i = 0; i < numPending; i++) {
                    RefBase::weakref_type* refs = mPendingWeakDerefs[i];
                    refs->decWeak(mProcess.get());
                }
                mPendingWeakDerefs.clear();
            }
            numPending = mPendingStrongDerefs.size();
            if (numPending > 0) {
                for (size_t i = 0; i < numPending; i++) {
                    BBinder* obj = mPendingStrongDerefs[i];
                    obj->decStrong(mProcess.get());
                }
                mPendingStrongDerefs.clear();
            }
        }
        // now get the next command to be processed, waiting if necessary
        result = talkWithDriver();
        if (result >= NO_ERROR) {
            size_t IN = mIn.dataAvail();
            if (IN < sizeof(int32_t)) continue;
            cmd = mIn.readInt32();
            ...
            result = executeCommand(cmd);
        }
        // After executing the command, ensure that the thread is returned to the
        // default cgroup before rejoining the pool. The driver takes care of
        // restoring the priority, but doesn't do anything with cgroups so we
        // need to take care of that here in userspace. Note that we do make
        // sure to go in the foreground after executing a transaction, but
        // there are other callbacks into user code that could have changed
        // our group so we want to make absolutely sure it is put back.
        androidSetThreadSchedulingGroup(mMyThreadId, ANDROID_TGROUP_DEFAULT);
        // Let this thread exit the thread pool if it is no longer
        // needed and it is not the main process thread.
        if (result == TIMED_OUT && !isMain) {
            break;
        }
    } while (result != -ECONNREFUSED && result != -EBADF);
    ...
    mOut.writeInt32(BC_EXIT_LOOPER);
    talkWithDriver(false);
}
```
- Since isMain is true, BC_ENTER_LOOPER is written into mOut;
- androidSetThreadSchedulingGroup() then puts the current thread in the default scheduling group;
- the weak-reference housekeeping is discussed after executeCommand();
- the result of talkWithDriver() is obtained, then executeCommand() runs.
executeCommand()'s code:
IPCThreadState::executeCommand()
```cpp
status_t IPCThreadState::executeCommand(int32_t cmd)
{
    BBinder* obj;
    RefBase::weakref_type* refs;
    status_t result = NO_ERROR;
    switch (cmd) {
    ...
    case BR_TRANSACTION:
        {
            binder_transaction_data tr;
            result = mIn.read(&tr, sizeof(tr));
            ...
            Parcel buffer;
            buffer.ipcSetDataReference(
                reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                tr.data_size,
                reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
                tr.offsets_size/sizeof(size_t), freeBuffer, this);
            const pid_t origPid = mCallingPid;
            const uid_t origUid = mCallingUid;
            mCallingPid = tr.sender_pid;
            mCallingUid = tr.sender_euid;
            ...
            Parcel reply;
            ...
            if (tr.target.ptr) {
                sp<BBinder> b((BBinder*)tr.cookie);
                const status_t error = b->transact(tr.code, buffer, &reply, 0);
                if (error < NO_ERROR) reply.setError(error);
            } else {
                const status_t error = the_context_object->transact(tr.code, buffer, &reply, 0);
                if (error < NO_ERROR) reply.setError(error);
            }
            if ((tr.flags & TF_ONE_WAY) == 0) {
                LOG_ONEWAY("Sending reply to %d!", mCallingPid);
                sendReply(reply, 0);
            } else {
                LOG_ONEWAY("NOT sending reply to %d!", mCallingPid);
            }
            mCallingPid = origPid;
            mCallingUid = origUid;
            ...
        }
        break;
    ...
    }
    ...
    return result;
}
```
If there is a transaction to process when reading from the Binder driver, the returned command is BR_TRANSACTION, so only that case is shown. In the normal path, execution reaches b->transact(tr.code, buffer, &reply, 0);. Recall from flatten_binder() that cookie points to the newly created MediaPlayerService object; MediaPlayerService inherits from BnMediaPlayerService, which inherits from BBinder, and BBinder::transact() is:
BBinder::transact()
```cpp
status_t BBinder::transact(uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    data.setDataPosition(0);
    status_t err = NO_ERROR;
    switch (code) {
    case PING_TRANSACTION:
        reply->writeInt32(pingBinder());
        break;
    default:
        err = onTransact(code, data, reply, flags);
        break;
    }
    if (reply != NULL) {
        reply->setDataPosition(0);
    }
    return err;
}
```
Since BnMediaPlayerService overrides the virtual onTransact() method (listed in full in section 3.1 above), the default branch dispatches to BnMediaPlayerService::onTransact().
There, the action taken depends on the code. Take DECODE_FD: it calls decode(fd, offset, length, &sampleRate, &numChannels, &format), and decode() is implemented in MediaPlayerService — so the call finally lands in MediaPlayerService's own service function.
That finishes executeCommand(). Back in joinThreadPool(), all of this sits in a loop, so the thread keeps pulling commands off the queue and invoking the corresponding MediaPlayerService methods. With that, service registration is complete.