A Brief Analysis of Android Binder: CameraService Adds Its Service to ServiceManager
To better understand how Binder adds a service, please read the previous post first: "A Brief Analysis of Android Binder: Starting the ServiceManager Service".
A Brief Introduction to Adding a Service
A Binder service normally registers itself by inserting a key-value pair into ServiceManager's service list: the key is the service name (for CameraService it is media.camera) and the value is the real service pointer -- for CameraService that is a BnCameraService (server-side services usually start with Bn). A client likewise looks a service up in ServiceManager by name; the lookup yields a BpCameraService (what a client gets for a remote service starts with Bp), and the client then invokes methods through that Bp proxy. When a method is called, the method and its arguments are packed into a Parcel, then wrapped as binder_write_read and binder_transaction_data for the trip through the driver layer, and copied into the target process inside the kernel; the target process unpacks the structures on arrival and dispatches to the right method based on the command code.
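As a taste of what this looks like from the client side, here is a minimal sketch (illustrative only; error handling omitted, header paths as in the 6.0 tree):

#include <binder/IServiceManager.h>
#include <camera/ICameraService.h>
using namespace android;

void useCameraService() {
    sp<IServiceManager> sm = defaultServiceManager();
    // look the service up by name; this yields the IBinder proxy for the remote service
    sp<IBinder> binder = sm->getService(String16("media.camera"));
    // interface_cast wraps the proxy into a BpCameraService
    sp<ICameraService> cam = interface_cast<ICameraService>(binder);
    // every method call on cam is packed into a Parcel and shipped through the driver
}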
Analysis of the Service-Registration Process
Before the analysis, we need to get to know one more structure, binder_transaction_data:
struct binder_transaction_data {
    /* target: the destination of a BC_TRANSACTION/BR_TRANSACTION as it travels */
    union {
        /* handle (reference) to the target binder */
        __u32 handle;
        /* pointer to the target binder entity itself */
        binder_uintptr_t ptr;
    } target;
    /* extra data attached to the target */
    binder_uintptr_t cookie;
    /* the transaction (method) code */
    __u32 code;

    /* General information about the transaction. */
    __u32 flags;
    pid_t sender_pid;
    uid_t sender_euid;
    /* size of the buffer in the data union below */
    binder_size_t data_size;
    /* size of the offsets area in the data union below */
    binder_size_t offsets_size;

    /* If this transaction is inline, the data immediately
     * follows here; otherwise, it ends with a pointer to
     * the data buffer.
     */
    union {
        struct {
            /* the transferred payload: ordinary data plus binder entities or references */
            binder_uintptr_t buffer;
            /* offsets from buffer to flat_binder_object structs */
            binder_uintptr_t offsets;
        } ptr;
        __u8 buf[8];
    } data;
};
CameraService Adds Its Service to ServiceManager
Not having the very latest sources at hand, I analyze the Android 6.0.1 code here; CameraService is started from /frameworks/av/media/mediaserver/main_mediaserver.cpp:
int main(int argc __unused, char** argv)
{
sp<ProcessState> proc(ProcessState::self());
sp<IServiceManager> sm = defaultServiceManager();
ALOGI("ServiceManager: %p", sm.get());
AudioFlinger::instantiate();
MediaPlayerService::instantiate();
ResourceManagerService::instantiate();
CameraService::instantiate();
AudioPolicyService::instantiate();
SoundTriggerHwService::instantiate();
RadioService::instantiate();
registerExtensions();
ProcessState::self()->startThreadPool();
IPCThreadState::self()->joinThreadPool();
}
main() launches quite a few services; here we only care about CameraService and a handful of essential setup calls.
ProcessState: gaining the ability to talk to the binder driver
Let's look at the first line:
sp<ProcessState> proc(ProcessState::self());
This creates a local variable proc initialized from ProcessState::self(). Its type uses the sp template class, which you can think of as a C++ smart pointer -- that is essentially what it is. If you are curious, have a look at sp in StrongPointer.h.
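As a quick illustration of sp semantics, here is a minimal sketch (Foo is a made-up class, used only to show the reference counting):

#include <utils/RefBase.h>
#include <utils/StrongPointer.h>
using namespace android;

// hypothetical RefBase subclass, only for illustration
class Foo : public RefBase {
public:
    void hello() {}
};

int main() {
    sp<Foo> p = new Foo(); // incStrong() runs; the strong count becomes 1
    {
        sp<Foo> q = p;     // copying bumps the strong count to 2
        q->hello();
    }                      // q leaves scope; the count drops back to 1
    return 0;              // p is destroyed; the count hits 0 and Foo is deleted
}

Now let's read the ProcessState source: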
self: the singleton pattern
sp<ProcessState> ProcessState::self()
{
Mutex::Autolock _l(gProcessMutex);
if (gProcess != NULL) {
return gProcess;
}
gProcess = new ProcessState;
return gProcess;
}
The constructor
ProcessState::ProcessState()
// initializer list: open_driver() is called and its result initializes mDriverFD
: mDriverFD(open_driver())
, mVMStart(MAP_FAILED)
, mThreadCountLock(PTHREAD_MUTEX_INITIALIZER)
, mThreadCountDecrement(PTHREAD_COND_INITIALIZER)
, mExecutingThreadsCount(0)
, mMaxThreads(DEFAULT_MAX_BINDER_THREADS)
, mManagesContexts(false)
, mBinderContextCheckFunc(NULL)
, mBinderContextUserData(NULL)
, mThreadPoolStarted(false)
, mThreadPoolSeq(1)
{
if (mDriverFD >= 0) {
#if !defined(HAVE_WIN32_IPC)
// map the transaction memory
mVMStart = mmap(0, BINDER_VM_SIZE, PROT_READ, MAP_PRIVATE | MAP_NORESERVE, mDriverFD, 0);
if (mVMStart == MAP_FAILED) {
// *sigh*
ALOGE("Using /dev/binder failed: unable to mmap transaction memory.\n");
close(mDriverFD);
mDriverFD = -1;
}
#else
mDriverFD = -1;
#endif
}
LOG_ALWAYS_FATAL_IF(mDriverFD < 0, "Binder driver could not be opened. Terminating.");
}
open_driver: open the binder driver and gain the ability to interact with it
static int open_driver()
{
int fd = open("/dev/binder", O_RDWR);
if (fd >= 0) {
fcntl(fd, F_SETFD, FD_CLOEXEC);
int vers = 0;
// Query the binder driver's version: the ioctl lands in binder_ioctl in the
// driver layer, where a switch-case on the BINDER_VERSION command runs the query.
status_t result = ioctl(fd, BINDER_VERSION, &vers);
if (result == -1) {
ALOGE("Binder ioctl to obtain version failed: %s", strerror(errno));
close(fd);
fd = -1;
}
if (result != 0 || vers != BINDER_CURRENT_PROTOCOL_VERSION) {
ALOGE("Binder driver protocol does not match user space protocol!");
close(fd);
fd = -1;
}
size_t maxThreads = DEFAULT_MAX_BINDER_THREADS;
// Allow this process's binder entity to use at most DEFAULT_MAX_BINDER_THREADS
// threads; in the driver this simply assigns max_threads in the binder_proc struct.
result = ioctl(fd, BINDER_SET_MAX_THREADS, &maxThreads);
if (result == -1) {
ALOGE("Binder ioctl to set max threads failed: %s", strerror(errno));
}
} else {
ALOGW("Opening '/dev/binder' failed: %s\n", strerror(errno));
}
return fd;
}
ProcessState's job, then, is to open the binder driver (gaining the ability to talk to it), perform the mmap memory mapping, and set the maximum number of binder threads. On the driver side this step also initializes the binder_proc and binder_thread structures and sets up the mapped memory and its bookkeeping.
Obtaining ServiceManager in Order to Add the Service
What we really obtain here is a proxy for the ServiceManager object. Recall from the previous post, "A Brief Analysis of Android Binder: Starting the ServiceManager Service", that ServiceManager runs in a different process, and we are now in a new process trying to get hold of that ServiceManager object. Acquiring an object across processes involves Android's proxy framework, so let's get a feel for that framework first.
Back to the second line of main():
sp<IServiceManager> sm = defaultServiceManager();
Let's see what this defaultServiceManager does:
sp<IServiceManager> defaultServiceManager()
{
if (gDefaultServiceManager != NULL) return gDefaultServiceManager;
{
AutoMutex _l(gDefaultServiceManagerLock);
while (gDefaultServiceManager == NULL) {
// note the interface_cast method here -- it is important
gDefaultServiceManager = interface_cast<IServiceManager>(
// its argument is ProcessState's getContextObject function
ProcessState::self()->getContextObject(NULL));
if (gDefaultServiceManager == NULL)
sleep(1);
}
}
return gDefaultServiceManager;
}
sp<IBinder> ProcessState::getContextObject(const sp<IBinder>& /*caller*/)
{
return getStrongProxyForHandle(0);
}
ProcessState::handle_entry* ProcessState::lookupHandleLocked(int32_t handle)
{
const size_t N=mHandleToObject.size();
if (N <= (size_t)handle) {
handle_entry e;
e.binder = NULL;
e.refs = NULL;
// Don't assume that inserting a local variable into the mHandleToObject member
// will cause a memory error: mHandleToObject is a Vector, Vectors are backed by
// arrays, and insertAt() copies the value.
status_t err = mHandleToObject.insertAt(e, N, handle+1-N);
if (err < NO_ERROR) return NULL;
}
return &mHandleToObject.editItemAt(handle);
}
sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
sp<IBinder> result;
AutoMutex _l(mLock);
// In essence, look up the handle_entry whose handle is 0 in one of this class's
// Vector members. lookupHandleLocked is guaranteed to return an entry: if none
// is found, one is created.
handle_entry* e = lookupHandleLocked(handle);
// hence e != NULL is true
if (e != NULL) {
IBinder* b = e->binder;
if (b == NULL || !e->refs->attemptIncWeak(this)) {
if (handle == 0) {
Parcel data;
// Send a ping to the ServiceManager server to check it is alive; returns NO_ERROR on success.
status_t status = IPCThreadState::self()->transact(
0, IBinder::PING_TRANSACTION, data, NULL, 0);
if (status == DEAD_OBJECT)
return NULL;
}
// wrap the handle into a BpBinder
b = new BpBinder(handle);
e->binder = b;
if (b) e->refs = b->getWeakRefs();
result = b;
} else {
result.force_set(b);
e->refs->decWeak(this);
}
}
return result;
}
In one sentence: ProcessState::self()->getContextObject(NULL) returns a BpBinder with mHandle = 0.
That BpBinder must still be run through interface_cast<IServiceManager> before it can be assigned to sm. So what exactly is this interface_cast?
The source shows that interface_cast is defined in IInterface.h:
template<typename INTERFACE>
inline sp<INTERFACE> interface_cast(const sp<IBinder>& obj)
{
return INTERFACE::asInterface(obj);
}
In other words, it calls asInterface() on the template class -- here IServiceManager. But look inside that class and there is no asInterface method. Why not?
In short, a pair of macros generates the implementation automatically. Here is how it works:
The DECLARE_META_INTERFACE and IMPLEMENT_META_INTERFACE macros
The former adds a set of member and method declarations to the class, while the latter supplies the member initialization and the concrete method bodies. Here are the two macro definitions:
#define DECLARE_META_INTERFACE(INTERFACE) \
// declare a String16 string member named descriptor for the INTERFACE class
static const ::android::String16 descriptor; \
// declare the asInterface(sp<IBinder>) method
static ::android::sp<I##INTERFACE> asInterface( \
const ::android::sp<::android::IBinder>& obj); \
// same idea as above, for getInterfaceDescriptor()
virtual const ::android::String16& getInterfaceDescriptor() const; \
// ## is token pasting: this declares the I##INTERFACE constructor and destructor
I##INTERFACE(); \
virtual ~I##INTERFACE(); \
#define IMPLEMENT_META_INTERFACE(INTERFACE, NAME) \
// initialize descriptor with NAME
const ::android::String16 I##INTERFACE::descriptor(NAME); \
const ::android::String16& \
I##INTERFACE::getInterfaceDescriptor() const { \
return I##INTERFACE::descriptor; \
} \
// the concrete implementation of asInterface
::android::sp<I##INTERFACE> I##INTERFACE::asInterface( \
const ::android::sp<::android::IBinder>& obj) \
{ \
::android::sp<I##INTERFACE> intr; \
if (obj != NULL) { \
// queryLocalInterface returns the local (Bn) object when obj lives in this process; for a BpBinder it returns NULL
intr = static_cast<I##INTERFACE*>( \
obj->queryLocalInterface( \
I##INTERFACE::descriptor).get()); \
// if no local object was found, create one: Bp means Binder Proxy, i.e. a proxy class
if (intr == NULL) { \
intr = new Bp##INTERFACE(obj); \
} \
} \
return intr; \
}
Quite convenient -- two macros automatically generate all of that.
IServiceManager uses them like this:
DECLARE_META_INTERFACE(ServiceManager);
IMPLEMENT_META_INTERFACE(ServiceManager, "android.os.IServiceManager");
Summary: so what sm ultimately holds is a BpServiceManager(BpBinder(0)).
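To make that concrete, here is a sketch of what IMPLEMENT_META_INTERFACE(ServiceManager, "android.os.IServiceManager") expands to (the android:: qualifiers are trimmed for readability):

const String16 IServiceManager::descriptor("android.os.IServiceManager");
const String16& IServiceManager::getInterfaceDescriptor() const {
    return IServiceManager::descriptor;
}
sp<IServiceManager> IServiceManager::asInterface(const sp<IBinder>& obj) {
    sp<IServiceManager> intr;
    if (obj != NULL) {
        // obj is a BpBinder in our process, so queryLocalInterface() returns
        // NULL and we fall through to creating the proxy
        intr = static_cast<IServiceManager*>(
                obj->queryLocalInterface(IServiceManager::descriptor).get());
        if (intr == NULL) {
            intr = new BpServiceManager(obj); // wraps BpBinder(0)
        }
    }
    return intr;
}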
Adding the CameraService Service
The code at the start of this post launches all sorts of services; here we study only CameraService.
CameraService::instantiate() lives in its base class BinderService, a Binder helper class that publishes a service to ServiceManager in one step:
static void instantiate() { publish(); }
static status_t publish(bool allowIsolated = false) {
// defaultServiceManager() was analyzed above: it is a BpServiceManager(BpBinder(0))
sp<IServiceManager> sm(defaultServiceManager());
return sm->addService(
// getServiceName() returns "media.camera"
String16(SERVICE::getServiceName()),
// creates a new CameraService; allowIsolated is false
new SERVICE(), allowIsolated);
}
Next, let's look at BpServiceManager's addService:
virtual status_t addService(const String16& name, const sp<IBinder>& service,
bool allowIsolated)
{
Parcel data, reply;
// getInterfaceDescriptor() returns "android.os.IServiceManager"
data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
data.writeString16(name); // name = "media.camera"
data.writeStrongBinder(service); // service = CameraService
data.writeInt32(allowIsolated ? 1 : 0); // 0
// remote() returns the BpBinder: BpServiceManager inherits BpRefBase
// (via BpInterface), and its mRemote member holds the BpBinder
status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);
return err == NO_ERROR ? reply.readExceptionCode() : err;
}
// on to BpBinder to find the transact function
status_t BpBinder::transact(
uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
if (mAlive) {
// note: mHandle is 0, and code is the method code -- don't confuse it with cmd
status_t status = IPCThreadState::self()->transact(
mHandle, code, data, reply, flags);
if (status == DEAD_OBJECT) mAlive = 0;
return status;
}
return DEAD_OBJECT;
}
Packing the Data into a Parcel
The code above writes the outgoing data into a Parcel. The primitive types are straightforward, but writing CameraService itself into the Parcel is less obvious, so let's see how that is done. Before reading the Parcel code, meet one more structure, flat_binder_object, defined in binder.h under the kernel sources:
// The flattened representation of a binder entity, used to pass it between two processes.
struct binder_object_header {
    __u32 type;
};
struct flat_binder_object {
    /* the type of this binder object */
    struct binder_object_header hdr;
    __u32 flags;

    /* 8 bytes of data. */
    union {
        binder_uintptr_t binder; /* local object */
        __u32 handle;            /* remote object */
    };

    /* extra data associated with local object */
    binder_uintptr_t cookie;
};
Before going further we must see how Parcel flattens our service -- this is crucial for understanding how the data is read back later. If Parcel is unfamiliar, see the post "A Brief Analysis of Android Parcel". Here we analyze only writeStrongBinder:
status_t Parcel::writeStrongBinder(const sp<IBinder>& val)
{
return flatten_binder(ProcessState::self(), val, this);
}
status_t flatten_binder(const sp<ProcessState>& /*proc*/,
const sp<IBinder>& binder, Parcel* out)
{
flat_binder_object obj; // this structure was introduced above
obj.flags = 0x7f | FLAT_BINDER_FLAG_ACCEPTS_FDS;
if (binder != NULL) {
// localBinder() lives in Binder.cpp and returns this, i.e. the service itself
IBinder *local = binder->localBinder();
if (!local) {
BpBinder *proxy = binder->remoteBinder();
if (proxy == NULL) {
ALOGE("null proxy");
}
const int32_t handle = proxy ? proxy->handle() : 0;
obj.type = BINDER_TYPE_HANDLE;
obj.binder = 0; /* Don't pass uninitialized stack data to a remote process */
obj.handle = handle;
obj.cookie = 0;
} else {
// local is non-NULL, so execution lands here
obj.type = BINDER_TYPE_BINDER; // the type marks a binder entity
// binder holds its weak reference
obj.binder = reinterpret_cast<uintptr_t>(local->getWeakRefs());
// cookie carries the address of the binder entity itself
obj.cookie = reinterpret_cast<uintptr_t>(local);
}
} else {
obj.type = BINDER_TYPE_BINDER;
obj.binder = 0;
obj.cookie = 0;
}
return finish_flatten_binder(binder, obj, out);
}
inline static status_t finish_flatten_binder(
const sp<IBinder>& /*binder*/, const flat_binder_object& flat, Parcel* out)
{
return out->writeObject(flat, false);
}
status_t Parcel::writeObject(const flat_binder_object& val, bool nullMetaData)
{
const bool enoughData = (mDataPos+sizeof(val)) <= mDataCapacity;
const bool enoughObjects = mObjectsSize < mObjectsCapacity;
// there is enough data space and enough object capacity
if (enoughData && enoughObjects) {
restart_write:
// binary copy: the flat_binder_object is copied into the mData array
*reinterpret_cast<flat_binder_object*>(mData+mDataPos) = val;
// finishWrite advances mDataPos and updates the current size
return finishWrite(sizeof(flat_binder_object));
}
/***/
}
Summary:
So the address of the binder entity being passed is first written into a flat_binder_object structure, and that structure's raw bytes are then copied into the Parcel's mData member.
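Concretely, after the four write calls in addService the Parcel looks roughly like this (a sketch of the layout, not byte-exact):

// mData:    [ interface token "android.os.IServiceManager" ]
//           [ String16 "media.camera"                      ]
//           [ flat_binder_object { type = BINDER_TYPE_BINDER,
//               binder = weak ref, cookie = &CameraService } ]  <-- offset recorded
//           [ int32 allowIsolated = 0                      ]
// mObjects: [ offset of the flat_binder_object inside mData ]
//
// mObjects later becomes binder_transaction_data.data.ptr.offsets, telling the
// driver where to find binder objects inside the payload.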
Back to where we left off: into IPCThreadState to look at the transact function.
status_t IPCThreadState::transact(int32_t handle,
uint32_t code, const Parcel& data,
Parcel* reply, uint32_t flags)
{
status_t err = data.errorCheck();
if (err == NO_ERROR) {
// writeTransactionData wraps the request parameters into a binder_transaction_data
// and writes it into mOut, an IPCThreadState member of type Parcel
err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
}
// flags defaults to 0, so (flags & TF_ONE_WAY) == 0 holds
if ((flags & TF_ONE_WAY) == 0) {
// reply is non-NULL
if (reply) {
// wait for the response; waitForResponse communicates with the binder driver
err = waitForResponse(reply);
} else {
Parcel fakeReply;
err = waitForResponse(&fakeReply);
}
} else {
err = waitForResponse(NULL, NULL);
}
return err;
}
status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
binder_transaction_data tr; // binder_transaction_data was analyzed earlier in this post
tr.target.ptr = 0; /* Don't pass uninitialized stack data to a remote process */
tr.target.handle = handle; // the target handle is 0
tr.code = code; // code is ADD_SERVICE_TRANSACTION
tr.flags = binderFlags;
tr.cookie = 0; // cookie and the sender ids stay 0 for now
tr.sender_pid = 0;
tr.sender_euid = 0;
const status_t err = data.errorCheck();
// with no errors in the parameters, the first branch runs
if (err == NO_ERROR) {
tr.data_size = data.ipcDataSize(); // size of the request payload (the Parcel's mDataSize)
tr.data.ptr.buffer = data.ipcData(); // the request payload as a raw byte array
tr.offsets_size = data.ipcObjectsCount()*sizeof(binder_size_t);
tr.data.ptr.offsets = data.ipcObjects();
} else if (statusBuffer) {
tr.flags |= TF_STATUS_CODE;
*statusBuffer = err;
tr.data_size = sizeof(status_t);
tr.data.ptr.buffer = reinterpret_cast<uintptr_t>(statusBuffer);
tr.offsets_size = 0;
tr.data.ptr.offsets = 0;
} else {
return (mLastError = err);
}
// mOut and mIn are IPCThreadState members, both of type Parcel: mOut holds the
// outgoing data, mIn the data read back.
mOut.writeInt32(cmd); // write the BC_TRANSACTION command
mOut.write(&tr, sizeof(tr)); // then copy the binder_transaction_data into mOut's mData
return NO_ERROR;
}
status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
uint32_t cmd;
int32_t err;
while (1) {
if ((err=talkWithDriver()) < NO_ERROR) break;
err = mIn.errorCheck();
if (err < NO_ERROR) break;
if (mIn.dataAvail() == 0) continue;
cmd = (uint32_t)mIn.readInt32();
// a switch-case dispatches on the returned cmd command
// (omitted here)
}
}
Low-Level Communication with the Binder Driver
status_t IPCThreadState::talkWithDriver(bool doReceive)
{
// mProcess->mDriverFD is ProcessState's fd, opened earlier
if (mProcess->mDriverFD <= 0) {
return -EBADF;
}
// here the data gets wrapped once more, into a binder_write_read
binder_write_read bwr;
// needRead is true, because mIn was set up with read capacity
const bool needRead = mIn.dataPosition() >= mIn.dataSize();
// We don't want to write anything if we are still reading
// from data left in the input buffer and the caller
// has requested to read the next data.
const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;
// fill in the outgoing data and its length
bwr.write_size = outAvail;
bwr.write_buffer = (uintptr_t)mOut.data();
// This is what we'll read.
if (doReceive && needRead) {
// fill in the read-side buffer
bwr.read_size = mIn.dataCapacity();
bwr.read_buffer = (uintptr_t)mIn.data();
} else {
bwr.read_size = 0;
bwr.read_buffer = 0;
}
if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;
bwr.write_consumed = 0;
bwr.read_consumed = 0;
status_t err;
do {
#if defined(HAVE_ANDROID_OS)
// this drops into the binder driver layer and writes the data down
if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
err = NO_ERROR;
else
err = -errno;
#else
err = INVALID_OPERATION;
#endif
if (mProcess->mDriverFD <= 0) {
err = -EBADF;
}
} while (err == -EINTR);
// back from the driver successfully
if (err >= NO_ERROR) {
// clear the written data out of mOut
if (bwr.write_consumed > 0) {
if (bwr.write_consumed < mOut.dataSize())
mOut.remove(0, bwr.write_consumed);
else
mOut.setDataSize(0);
}
// the data that came back goes into mIn
if (bwr.read_consumed > 0) {
mIn.setDataSize(bwr.read_consumed);
mIn.setDataPosition(0);
}
return NO_ERROR;
}
return err;
}
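For reference, binder_write_read (from the kernel's uapi binder.h) is just a pair of buffer descriptors:

struct binder_write_read {
    binder_size_t write_size;       /* bytes available in write_buffer */
    binder_size_t write_consumed;   /* bytes consumed by the driver */
    binder_uintptr_t write_buffer;  /* here: mOut's data, [cmd][binder_transaction_data] */
    binder_size_t read_size;        /* bytes available in read_buffer */
    binder_size_t read_consumed;    /* bytes filled in by the driver */
    binder_uintptr_t read_buffer;   /* here: mIn's data */
};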
Into the Binder Driver: Writing the Data
As before, before reading the code let's get to know one more structure, binder_transaction, the format in which the binder driver moves data between two processes:
struct binder_transaction {
    int debug_id;
    struct binder_work work;
    /* the thread this transaction came from */
    struct binder_thread *from;
    struct binder_transaction *from_parent;
    /* the target process's binder_proc */
    struct binder_proc *to_proc;
    /* the target thread */
    struct binder_thread *to_thread;
    struct binder_transaction *to_parent;
    /* whether a reply is required */
    unsigned need_reply:1;
    /* unsigned is_dead:1; */ /* not used at the moment */
    /* binder_buffer manages a node of the mapped memory;
     * this structure was covered in the previous post */
    struct binder_buffer *buffer;
    /* the method code -- note: not the cmd */
    unsigned int code;
    /* the transaction flags */
    unsigned int flags;
    long priority;
    long saved_priority;
    kuid_t sender_euid;
};
Good -- with the data structures covered, we can start on the driver source. First a recap of what has happened so far, so nothing is forgotten during the code analysis:
(1) BpServiceManager(BpBinder(0)).addService("media.camera", new CameraService);
(2) addService first packs the arguments and the CameraService into a Parcel object. Note that CameraService is first flattened into a flat_binder_object structure, whose bytes are then copied into the Parcel; finally remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply) is called.
(3) remote() is really the BpBinder, and BpBinder's transact calls IPCThreadState's transact method.
(4) IPCThreadState has two Parcel-typed members, mIn and mOut, for data in and out; in transact, the incoming Parcel request is converted into a binder_transaction_data structure and then written into mOut.
(5) IPCThreadState's talkWithDriver then takes the data in mOut, wraps it as a binder_write_read structure, and finally hands that to the binder driver via ioctl.
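Putting the nesting together, the buffers handed to ioctl look like this (a sketch, not byte-exact):

// binder_write_read
//   write_buffer --> [ BC_TRANSACTION | binder_transaction_data ]
//                                        |-- data.ptr.buffer  --> Parcel mData
//                                        |     (interface token, "media.camera",
//                                        |      flat_binder_object, allowIsolated)
//                                        `-- data.ptr.offsets --> Parcel mObjects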
Now into the driver layer. To keep the code understandable, large amounts of irrelevant code are omitted.
static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
int ret;
struct binder_proc *proc = filp->private_data;
struct binder_thread *thread;
void __user *ubuf = (void __user *)arg;
thread = binder_get_thread(proc);
switch (cmd) {
case BINDER_WRITE_READ:
ret = binder_ioctl_write_read(filp, cmd, arg, thread);
break;
    }
}
static int binder_ioctl_write_read(struct file *filp,
unsigned int cmd, unsigned long arg,
struct binder_thread *thread)
{
int ret = 0;
struct binder_proc *proc = filp->private_data;
unsigned int size = _IOC_SIZE(cmd);
void __user *ubuf = (void __user *)arg;
struct binder_write_read bwr;
// copy the argument into kernel space
if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
ret = -EFAULT;
goto out;
}
if (bwr.write_size > 0) {
// The binder_write_read is split apart here: only the write half is passed on;
// write_buffer holds the [cmd | binder_transaction_data] bytes.
ret = binder_thread_write(proc, thread,
bwr.write_buffer,
bwr.write_size,
&bwr.write_consumed);
trace_binder_write_done(ret);
if (ret < 0) {
bwr.read_consumed = 0;
if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
ret = -EFAULT;
goto out;
}
}
}
static int binder_thread_write(struct binder_proc *proc,
struct binder_thread *thread,
binder_uintptr_t binder_buffer, size_t size,
binder_size_t *consumed)
{
uint32_t cmd;
// get the binder context; through it the binder driver's manager can be reached
struct binder_context *context = proc->context;
void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
void __user *ptr = buffer + *consumed;
void __user *end = buffer + size;
while (ptr < end && thread->return_error == BR_OK) {
// get_user reads the cmd word that precedes the binder_transaction_data;
// IPCThreadState set this cmd to BC_TRANSACTION
if (get_user(cmd, (uint32_t __user *)ptr))
return -EFAULT;
ptr += sizeof(uint32_t);
switch (cmd) {
case BC_TRANSACTION:
case BC_REPLY: {
struct binder_transaction_data tr;
// copy the user-space binder_transaction_data into the kernel-space one
if (copy_from_user(&tr, ptr, sizeof(tr)))
return -EFAULT;
ptr += sizeof(tr);
// The binder data exchange: inside, the request is wrapped once more, into a
// binder_transaction. Note cmd == BC_REPLY evaluates to false, since cmd equals
// BC_TRANSACTION.
binder_transaction(proc, thread, &tr,
cmd == BC_REPLY, 0);
break;
}
}
}
}
Here the reply parameter is false:
static void binder_transaction(struct binder_proc *proc,
struct binder_thread *thread,
struct binder_transaction_data *tr, int reply,
binder_size_t extra_buffers_size)
{
int ret;
struct binder_transaction *t;
binder_size_t *offp, *off_end, *off_start;
struct binder_proc *target_proc;
struct binder_thread *target_thread = NULL;
struct binder_node *target_node = NULL;
struct list_head *target_list;
wait_queue_head_t *target_wait;
struct binder_context *context = proc->context;
if (reply) {
}else{
// handle is 0, so the else branch runs
if (tr->target.handle) {
}else{
// binder_context_mgr_node is already the binder_node on the ServiceManager side
target_node = context->binder_context_mgr_node;
}
// target_proc is now ServiceManager's binder_proc
target_proc = target_node->proc;
}
// target_thread is NULL
if (target_thread) {
} else {
target_list = &target_proc->todo;
target_wait = &target_proc->wait;
}
// t is of type binder_transaction
t = kzalloc(sizeof(*t), GFP_KERNEL);
// reply is 0 and tr->flags is also 0, so this branch runs
if (!reply && !(tr->flags & TF_ONE_WAY)) {
// record the current thread as the transaction's origin
t->from = thread;
}
t->sender_euid = task_euid(proc->tsk); // the sending thread's euid
t->to_proc = target_proc; // the target binder_proc
t->to_thread = target_thread; // target_thread is NULL
t->code = tr->code; // code is ADD_SERVICE_TRANSACTION
t->flags = tr->flags; // flags is 0
t->priority = task_nice(current); // set the priority
// t->buffer is a binder_buffer: this essentially finds a free buffer node in the
// memory that the target proc (ServiceManager) has mapped
t->buffer = binder_alloc_buf(target_proc, tr->data_size,
tr->offsets_size, extra_buffers_size,
!reply && (t->flags & TF_ONE_WAY));
// after successful allocation, mark the buffer as in use
t->buffer->allow_user_free = 0;
t->buffer->debug_id = t->debug_id;
t->buffer->transaction = t;
t->buffer->target_node = target_node;
// mark where the offsets area begins within the binder_buffer's data
off_start = (binder_size_t *)(t->buffer->data +
ALIGN(tr->data_size, sizeof(void *)));
offp = off_start;
// Copy ptr.buffer of the sender's binder_transaction_data into the binder_buffer's
// data. What is copied is the original packed payload -- everything the Parcel
// held: "media.camera", the flattened CameraService, and so on.
if (copy_from_user(t->buffer->data, (const void __user *)(uintptr_t)
tr->data.ptr.buffer, tr->data_size)) {
binder_user_error("%d:%d got transaction with invalid data ptr\n",
proc->pid, thread->pid);
return_error = BR_FAILED_REPLY;
goto err_copy_data_failed;
}
// likewise copy the offsets describing the data layout
if (copy_from_user(offp, (const void __user *)(uintptr_t)
tr->data.ptr.offsets, tr->offsets_size)) {
binder_user_error("%d:%d got transaction with invalid offsets ptr\n",
proc->pid, thread->pid);
return_error = BR_FAILED_REPLY;
goto err_copy_data_failed;
}
for (; offp < off_end; offp++) {
hdr = (struct binder_object_header *)(t->buffer->data + *offp);
off_min = *offp + object_size;
switch (hdr->type) {
case BINDER_TYPE_BINDER:
case BINDER_TYPE_WEAK_BINDER: {
struct flat_binder_object *fp;
// pull the flat_binder_object back out of the buffer that was just copied;
// it stores the binder entity, and its type is BINDER_TYPE_BINDER
fp = to_flat_binder_object(hdr);
// fp now points into kernel space; binder_translate_binder (explained
// below) assigns and rewrites parts of this structure
ret = binder_translate_binder(fp, t, thread);
if (ret < 0) {
return_error = BR_FAILED_REPLY;
goto err_translate_failed;
}
} break;
}
if (reply) {
} else if (!(t->flags & TF_ONE_WAY)) {
BUG_ON(t->buffer->async_transaction != 0);
t->need_reply = 1;
t->from_parent = thread->transaction_stack;
thread->transaction_stack = t;
} else {
}
t->work.type = BINDER_WORK_TRANSACTION;
// add the transaction to the target process's todo list
list_add_tail(&t->work.entry, target_list);
tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
// queue the 'transaction complete' item on the sending thread's own todo list
list_add_tail(&tcomplete->entry, &thread->todo);
// target_wait was assigned earlier, when the target thread was checked
if (target_wait)
// wake up the other side's sleeping thread
wake_up_interruptible(target_wait);
return;
}
Rewriting the original flat_binder_object data:
static int binder_translate_binder(struct flat_binder_object *fp,
struct binder_transaction *t,
struct binder_thread *thread)
{
struct binder_node *node;
struct binder_ref *ref;
struct binder_proc *proc = thread->proc;
struct binder_proc *target_proc = t->to_proc;
// look up this binder's node in the current proc; this is the first pass, so none exists yet
node = binder_get_node(proc, fp->binder);
// hence this branch runs
if (!node) {
// create a binder_node, owned and managed by the current proc; cookie is
// the real address of the binder entity
node = binder_new_node(proc, fp->binder, fp->cookie);
if (!node)
return -ENOMEM;
}
..........
// the target proc, i.e. service_manager, must also hold a reference to this binder entity
ref = binder_get_ref_for_node(target_proc, node);
if (!ref)
return -EINVAL;
// Change fp's type to BINDER_TYPE_HANDLE (it started as BINDER_TYPE_BINDER):
// binder services are looked up and obtained through handles.
if (fp->hdr.type == BINDER_TYPE_BINDER)
fp->hdr.type = BINDER_TYPE_HANDLE;
else
fp->hdr.type = BINDER_TYPE_WEAK_HANDLE;
// so the binder pointer is cleared to 0 here
fp->binder = 0;
// and the handle gets its value
fp->handle = ref->desc;
// cookie used to hold the real reference to the binder entity; cleared to 0 here
fp->cookie = 0;
......
return 0;
}
Summary:
The code above crosses from user space into the binder driver's kernel space, and involves a long chain of data-structure conversions; keeping them in mind makes the control flow much easier to follow.
- First, in user space the data is wrapped layer by layer: CameraService entity --> flat_binder_object --> Parcel --> binder_transaction_data --> binder_write_read.
- Then, inside the driver, the layers are peeled off and repacked: binder_transaction_data is extracted from binder_write_read, and the BC_TRANSACTION command routes execution into the binder_transaction() function. That function finds the target (target_proc), packs all the transferred parameters into a binder_transaction structure, adds that structure's work entry to the target's list (target_proc->todo), queues the completion item binder_work *tcomplete on the sending thread's own todo list, and finally wakes up the target.
Note: along the way parts of the flat_binder_object were rewritten, the sending binder_proc now owns a binder_node for the CameraService service, and ServiceManager's binder_proc holds a reference to that node.
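The net effect on the flat_binder_object can be summarized like this:

// flat_binder_object, before vs. after binder_translate_binder:
//
//   field    sender side (BINDER_TYPE_BINDER)   after translation (BINDER_TYPE_HANDLE)
//   type     BINDER_TYPE_BINDER                 BINDER_TYPE_HANDLE
//   binder   weak ref of CameraService          0
//   handle   (unused)                           ref->desc, the newly issued handle
//   cookie   address of the CameraService       0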
The Target Process Reads the Data
Blocking, waking up, and reading the data successfully
As the previous post, "A Brief Analysis of Android Binder: Starting the ServiceManager Service", showed: once ServiceManager has become the binder manager it loops waiting for client requests, and with nothing pending it blocks inside binder_thread_read:
static int binder_thread_read(struct binder_proc *proc,
struct binder_thread *thread,
binder_uintptr_t binder_buffer, size_t size,
binder_size_t *consumed, int non_block)
{
void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
void __user *ptr = buffer + *consumed;
void __user *end = buffer + size;
int ret = 0;
int wait_for_proc_work;
/* ...... some logic omitted ...... */
if (wait_for_proc_work)
proc->ready_threads++;
if (wait_for_proc_work) {
} else {
if (non_block) {
if (!binder_has_thread_work(thread))
ret = -EAGAIN;
} else
// here the thread is woken up and continues on
ret = wait_event_freezable(thread->wait, binder_has_thread_work(thread));
}
while (1) {
uint32_t cmd;
struct binder_transaction_data tr;
struct binder_work *w;
struct binder_transaction *t = NULL;
// thread->todo is empty, so this branch is skipped
if (!list_empty(&thread->todo)) {
w = list_first_entry(&thread->todo, struct binder_work,
entry);
// proc->todo received a pending item from the CameraService process: take out the binder_work w
} else if (!list_empty(&proc->todo) && wait_for_proc_work) {
w = list_first_entry(&proc->todo, struct binder_work,
entry);
} else {
.....
}
if (end - ptr < sizeof(tr) + 4)
break;
// the camera process set the type to BINDER_WORK_TRANSACTION, so this case runs
switch (w->type) {
case BINDER_WORK_TRANSACTION: {
// recover the enclosing binder_transaction, which holds the real data
t = container_of(w, struct binder_transaction, work);
} break;
}
if (!t)
continue;
if (t->buffer->target_node) {
struct binder_node *target_node = t->buffer->target_node;
// tr is this function's local binder_transaction_data; copy into it
tr.target.ptr = target_node->ptr;
tr.cookie = target_node->cookie;
t->saved_priority = task_nice(current);
.....
cmd = BR_TRANSACTION;
} else {
.....
}
// the code is really the ADD_SERVICE method code
tr.code = t->code;
tr.flags = t->flags;
tr.sender_euid = from_kuid(current_user_ns(), t->sender_euid);
if (t->from) {
struct task_struct *sender = t->from->proc->tsk;
tr.sender_pid = task_tgid_nr_ns(sender,
task_active_pid_ns(current));
} else {
// ......
}
// now copy the key data describing the flat_binder_object payload:
// the size of the payload
tr.data_size = t->buffer->data_size;
// the size of the offsets area
tr.offsets_size = t->buffer->offsets_size;
// Where the payload lives. No data is copied into this local tr -- only a pointer
// is assigned. This is the essence of binder: the same physical pages are mapped
// into both the kernel and the server's user space. The payload was already copied
// into the server's kernel-space buffer, so the corresponding user-space memory
// holds it too; all that is needed is to add the kernel/user offset,
// user_buffer_offset.
tr.data.ptr.buffer = (binder_uintptr_t)(
(uintptr_t)t->buffer->data +
proc->user_buffer_offset);
tr.data.ptr.offsets = tr.data.ptr.buffer +
ALIGN(t->buffer->data_size,
sizeof(void *));
// put_user writes a cmd into ptr, which points into the read half of the
// binder_write_read; that cmd -- BR_TRANSACTION here -- is consumed later
if (put_user(cmd, (uint32_t __user *)ptr))
return -EFAULT;
ptr += sizeof(uint32_t);
// copy tr itself out to user space at ptr as well
if (copy_to_user(ptr, &tr, sizeof(tr)))
return -EFAULT;
ptr += sizeof(tr);
// remove this transaction's work entry from the list
list_del(&t->work.entry);
// the kernel-space buffer node on target_proc is done being used; allow it to be freed
t->buffer->allow_user_free = 1;
// then we leave this function and return to the caller
}
}
Summary:
binder_thread_read accomplishes the following:
1. It is woken up by the CameraService process and resumes execution.
2. It takes a pending binder_work item off the binder_proc's todo list.
3. From the binder_work's type it sees a BINDER_WORK_TRANSACTION transfer event and recovers the real payload, the binder_transaction.
4. It fills a local binder_transaction_data from the binder_transaction. The key point: buffer->data is handed over as a pointer, not copied, which is what delivers the flat_binder_object to user space -- and is exactly where binder's data-transfer efficiency comes from.
5. It deletes the todo item and copies the local structure out to ptr, the read half of the binder_write_read that was passed in.
The Data Is Passed Up to User Space
The previous step read the data successfully; control returns to binder_ioctl_write_read, which does nothing special with it:
static int binder_ioctl_write_read(struct file *filp,
unsigned int cmd, unsigned long arg,
struct binder_thread *thread)
{
int ret = 0;
struct binder_proc *proc = filp->private_data;
unsigned int size = _IOC_SIZE(cmd);
void __user *ubuf = (void __user *)arg;
struct binder_write_read bwr;
/* ...... some code omitted ...... */
if (bwr.read_size > 0) {
ret = binder_thread_read(proc, thread, bwr.read_buffer,
bwr.read_size,
&bwr.read_consumed,
filp->f_flags & O_NONBLOCK);
trace_binder_read_done(ret);
// the todo item has already been removed, so nothing is pending
if (!list_empty(&proc->todo))
wake_up_interruptible(&proc->wait);
// on a successful return ret is 0, so the block below does not run
if (ret < 0) {
if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
ret = -EFAULT;
goto out;
}
}
// Copy bwr back into ubuf. bwr holds the data binder_thread_read just produced,
// and ubuf is the argument handed down from binder_ioctl -- a binder_write_read
// in user space -- so this is effectively one more copy.
if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {
ret = -EFAULT;
goto out;
}
out:
return ret;
}
binder_ioctl_write_read does nothing special on the way back -- it just copies the parameters out again -- and binder_ioctl in turn returns straight into binder_loop. Let's step into binder_loop:
void binder_loop(struct binder_state *bs, binder_handler func)
{
int res;
struct binder_write_read bwr;
uint32_t readbuf[32];
bwr.write_size = 0;
bwr.write_consumed = 0;
bwr.write_buffer = 0;
readbuf[0] = BC_ENTER_LOOPER;
binder_write(bs, readbuf, sizeof(uint32_t));
for (;;) {
bwr.read_size = sizeof(readbuf);
bwr.read_consumed = 0;
bwr.read_buffer = (uintptr_t) readbuf;
res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
// back from the ioctl, the data just read is parsed by binder_parse;
// func is the callback that servicemanager passed in
res = binder_parse(bs, 0, (uintptr_t) readbuf, bwr.read_consumed, func);
}
}
int binder_parse(struct binder_state *bs, struct binder_io *bio,
uintptr_t ptr, size_t size, binder_handler func)
{
int r = 1;
uintptr_t end = ptr + (uintptr_t) size;
while (ptr < end) {
// Read the leading command word -- remember the cmd that binder_thread_read
// put_user()'d in? It gets consumed right here.
uint32_t cmd = *(uint32_t *) ptr;
ptr += sizeof(uint32_t);
switch(cmd) {
// we enter the BR_TRANSACTION case
case BR_TRANSACTION: {
// ptr really points at a binder_transaction_data here, so the cast is safe
struct binder_transaction_data *txn = (struct binder_transaction_data *) ptr;
if ((end - ptr) < sizeof(*txn)) {
ALOGE("parse: txn too small!\n");
return -1;
}
if (func) {
unsigned rdata[256/4];
struct binder_io msg;
struct binder_io reply;
int res;
// initialize the reply binder_io structure
bio_init(&reply, rdata, sizeof(rdata), 4);
// initialize msg from txn
bio_init_from_txn(&msg, txn);
// invoke the func callback
res = func(bs, txn, &msg, &reply);
// send the reply back
binder_send_reply(bs, &reply, txn->data.ptr.buffer, res);
}
ptr += sizeof(*txn);
break;
}
}
}
Initializing the binder_io structure:
void bio_init_from_txn(struct binder_io *bio, struct binder_transaction_data *txn)
{
// data and data0 hold ptr.buffer, i.e. the pointer to the payload with the flat_binder_object
bio->data = bio->data0 = (char *)(intptr_t)txn->data.ptr.buffer;
// offs and offs0 hold the offsets of the binder objects within the payload
bio->offs = bio->offs0 = (binder_size_t *)(intptr_t)txn->data.ptr.offsets;
// data_avail holds the actual size of the buffer
bio->data_avail = txn->data_size;
// offs_avail holds the number of offset entries
bio->offs_avail = txn->offsets_size / sizeof(size_t);
bio->flags = BIO_F_SHARED;
}
For reference, the binder_io structure used above:
struct binder_io
{
    char *data;            /* pointer for reading/writing data */
    binder_size_t *offs;   /* array of offsets */
    size_t data_avail;     /* bytes of valid data remaining */
    size_t offs_avail;     /* number of remaining offset entries */
    char *data0;           /* start of the data buffer */
    binder_size_t *offs0;  /* start of offsets buffer */
    uint32_t flags;
    uint32_t unused;
};
Summary:
The code above reads the binder_transaction_data out of the binder_write_read, uses it to initialize yet another structure, binder_io, and then invokes the callback svcmgr_handler with it.
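Before reading the callback, it helps to know bio_get_ref, the helper svcmgr_handler uses below to fish the handle out of the payload. Roughly as it appears in servicemanager's binder.c:

uint32_t bio_get_ref(struct binder_io *bio)
{
    struct flat_binder_object *obj;

    obj = _bio_get_obj(bio); /* locate the flat_binder_object via bio->offs */
    if (!obj)
        return 0;

    /* the driver rewrote the type to BINDER_TYPE_HANDLE on the way here */
    if (obj->type == BINDER_TYPE_HANDLE)
        return obj->handle;

    return 0;
}

Now the callback itself: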
int svcmgr_handler(struct binder_state *bs,
struct binder_transaction_data *txn,
struct binder_io *msg,
struct binder_io *reply)
{
struct svcinfo *si;
uint16_t *s;
size_t len;
uint32_t handle;
uint32_t strict_policy;
int allow_isolated;
// this reads back the "android.os.IServiceManager" token we wrote in
s = bio_get_string16(msg, &len);
if (s == NULL) {
return -1;
}
........
switch(txn->code) {
case SVC_MGR_ADD_SERVICE:
// this reads the "media.camera" string
s = bio_get_string16(msg, &len);
if (s == NULL) {
return -1;
}
// this fetches the unique handle
handle = bio_get_ref(msg);
// allow_isolated is the 0 we wrote in
allow_isolated = bio_get_uint32(msg) ? 1 : 0;
// do_add_service does the real work
if (do_add_service(bs, s, len, handle, txn->sender_euid,
allow_isolated, txn->sender_pid))
return -1;
break;
}
}
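do_add_service, which the callback invokes, manipulates svcinfo nodes; for reference, that structure (from service_manager.c) looks like this:

struct svcinfo
{
    struct svcinfo *next;      /* singly linked list of registered services */
    uint32_t handle;           /* the binder handle -- the heart of the entry */
    struct binder_death death; /* death-notification record */
    int allow_isolated;
    size_t len;
    uint16_t name[0];          /* the service name, e.g. "media.camera" */
};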
Adding the service to the list
int do_add_service(struct binder_state *bs,
const uint16_t *s, size_t len,
uint32_t handle, uid_t uid, int allow_isolated,
pid_t spid)
{
struct svcinfo *si;
// the handle must be valid, and the service name must not exceed 127 characters
if (!handle || (len == 0) || (len > 127))
return -1;
// check whether the caller has permission to register
if (!svc_can_register(s, len, spid)) {
ALOGE("add_service('%s',%x) uid=%d - PERMISSION DENIED\n",
str8(s, len), handle, uid);
return -1;
}
// check whether this service has been registered before
si = find_svc(s, len);
if (si) {
// if the service was already registered, the old entry is killed off
if (si->handle) {
ALOGE("add_service('%s',%x) uid=%d - ALREADY REGISTERED, OVERRIDE\n",
str8(s, len), handle, uid);
// tell its clients that the old binder has died
svcinfo_death(bs, si);
}
// and install the new handle for the service
si->handle = handle;
} else {
// create a brand-new service node
si = malloc(sizeof(*si) + (len + 1) * sizeof(uint16_t));
if (!si) {
ALOGE("add_service('%s',%x) uid=%d - OUT OF MEMORY\n",
str8(s, len), handle, uid);
return -1;
}
// fill in the service's fields
si->handle = handle;
si->len = len;
memcpy(si->name, s, (len + 1) * sizeof(uint16_t));
si->name[len] = '\0';
// configure the death callback
si->death.func = (void*) svcinfo_death;
si->death.ptr = si;
si->allow_isolated = allow_isolated;
si->next = svclist;
// link the service in at the head of the service list
svclist = si;
}
binder_acquire(bs, handle);
binder_link_to_death(bs, handle, &si->death);
return 0;
}
Summary:
The callback mainly checks whether the caller may register and whether the service already exists. If it was registered before, the old entry is declared dead and replaced with the new one; otherwise a new node is created, given the service's name and handle, and linked in at the head of the service list. From then on, a client asking binder for the service can use it just by knowing its name. One question I still have: the service list that servicemanager holds boils down to a numeric handle per service, and a lookup ultimately just hands that handle back -- why is a bare handle enough to use the service? I haven't figured that out yet; perhaps the answer will emerge as the analysis continues.
Adding the service is still not quite finished: remember the reply parameter? It has to travel back to the CameraService process!
Inside binder_parse there is the call binder_send_reply(bs, &reply, txn->data.ptr.buffer, res). The reply itself really just carries a 0; it is packed into a binder_transaction_data and a binder_write_read and shipped back to the CameraService process.
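For completeness, here is roughly what binder_send_reply looks like in servicemanager's binder.c (lightly trimmed); it frees the received kernel buffer and sends the BC_REPLY in a single write:

void binder_send_reply(struct binder_state *bs,
                       struct binder_io *reply,
                       binder_uintptr_t buffer_to_free,
                       int status)
{
    struct {
        uint32_t cmd_free;              /* BC_FREE_BUFFER: release the kernel buffer */
        binder_uintptr_t buffer;
        uint32_t cmd_reply;             /* BC_REPLY: carry the reply payload back */
        struct binder_transaction_data txn;
    } __attribute__((packed)) data;

    data.cmd_free = BC_FREE_BUFFER;
    data.buffer = buffer_to_free;
    data.cmd_reply = BC_REPLY;
    data.txn.target.ptr = 0;
    data.txn.cookie = 0;
    data.txn.code = 0;
    if (status) {
        /* error path omitted */
    } else {
        data.txn.flags = 0;
        data.txn.data_size = reply->data - reply->data0;
        data.txn.offsets_size = ((char*) reply->offs) - ((char*) reply->offs0);
        data.txn.data.ptr.buffer = (uintptr_t)reply->data0;
        data.txn.data.ptr.offsets = (uintptr_t)reply->offs0;
    }
    binder_write(bs, &data, sizeof(data)); /* one write-only BINDER_WRITE_READ ioctl */
}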
Conclusion
The hard part of adding a service to ServiceManager lies in the upper-layer class logic -- ProcessState, IPCThreadState and friends -- together with understanding what each layer of the service framework means: what the BpXxx, BnXxx and IXxxService classes stand for, and why the designers structured them this way. On top of that, inter-process communication involves a long series of data-structure conversions, each one another hand-off of the data; understanding how the data changes shape along the way is the key.
Data-structure transformations along the way
A brief record of how the data changes shape during the IPC:
CameraService side: CameraService entity --> flat_binder_object --> Parcel --> binder_transaction_data --> binder_write_read
ServiceManager side: binder_write_read --> binder_transaction_data --> binder_io --> svcinfo (name + handle on svclist)
With that, the analysis of CameraService's addService is complete. If you spot mistakes anywhere in the analysis, please point them out -- let's learn together!