Can a child process forked from an Android app process run the virtual machine?

Today (well, a few months ago now) someone asked me this question. The requirement was a bit unusual: they wanted to run some dynamically delivered code in a child process forked from the app process. Since the content of that code is unknown in advance and might crash, the idea was to execute it in a child process.

Rather than analyzing this from the system source code or from a security angle, let's just write the implementation, run it, and work from the crash output.

Anyone who has written dual-process anti-debugging knows that fork()ing the app process works, and that a JNI call which, say, builds and returns a string is also fine. But typical anti-debugging code is an infinite loop: it blocks the child process, which never executes anything further. If you drop the loop, as in the following code:

jstring myFork(JNIEnv* env, jobject obj) {
    pid_t pid = fork();
    const char *tmp;
    if (pid) {
        // Parent: fork() returns the child's pid here.
        tmp = "father";
        XLOGE("father pid=%d", getpid());
    } else {
        // Child: fork() returns 0 here.
        XLOGE("child pid=%d", getpid());
        tmp = "child";
    }

    return env->NewStringUTF(tmp);
}

This defines a JNI function that forks the current process and returns a string, triggered from a click handler (the snippet below assumes myFork is registered as the native method fork()):

bt.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        String fork = fork();
        Log.e("zhuo", fork);
    }
});
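
For context, here is a minimal sketch of the glue that could register myFork as the Java-visible fork(). The class name and the Java-side declaration public native String fork(); are assumptions for illustration, not from the original post:

#include <jni.h>

// Assumes myFork from the snippet above is in scope.
static const JNINativeMethod gMethods[] = {
    { "fork", "()Ljava/lang/String;", (void *)myFork },
};

jint JNI_OnLoad(JavaVM *vm, void * /*reserved*/) {
    JNIEnv *env = nullptr;
    if (vm->GetEnv((void **)&env, JNI_VERSION_1_6) != JNI_OK)
        return JNI_ERR;
    // Hypothetical activity class; substitute the real one.
    jclass clazz = env->FindClass("com/zhuotong/myunpack/MainActivity");
    if (clazz == nullptr || env->RegisterNatives(clazz, gMethods, 1) != 0)
        return JNI_ERR;
    return JNI_VERSION_1_6;
}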

Running it, the logs show that both processes executed; the child also returned its string, which got printed, but right after that comes the crash.

08-08 15:54:45.518 27638-27638/com.zhuotong.myunpack E/zhuo: father pid=27638
    father
08-08 15:54:45.518 28273-28273/? E/zhuo: child pid=28273
    child
08-08 15:54:45.518 28273-28273/? A/Looper: Thread identity changed from 0x276b00006bf6 to 0x276b00006e71 while dispatching to android.view.ViewRootImpl$ViewRootHandler android.view.View$UnsetPressedState@42702040 what=0
08-08 15:54:45.538 28273-28273/? A/libc: Fatal signal 11 (SIGSEGV) at 0x7507f028 (code=1), thread 28273 (com.tencent.mm)

This exception can be pinned down by tracing the call chain step by step; alternatively, anyone who has analyzed the Looper implementation should recognize that Looper log line:

http://androidxref.com/4.4_r1/xref/frameworks/base/core/java/android/os/Looper.java

    public static void loop() {
        final Looper me = myLooper();
        if (me == null) {
            throw new RuntimeException("No Looper; Looper.prepare() wasn't called on this thread.");
        }
        final MessageQueue queue = me.mQueue;

        // Make sure the identity of this thread is that of the local process,
        // and keep track of what that identity token actually is.
        Binder.clearCallingIdentity();
        final long ident = Binder.clearCallingIdentity();

        for (;;) {
            Message msg = queue.next(); // might block
            if (msg == null) {
                // No message indicates that the message queue is quitting.
                return;
            }

            // This must be in a local variable, in case a UI event sets the logger
            Printer logging = me.mLogging;
            if (logging != null) {
                logging.println(">>>>> Dispatching to " + msg.target + " " +
                        msg.callback + ": " + msg.what);
            }

            msg.target.dispatchMessage(msg);

            if (logging != null) {
                logging.println("<<<<< Finished to " + msg.target + " " + msg.callback);
            }

            // Make sure that during the course of dispatching the
            // identity of the thread wasn't corrupted.
            final long newIdent = Binder.clearCallingIdentity();
            if (ident != newIdent) {
                Log.wtf(TAG, "Thread identity changed from 0x"
                        + Long.toHexString(ident) + " to 0x"
                        + Long.toHexString(newIdent) + " while dispatching to "
                        + msg.target.getClass().getName() + " "
                        + msg.callback + " what=" + msg.what);
            }

            msg.recycle();
        }
    }

Because ident != newIdent, the check trips and Log.wtf is called: http://androidxref.com/4.4_r1/xref/frameworks/base/core/java/android/util/Log.java#255

    public static int wtf(String tag, String msg) {
        return wtf(LOG_ID_MAIN, tag, msg, null, false);
    }

    static int wtf(int logId, String tag, String msg, Throwable tr, boolean localStack) {
        TerribleFailure what = new TerribleFailure(msg, tr);
        int bytes = println_native(logId, ASSERT, tag, msg + '\n'
                + getStackTraceString(localStack ? what : tr));
        sWtfHandler.onTerribleFailure(tag, what);
        return bytes;
    }

    private static TerribleFailureHandler sWtfHandler = new TerribleFailureHandler() {
            public void onTerribleFailure(String tag, TerribleFailure what) {
                RuntimeInit.wtf(tag, what);
            }
        };


RuntimeInit.wtf then reports the failure to the ActivityManager, which may decide the process should die:

    public static void wtf(String tag, Throwable t) {
        try {
            if (ActivityManagerNative.getDefault().handleApplicationWtf(
                    mApplicationObject, tag, new ApplicationErrorReport.CrashInfo(t))) {
                // The Activity Manager has already written us off -- now exit.
                Process.killProcess(Process.myPid());
                System.exit(10);
            }
        } catch (Throwable t2) {
            Slog.e(TAG, "Error reporting WTF", t2);
            Slog.e(TAG, "Original WTF:", t);
        }
    }

That is the overall flow, and while the actual crash is still not here, the relationship is already visible: when a remote binder call enters an interface inside the process, clearCallingIdentity is called to clear the calling identity and save it as a long, and restoreCallingIdentity restores it once the work is done. http://androidxref.com/4.4_r1/xref/frameworks/native/libs/binder/IPCThreadState.cpp#375

int64_t IPCThreadState::clearCallingIdentity()
{
    int64_t token = ((int64_t)mCallingUid<<32) | mCallingPid;
    clearCaller();
    return token;
}

void IPCThreadState::clearCaller()
{
    mCallingPid = getpid();
    mCallingUid = getuid();
}
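
For completeness, the matching restore path in the same IPCThreadState.cpp unpacks that token the same way:

void IPCThreadState::restoreCallingIdentity(int64_t token)
{
    mCallingUid = (int)(token>>32);
    mCallingPid = (int)token;
}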

So the long return value is just the uid and pid packed together. The parent pid is 27638 = 0x6BF6 and the child pid is 28273 = 0x6E71; matching those against the log, it is precisely the pid change after fork() that trips this log line.

08-08 15:54:45.518 27638-27638/com.zhuotong.myunpack E/zhuo: father pid=27638
    father
08-08 15:54:45.518 28273-28273/? E/zhuo: child pid=28273
    child
08-08 15:54:45.518 28273-28273/? A/Looper: Thread identity changed from 0x276b00006bf6 to 0x276b00006e71 while dispatching to android.view.ViewRootImpl$ViewRootHandler android.view.View$UnsetPressedState@42702040 what=0
08-08 15:54:45.538 28273-28273/? A/libc: Fatal signal 11 (SIGSEGV) at 0x7507f028 (code=1), thread 28273 (com.tencent.mm)
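
To make the token layout concrete, here is a small standalone sketch (values taken from the log above) that unpacks the two identity tokens the way restoreCallingIdentity does:

#include <cstdint>
#include <cstdio>

int main() {
    // The two identity tokens from the Looper log line above.
    int64_t before = 0x276b00006bf6; // parent
    int64_t after  = 0x276b00006e71; // child

    // Same unpacking as IPCThreadState: high 32 bits = uid, low 32 bits = pid.
    printf("before: uid=%d pid=%d\n", (int)(before >> 32), (int)before); // uid=10091 pid=27638
    printf("after:  uid=%d pid=%d\n", (int)(after >> 32), (int)after);   // uid=10091 pid=28273
    // The uid is unchanged; only the pid differs, which is exactly why
    // Looper's ident != newIdent check fires in the forked child.
    return 0;
}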

The function that actually produces the crash is Parcel::readAligned: http://androidxref.com/4.4_r1/xref/frameworks/native/libs/binder/Parcel.cpp#911

template<class T>
status_t Parcel::readAligned(T *pArg) const {
    COMPILE_TIME_ASSERT_FUNCTION_SCOPE(PAD_SIZE(sizeof(T)) == sizeof(T));

    if ((mDataPos+sizeof(T)) <= mDataSize) {
        const void* data = mData+mDataPos;
        mDataPos += sizeof(T);
        *pArg =  *reinterpret_cast<const T*>(data);
        return NO_ERROR;
    } else {
        return NOT_ENOUGH_DATA;
    }
}

template<class T>
T Parcel::readAligned() const {
    T result;
    if (readAligned(&result) != NO_ERROR) {
        result = 0;
    }

    return result;
}

The fault happens when executing *pArg = *reinterpret_cast<const T*>(data); — the memory…

…(I found this sitting unpublished in my drafts folder, with the later part lost. I can't be bothered to chase through the code again and rewrite it, so straight to the conclusion: if the child does not block, it will certainly crash; if it blocks, and you have confirmed that the code it calls never triggers any binder activity, it can run.)
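
To illustrate that conclusion, here is a minimal sketch of the "safe" pattern (my own illustration, with a hypothetical run_untrusted_native_payload standing in for the dynamically delivered code): the child runs only self-contained native code, never returns into the framework, and leaves via _exit() so no VM or framework teardown runs:

#include <sys/wait.h>
#include <unistd.h>

// Hypothetical worker: must be pure native code that never calls back
// into the VM or anything that performs a binder transaction.
static int run_untrusted_native_payload() {
    // ... dynamically delivered code would run here ...
    return 0;
}

static int fork_and_run() {
    pid_t pid = fork();
    if (pid < 0)
        return -1;
    if (pid == 0) {
        // Child: do the work, then _exit() immediately. Returning into
        // JNI/the framework would sooner or later hit binder (e.g. the
        // Looper dispatch shown above) and crash.
        _exit(run_untrusted_native_payload());
    }
    // Parent: reap the child; a crash shows up here as a signal status.
    int status = 0;
    waitpid(pid, &status, 0);
    return status;
}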

Addendum: from memory, it goes roughly like this: after zygote finishes the fork, the new process executes onZygoteInit(), which starts the binder thread pool.

    virtual void onZygoteInit()
    {
        // Re-enable tracing now that we're no longer in Zygote.
        atrace_set_tracing_enabled(true);

        sp<ProcessState> proc = ProcessState::self();
        ALOGV("App process: starting thread pool.\n");
        proc->startThreadPool();
    }

ProcessState::self() constructs the process-wide singleton, and it is the constructor that actually opens and maps the binder driver:

ProcessState::ProcessState()
    : mDriverFD(open_driver())
    , mVMStart(MAP_FAILED)
    , mManagesContexts(false)
    , mBinderContextCheckFunc(NULL)
    , mBinderContextUserData(NULL)
    , mThreadPoolStarted(false)
    , mThreadPoolSeq(1)
{
    if (mDriverFD >= 0) {
        // XXX Ideally, there should be a specific define for whether we
        // have mmap (or whether we could possibly have the kernel module
        // availabla).
#if !defined(HAVE_WIN32_IPC)
        // mmap the binder, providing a chunk of virtual address space to receive transactions.
        mVMStart = mmap(0, BINDER_VM_SIZE, PROT_READ, MAP_PRIVATE | MAP_NORESERVE, mDriverFD, 0);
        if (mVMStart == MAP_FAILED) {
            // *sigh*
            ALOGE("Using /dev/binder failed: unable to mmap transaction memory.\n");
            close(mDriverFD);
            mDriverFD = -1;
        }
#else
        mDriverFD = -1;
#endif
    }

    LOG_ALWAYS_FATAL_IF(mDriverFD < 0, "Binder driver could not be opened.  Terminating.");
}

It opens the /dev/binder driver device, assigns the driver fd to the ProcessState member mDriverFD, and then maps address space against it with mmap(); the mmap() call traps into the kernel and triggers binder_mmap: kernel/drivers/android/binder.c

static int binder_mmap(struct file *filp, struct vm_area_struct *vma /* describes the user-space virtual address range, located in 0~3G */)
{
    int ret;
    /* Describes a contiguous kernel virtual address range; on a 32-bit
     * architecture it lies between 3G+896M+8M and 4G. */
    struct vm_struct *area;
    struct binder_proc *proc = filp->private_data;
    const char *failure_string;
    struct binder_buffer *buffer;

    if (proc->tsk != current)
        return -EINVAL;

    /* The requested size may not exceed 4M; clamp it to 4M if it does. */
    if ((vma->vm_end - vma->vm_start) > SZ_4M)
        vma->vm_end = vma->vm_start + SZ_4M;

    binder_debug(BINDER_DEBUG_OPEN_CLOSE,
             "binder_mmap: %d %lx-%lx (%ld K) vma %lx pagep %lx\n",
             proc->pid, vma->vm_start, vma->vm_end,
             (vma->vm_end - vma->vm_start) / SZ_1K, vma->vm_flags,
             (unsigned long)pgprot_val(vma->vm_page_prot));

    /* Check whether the vma carries forbidden flags; the vma describes a
     * contiguous range of user-space virtual memory. */
    if (vma->vm_flags & FORBIDDEN_MMAP_FLAGS) {
        ret = -EPERM;
        failure_string = "bad vm_flags";
        goto err_bad_arg;
    }

    /* Set VM_DONTCOPY, clear VM_MAYWRITE. */
    vma->vm_flags = (vma->vm_flags | VM_DONTCOPY) & ~VM_MAYWRITE;

    /* Take the binder_mmap_lock mutex: the proc structure is about to be
     * modified and multiple threads may race here. */
    mutex_lock(&binder_mmap_lock);

    /* A process gets only one mmap; to map again it must first unmap. */
    if (proc->buffer) {
        ret = -EBUSY;
        failure_string = "already mapped";
        goto err_already_mapped;
    }

    /* Reserve a contiguous kernel virtual address range of the same size as
     * the user-space range. Note that only the virtual address range is
     * allocated here, once; physical pages are requested and mapped on demand. */
    area = get_vm_area(vma->vm_end - vma->vm_start, VM_IOREMAP);
    if (area == NULL) {
        ret = -ENOMEM;
        failure_string = "get_vm_area";
        goto err_get_vm_area_failed;
    }

    /* Record the kernel virtual address in proc->buffer. */
    proc->buffer = area->addr;

    /* Record the offset between the user-space and kernel-space virtual
     * addresses, so the user-space address can always be computed from
     * buffer and user_buffer_offset. */
    proc->user_buffer_offset = vma->vm_start - (uintptr_t)proc->buffer;

    /* Release the mutex. */
    mutex_unlock(&binder_mmap_lock);
#ifdef CONFIG_CPU_CACHE_VIPT
    /* Is the CPU cache VIPT (Virtual Index Physical Tag), i.e. indexed by
     * virtual address but tagged by physical address? Not relevant here;
     * if interested, see https://blog.csdn.net/Q_AN1314/article/details/78980191 */

    if (cache_is_vipt_aliasing()) {
        while (CACHE_COLOUR((vma->vm_start ^ (uint32_t)proc->buffer))) {
            pr_info("binder_mmap: %d %lx-%lx maps %p bad alignment\n", proc->pid, vma->vm_start, vma->vm_end, proc->buffer);
            vma->vm_start += PAGE_SIZE;
        }
    }
#endif

    /* Allocate the array that holds the physical page addresses. */
    proc->pages = kzalloc(sizeof(proc->pages[0]) * ((vma->vm_end - vma->vm_start) / PAGE_SIZE), GFP_KERNEL);
    if (proc->pages == NULL) {
        ret = -ENOMEM;
        failure_string = "alloc page array";
        goto err_alloc_pages_failed;
    }

    /* Record the size of the virtual address range in proc->buffer_size. */
    proc->buffer_size = vma->vm_end - vma->vm_start;

    /* Install the vma operations: open, close, fault.
     * open  -> binder_vma_open:  just logs pid, the address range, size and
     *                            flags (vm_flags and vm_page_prot).
     * close -> binder_vma_close: sets proc->vma and vma_vm_mm to NULL and adds
     *                            proc to the binder_deferred_workqueue, which
     *                            the binder driver drains on a dedicated thread.
     * fault -> binder_vma_fault: simply returns VM_FAULT_SIGBUS. */
    vma->vm_ops = &binder_vm_ops;

    /* Stash a pointer to proc in the vma's vm_private_data field. */
    vma->vm_private_data = proc;

    /* Allocate one physical page up front and map it into both the kernel
     * range and the user-space range; see part 2 for details. */
    if (binder_update_page_range(proc, 1, proc->buffer, proc->buffer + PAGE_SIZE, vma)) {
        ret = -ENOMEM;
        failure_string = "alloc small buf";
        goto err_alloc_small_buf_failed;
    }

    /* With the physical page allocated and the mappings in place, the start
     * of the kernel range becomes the first binder_buffer. */
    buffer = proc->buffer;
    /* Link the kernel buffer into proc's buffers and free_buffers lists,
     * with the free flag set to 1. */
    INIT_LIST_HEAD(&proc->buffers);
    list_add(&buffer->entry, &proc->buffers);
    buffer->free = 1;
    binder_insert_free_buffer(proc, buffer);
    /* Async transactions may use only half of the address space. */
    proc->free_async_space = proc->buffer_size / 2;
    barrier();
    proc->files = get_files_struct(current);
    proc->vma = vma;
    proc->vma_vm_mm = vma->vm_mm; /* vma->vm_mm: the mm_struct backing this vma;
                                     it describes a process's virtual address
                                     space, one per process. */

    /*pr_info("binder_mmap: %d %lx-%lx maps %p\n",
           proc->pid, vma->vm_start, vma->vm_end, proc->buffer);*/

    return 0;

    /* Error handling. */
err_alloc_small_buf_failed:
    kfree(proc->pages);
    proc->pages = NULL;
err_alloc_pages_failed:
    mutex_lock(&binder_mmap_lock);
    vfree(proc->buffer);
    proc->buffer = NULL;
err_get_vm_area_failed:
err_already_mapped:
    mutex_unlock(&binder_mmap_lock);

err_bad_arg:
    pr_err("binder_mmap: %d %lx-%lx %s failed %d\n",
           proc->pid, vma->vm_start, vma->vm_end, failure_string, ret);
    return ret;
}

vma->vm_flags = (vma->vm_flags | VM_DONTCOPY) & ~VM_MAYWRITE; — because VM_DONTCOPY is set, the child process after fork() does not inherit this mapping at all…
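
The effect is easy to demonstrate from user space: madvise(MADV_DONTFORK) sets the same VM_DONTCOPY flag on a vma, so a standalone sketch (not binder itself) reproduces the child-side fault:

#include <cstdio>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main() {
    // An ordinary private mapping...
    char *p = (char *)mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED)
        return 1;
    p[0] = 42;
    // ...marked MADV_DONTFORK, which sets VM_DONTCOPY, as binder_mmap does.
    madvise(p, 4096, MADV_DONTFORK);

    pid_t pid = fork();
    if (pid == 0) {
        // Child: the vma was not copied, so this access faults with SIGSEGV,
        // just like Parcel reading from the vanished binder buffer.
        printf("child reads %d\n", p[0]);
        _exit(0);
    }
    int status = 0;
    waitpid(pid, &status, 0);
    if (WIFSIGNALED(status))
        printf("child died with signal %d\n", WTERMSIG(status)); // 11 = SIGSEGV
    return 0;
}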

That is roughly the cause. So perhaps, after forking the child, one could reopen /dev/binder, re-establish the binder connection to system_server, and swap out the binder that the Java layer is bound to? Might be worth a try; it feels like it could work.
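
As a starting point for that experiment, here is a hypothetical sketch of the native half, mirroring what open_driver() and the ProcessState constructor do. The uapi header path and the BINDER_VM_SIZE value are assumptions, and rebinding ProcessState and the Java layer on top of this fd is the unsolved part:

#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/android/binder.h> // uapi header; exact path varies by kernel/NDK

// Roughly ProcessState's BINDER_VM_SIZE: 1MB minus two guard pages (assumed).
static const size_t kBinderVmSize = (1 * 1024 * 1024) - (4096 * 2);

// To be called in the child right after fork().
static int reopen_binder_in_child() {
    int fd = open("/dev/binder", O_RDWR);
    if (fd < 0)
        return -1;

    // Sanity-check the protocol version, as open_driver() does.
    struct binder_version vers;
    if (ioctl(fd, BINDER_VERSION, &vers) < 0 ||
        vers.protocol_version != BINDER_CURRENT_PROTOCOL_VERSION) {
        close(fd);
        return -1;
    }

    // A fresh read-only transaction buffer for this process (this is the
    // call that lands in binder_mmap above).
    void *vm = mmap(NULL, kBinderVmSize, PROT_READ,
                    MAP_PRIVATE | MAP_NORESERVE, fd, 0);
    if (vm == MAP_FAILED) {
        close(fd);
        return -1;
    }
    return fd; // The hard part remains: pointing ProcessState/the Java layer at it.
}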

 
